Channel: Intel Developer Zone Articles

Signaling the Future with Intel® RealSense™ and GestureWorks Fusion*


Signs of the Times: Gesture Control Evolves

The not-so-humble mouse has been around commercially for over 30 years. While it may seem hard to imagine a world without it and the trusty keyboard, our style and models for interacting with computer systems are evolving. We no longer want to be tethered to our devices when collaborating in a shared workspace, for example, or simply when sitting on the couch watching a movie. We want the freedom to control our systems and applications using a more accommodating, intuitive mode of expression. Fortunately, consumer-grade personal computers with the necessary resources and capabilities are now widely available to realize this vision.

Gesture control has found a natural home in gaming, with Intel® RealSense™ technology at the forefront of these innovations. It was only a matter of time before developers looked for a way to integrate gesture control with a desktop metaphor, complementing the familiar keyboard and mouse with an advanced system of gesture and voice commands. Imagine the possibilities. You could start or stop a movie just by saying so, and pause and rewind with a simple set of gestures. Or you could manipulate a complex 3D computer aided design (CAD) object on a wall-mounted screen directly using your hands, passing the item to a colleague for their input.

That’s the vision of Ideum, a Corrales, New Mexico-based company that creates state-of-the-art user interaction systems. The company got its start over 15 years ago designing and implementing multi-touch tables, kiosks, and touch wall products. Its installations can be found in leading institutions such as Chicago’s Field Museum of Natural History, the Smithsonian National Museum of the American Indian, and the San Francisco Museum of Modern Art. To develop its latest initiative, GestureWorks Fusion*, Ideum turned to Intel RealSense technology.

With GestureWorks Fusion, Ideum aims to bring the convenience and simplicity of voice and gesture control to a range of desktop applications, beginning with streaming media. The challenges and opportunities Ideum encountered highlight issues that are likely to be common to developers looking to blaze a new trail in Human Computer Interaction (HCI).

This case study introduces GestureWorks Fusion and describes how the application uses advanced multi-modal input to create a powerful and intuitive system capable of interpreting voice and gesture commands. The study illustrates how the Ideum team used the Intel® RealSense™ SDK and highlights the innovative Cursor Mode capability that allows developers to quickly and easily interact with legacy applications designed for the keyboard and mouse. The article also outlines some of the challenges the designers and developers faced and provides an overview of how Ideum addressed the issues using a combination of Intel- and Ideum-developed technologies.

Introducing GestureWorks Fusion*

GestureWorks Fusion is an application that works with an Intel® RealSense™ camera (SR300) to capture multi-modal input, such as gestures and voice controls. The initial version of the software allows users to intuitively and naturally interact with streaming media web sites such as YouTube*. Using familiar graphical user interface (GUI) controls, users can play, pause, rewind, and scrub through media—all without touching a mouse, keyboard, or screen. Direct user feedback makes the system easy to use and understand.

GestureWorks Fusion* makes it fun and easy to enjoy streaming video web sites, such as YouTube*, using intuitive voice and gesture commands on systems equipped with an Intel® RealSense™ camera (SR300).

The Intel RealSense camera SR300 follows on from the Intel RealSense camera (F200), which was one of the world’s first and smallest integrated 3D depth and 2D camera modules. Like the Intel RealSense camera (F200), the Intel RealSense camera (SR300) features a 1080p HD camera with enhanced 3D- and 2D-imaging, and improvements in the effective usable range. Combined with a microphone, the camera is ideal for both head- and hand-tracking, as well as for facial recognition. “What’s really compelling is that the Intel RealSense camera (SR300) can do all this simultaneously, very quickly, and extremely reliably,” explained Paul Lacey, chief technical officer at Ideum and director of the team responsible for the development of GestureWorks.

GestureWorks Fusion builds on the technology and experience of two existing Ideum products: GestureWorks Core and GestureWorks Gameplay 3. GestureWorks Gameplay 3 is a Microsoft Windows* application that provides touch controls for popular PC games. Gamers can create their own touch controls, share them with others, or download controls created by the community.

GestureWorks Core, meanwhile, is a multi-modal interaction engine that performs full 3D head- and hand-motion gesture analysis, and offers multi-touch and voice interaction. The GestureWorks Core SDK features over 300 prebuilt gestures and supports the most common programming languages, including C++, C#, Java*, and Python*.

GestureWorks Fusion was initially designed to work with Google Chrome* and Microsoft Internet Explorer* browsers, running on Microsoft Windows 10. However, Ideum envisions GestureWorks Fusion working with any system equipped with an Intel RealSense camera. The company also plans to expand the system to work with a range of additional applications, such as games, productivity tools, and presentation software.

Facing the Challenges

Ideum faced a number of challenges in making GestureWorks Fusion intuitive and easy to use, especially for new users receiving minimal guidance. Based on its experiences developing multi-touch tables and touch wall systems for public institutions, the company knew that users can become frustrated when things don’t work as expected. This knowledge persuaded the designers to keep the set of possible input gestures as simple as possible, focusing on the most familiar behaviors.

GestureWorks* Fusion features a simple set of gestures that map directly to the application user interface, offering touchless access to popular existing applications.

Operating system and browser limitations presented the next set of challenges. Current web browsers, in particular, are not optimized for multi-modal input. This can make it difficult to identify the user’s focus, for instance, which is the location on the screen where the user intends to act. It also disrupts fluidity of movement between different segments of the interface, and even from one web site to another. At the same time, Ideum realized that it couldn’t simply abandon scrolling and clicking, which are deeply ingrained in the desktop metaphor and are at the core of practically all modern applications.

Further, an intuitive ability to engage and disengage gesture modality is critical for this type of interface. Unlike a person’s deeply intuitive sense of when a gesture is relevant, an application needs context and guidance. In GestureWorks Fusion, raising a hand into the camera’s view enables the gesture interface. Similarly, dropping a hand from view causes the gesture interface to disappear, much as hovering a mouse reveals additional information to users.

The nature of multi-modal input itself presented its own set of programming issues that influenced the way Ideum architected and implemented the software. For example, Ideum offers a voice command for every gesture, which can present potential conflicts. “Multi-modal input has to be carefully crafted to ensure success,” explained Lacey.

A factor that proved equally important was response time, which needed to be in line with standards already defined for mice and keyboards (otherwise, a huge burden is placed on the user to constantly correct interactions). This means that response times need to be less than 30 milliseconds, ideally approaching something closer to 6 milliseconds—a number that Lacey described as the “Holy Grail of Human Computer Interaction.”

Finally, Ideum faced the question of customization. For GestureWorks Fusion, the company chose to perform much of this implicitly, behind the scenes. “The system automatically adapts and makes changes, subtly improving the user experience as people use the product,” explained Lacey.

Using the Intel® RealSense™ SDK

Developers can access the Intel RealSense camera (SR300) features using the Intel RealSense SDK, which offers a standardized interface to a rich library of pattern detection and recognition algorithms. These cover several helpful functions, including face recognition, gesture and speech recognition, and text-to-speech processing.

The system is divided into a set of modules to help developers focus on different aspects of the interaction. Certain components, such as the SenseManager interface, coordinate common functions including hand- and face-tracking and operate by orchestrating a multi-modal pipeline controlling I/O and processing. Other elements, such as the Capture and Image interfaces, enable developers to keep track of camera operations and to access captured images. Similarly, interfaces such as HandModule, FaceModule, and AudioSource offer access to hand- and face-tracking, and to audio input, respectively.

The Intel RealSense SDK encourages seamless integration by supporting multiple coding styles and methodologies. It does this by providing wrappers for several popular languages, frameworks, and game engines—such as C++, C#, Unity*, Processing, and Java. The Intel RealSense SDK also offers limited support for browser applications using JavaScript*. The Intel RealSense SDK aims to lower the barrier to performing advanced HCI, allowing developers to shift their attention from coding pattern recognition algorithms to using the library to develop leading-edge experiences.

“Intel has done a great job in lowering the cost of development,” noted Lacey. “By shouldering much of the burden of guaranteeing inputs and performing gesture recognition, they have made the job a lot easier for developers, allowing them to take on new HCI projects with confidence.”

Crafting the Solution

Ideum adopted a number of innovative tactics when developing GestureWorks Fusion. Consider the issue of determining the user’s focus. Ideum approached the issue using an ingenious new feature called Cursor Mode, introduced in the Intel RealSense SDK 2016 R1 for Windows. Cursor Mode provides a fast and accurate way to track a single point that represents the general position of a hand. This enables the system to effortlessly support a small set of gestures such as clicking, opening and closing a hand, and circling in either direction. In effect, Cursor Mode solves the user-focus issue by having the system interpret gesture input much as it would the input from a mouse.

Using the ingenious Cursor Mode available in the Intel® RealSense™ SDK, developers can easily simulate common desktop actions such as clicking a mouse.

Using these gestures, users can accurately navigate or control an application “in-air” without having to touch a keyboard, mouse, or screen, with the same degree of confidence and precision. Cursor Mode helps in other ways as well. “One of the things we discovered is that not everyone gestures in exactly the same way,” said Lacey. Cursor Mode helps by mapping similar gestures to the same context, improving overall reliability.

Lacey also highlighted the ease with which Ideum was able to integrate Cursor Mode into existing prototypes, permitting developers to get new versions of GestureWorks Fusion up and running in a matter of hours, with just a few lines of code. For instance, GestureWorks uses Cursor Mode to get the cursor image coordinates and then synthesize mouse events, as shown in the following:

// Get the cursor image coordinates
PXCMPoint3DF32 position = HandModule.cursor.QueryCursorPointImage();

// Synthesize a mouse movement
mouse_event(
   0x0001,                                    // MOUSEEVENTF_MOVE
   (uint)(position.x - previousPosition.x),   // dx
   (uint)(position.y - previousPosition.y),   // dy
   0,                                         // dwData flags empty
   0                                          // dwExtraInfo flags empty
);

...


// Import for calls to unmanaged WIN32 API
[DllImport("user32.dll", CharSet = CharSet.Auto,
   CallingConvention = CallingConvention.StdCall)]
public static extern void mouse_event(uint dwFlags, uint dx, uint dy,
   uint cButtons, int dwExtraInfo);

Following this, GestureWorks is able to quickly determine which window has focus using the standard Windows API.

// Get the handle of the window with focus
IntPtr activeWindow = GetForegroundWindow();

// Create a WINDOWINFO structure object and fill it
WINDOWINFO info = new WINDOWINFO();
GetWindowInfo(activeWindow, ref info);

// Get the active window text to compare with pre-configured controllers
StringBuilder builder = new StringBuilder(256);
GetWindowText(activeWindow, builder, 256);

...

// Import for calls to unmanaged WIN32 API
[DllImport("user32.dll")]
static extern IntPtr GetForegroundWindow();

[DllImport("user32.dll")]
static extern int GetWindowText(IntPtr hWnd, StringBuilder builder,
   int count);

Cursor Mode tracks twice as fast as full hand-tracking, while using about half the power. “A great user experience is about generating expected results in a very predictable way,” explained Lacey. “When you have a very high level of gesture confidence, it enables you to focus and fine-tune other areas of the experience, lowering development costs and letting you do more with less resources.”

To support multi-modal input, GestureWorks leverages the Microsoft Speech Application Programming Interface (SAPI) using features that include partial hypothesis, which are unavailable in the Intel RealSense SDK. This allows a voice command to accompany every gesture, as shown in the following code segment:

ISpRecognizer* recognizer;
ISpRecoContext* context;

// Initialize SAPI and set the grammar

...

// Create the recognition context
recognizer->CreateRecoContext(&context);

// Create flags for the hypothesis and recognition events
ULONGLONG recognition_event = SPFEI(SPEI_RECOGNITION) |
   SPFEI(SPEI_HYPOTHESIS);

// Inform SAPI about the events to which we want to subscribe
context->SetInterest(recognition_event, recognition_event);

// Begin voice recognition
<recognition code …>

Ideum also found itself turning to parallelization to help determine a user’s intent, allowing interactions and feedback to occur near-simultaneously at rates of 60 frames per second. “The linchpin for keeping response times low has been our ability to effectively use multi-threading capabilities,” said Lacey. “That has given us the confidence to really push the envelope, to do things that we weren’t entirely sure were even possible while maintaining low levels of latency.”

Ideum also strove to more completely describe and formalize gesture-based interactions by developing an advanced XML configuration script called Gesture Markup Language (GML). Using GML, the company has created a comprehensive library of gestures that developers can use to solve HCI problems. This has helped Ideum manage and control the inherent complexity of gesture recognition, since the range of inputs from motion tracking and multi-touch can potentially result in thousands of variations.

“The impact of multi-modal interactions together with the Intel RealSense camera can be summed up in a single word: context,” noted Lacey. “It allows us to discern a new level of context that dramatically opens new realms for HCI.”

Moving Forward

Ideum plans to extend GestureWorks Fusion, adding support for additional applications—including productivity software, graphic packages, and computer-aided design using 3D motion gestures to manipulate virtual objects. Lacey can also imagine GestureWorks appearing in Intel RealSense technology-equipped tablets, home media systems, and possibly even in automobiles, as well as in conjunction with other technologies—applications that are far beyond traditional desktop and laptop devices.

More expansive and immersive environments are similarly on the horizon, including virtual, augmented, and mixed-reality systems. This also applies to Internet of Things (IoT) technology, where new models of interaction will encourage users to create their own unique spaces and blended experiences.

“Our work on GestureWorks Fusion has begun to uncover new ways to interact in novel environments,” Lacey explained. “But whatever the setting, you should simply be able to gesture or talk to a gadget, and make very deliberate selections, without having to operate the device like a traditional computer.”

Resources

Visit the Intel Developer Zone to get started with Intel RealSense technology.

Learn more about Ideum, developer of GestureWorks.

Download the Intel® RealSense™ SDK at https://software.intel.com/en-us/intel-realsense-sdk.


Hybrid Parallelism: Parallel Distributed Memory and Shared Memory Computing


There are two principal methods of parallel computing: distributed memory computing and shared memory computing. As more processor cores are dedicated to large clusters solving scientific and engineering problems, hybrid programming techniques combining the best of distributed and shared memory programs are becoming more popular. This trend has accelerated with the Intel® Xeon Phi™ product line; with as many as 244 virtual cores on a single device, it has encouraged many developers to transition to hybrid programming techniques.

This article begins by reviewing shared memory programming techniques, and then distributed memory MPI programming. Finally, it discusses hybrid shared memory/distributed memory programming and includes an example.

Shared Memory Computing

Large symmetric multi-processor systems offered more compute resources to solve large, computationally intense problems. Scientists and engineers threaded their software to solve problems faster than they could on single-processor systems. Multi-core processors made the advantages of threading ubiquitous. By applying two processors or two cores, a problem could theoretically be solved in half the time; with eight processors or cores, 1/8 the time is the maximum attainable. These opportunities merited changes to software to take advantage of the compute resources in multi-processor and multi-core systems. Threading is the most popular shared memory programming technique. In the threading model, all resources belong to the same process. Each thread has its own instruction pointer and stack, yet all threads share a common address space and system resources. The common shared memory access makes it easy for a developer to divide up work, tasks, and data. The disadvantage is that because all resources are available to all threads, data races become possible.

A data race occurs when two or more threads access the same memory address and at least one of the threads alters the value in memory. The result of the computation can change depending on whether the writing thread completes its write before or after the reading thread reads the value. Mutexes, barriers, and locks were designed to control execution flow, protect memory, and prevent data races. These mechanisms create problems of their own: a deadlock can halt any forward progress in the code, or contention for mutexes or locks can restrict execution flow enough to become a bottleneck. Nor are mutexes and locks a simple cure-all; if used incorrectly, data races can remain. Placing locks around a code segment rather than around the specific memory references is the most common error. In addition, tracking the design of the thread flow through all the mutexes, locks, and barriers becomes complicated and difficult for developers to maintain and understand, especially with multiple shared objects or dynamically linked libraries. Threading abstractions were designed to ease this programming and control burden.

Threading Abstractions

The most popular higher-level threading abstraction in the engineering and science communities is OpenMP*. The original OpenMP design was built on a fork-join parallel structure: work is forked across a thread pool in a parallel region, joined back together in a sequential region, and the fork-join cycle may repeat one or more times. This provides a common thread pool and avoids the overhead of creating and destroying new threads for every task.

In a thread pool, the threads are created once and remain until the program ends. DO loops (Fortran) and for loops (C/C++) are the most common usage model in OpenMP. When a loop is marked as a parallel region, the OpenMP runtime library automatically decomposes its iterations into tasks for each OpenMP thread to execute. An example is shown in Table 1. Threading abstractions such as OpenMP made threaded programming easier to track, understand, and maintain.

// The pragma marks the beginning of a parallel region and specifies that
// iterations of the following for loop are spread across OpenMP threads;
// the loop is the extent of the parallel region.

#pragma omp parallel for
for (i=0; i < n; i++)
{
  . . . ;
  computations to be completed ;
  . . . ;
}

! The directive pair defines the beginning and end of a
! parallel region. Iterations (I) are
! spread across OpenMP threads.

!$OMP PARALLEL DO
DO I=1,N
  . . .
  computation work is completed
  . . .
ENDDO
!$OMP END PARALLEL DO

Table 1: Example of OpenMP* parallel region in C and Fortran*

Some of the newer OpenMP constructs include parallel tasks, as well as the capability to offload work to a coprocessor or accelerator. Newer threading abstractions include Intel® Threading Building Blocks (Intel® TBB). While OpenMP works with both C/C++ and Fortran*, Intel TBB is based on C++ templates, a generic programming model, so it is limited to C++. Because Intel TBB contains a rich feature set, while still providing a clear abstraction that is easy to follow, it has become quite popular with C++ programmers.

Another threading abstraction, Intel® Cilk™ Plus, relies on tighter compiler integration and as such offers compelling performance for some cases. Intel Cilk Plus relies on a parallel fork-join construct. Nested parallelism is natural for Intel Cilk Plus and Intel TBB, compared to OpenMP where nested parallelism must be identified and declared by the developer. This makes Intel TBB and Intel Cilk Plus ideal to include in libraries. Intel now offers both OpenMP (traditional) and Intel TBB versions of its Intel® Math Kernel Library (Intel® MKL) (Intel TBB was introduced as an option in Intel MKL 11.3).

Popular Threading Abstractions and General Properties

OpenMP*

  • Structured fork-join parallel model; supports parallel for, tasks, sections
  • Supports offload
  • Nested parallelism must be explicitly identified
  • C, C++, and Fortran*

Intel® Threading Building Blocks

  • Supports parallel for, pipelines, general graphs and dependencies, tasks, optimized reader/writer locks, and more
  • Nested/recursive parallelism natural
  • Template-based; C++ only

Intel® Cilk™ Plus

  • Structured fork-join parallel model
  • Supports parallel for and fork commands
  • Nested/recursive parallelism natural
  • C/C++

Table 2: Popular threading models.

Distributed Memory Programming

Many applications sought greater computational power than was available in a single multi-processor system. This led to connecting several systems into clusters of computers that work together on a single computational workload. The systems in such a cluster were frequently linked with a proprietary “fabric,” and each platform retained its own private memory area within the cluster. A program had to explicitly define the data to be shared with another platform and deliver it to that platform over the fabric.

With the distributed computing approach, explicit message passing programs were written: a program explicitly packaged data and sent it to another system in the cluster, and the receiving system explicitly requested the data and brought it into its process. An early approach linking workstations, the parallel virtual machine, allowed programs to work across a network of workstations. Each vendor with a proprietary fabric also supported its own message passing library.

The community quickly consolidated around the Message Passing Interface (MPI). MPI required programmers to explicitly handle the decomposition of the problem across the cluster as well as make sure messages were sent and received in the proper order. This approach increased the size of scientific problems that could be solved: there was no longer a single-system memory limitation, and the combined computational power of many systems allowed problems previously too large, too complex, or too computationally demanding to be solved. A major addition to MPI was support for one-sided data movement, or remote direct memory access. This allows data movement in which the sending process and the receiving process do not need to synchronize over their calls to send and receive messages. The shared memory regions for data access (called windows) must be explicitly set up and defined.

MPI message passing does not lend itself well to incremental parallel programming. Once you begin distributing memory across remote systems, there is no reason to go through the work of bringing it back to a central platform for the next phase of the application, so MPI programs typically take more upfront work to get running than models such as OpenMP. However, there is no evidence that a performance-tuned, fully parallel program is written faster in OpenMP than in MPI. Some developers prefer to program first in MPI and then convert the MPI to threads if they want a threaded version. Developing in MPI forces a developer to really think about the parallelism, consider the parallel architecture or parallel design of the application, and make sure the parallel code is designed well from the beginning.

An advantage of MPI programming is that an application is no longer limited to the amount of memory, or the number of processors and cores, on one system. An additional advantage is that it requires developers to create a good decomposition of the data and the program. MPI does not really experience data races as threads do, but a developer can still write an MPI program that deadlocks, where all the MPI processes wait for an event or message that will never happen due to poorly planned dependencies. In MPI, the developer explicitly writes the send and receive messages. Figure 1 shows how this might be done.


Figure 1: MPI processes for send/receive data

Since MPI processes can execute as multiple processes on the same platform or spread across multiple platforms, it would seem a developer could just program in MPI and run it either way. But MPI library memory consumption is an important consideration. The MPI runtime library maintains buffers for controlling messages being sent and received. As the number of MPI processes increases, the MPI library must be prepared to send or receive from any other process in the application. Configuring for ever-increasing numbers of processes means the runtime library consumes more memory and must track more message destinations and receipt locations.

As cluster sizes increased, MPI developers recognized the memory-footprint growth of the MPI runtime library. A further complication is that the memory consumed by the MPI library is replicated on every system in the cluster, since this is not shared space. The referenced article shows how MPI libraries were improved to minimize unnecessary memory consumption [1].

The Intel® Xeon Phi™ coprocessor (code-named Knights Corner, or KNC), with over 60 cores, each with four-way symmetric multithreading (SMT), is designed to run about 240 active threads or processes. If a developer uses only MPI and runs an application across four cards, the result is 960 MPI processes. Based on the charts in the referenced paper [1], this will take about 50 MB of memory per MPI rank, or about 48 GB total (12 GB per card). Granted, those figures represent the top end of possible MPI memory consumption. So consider the case where the MPI library needs only 34 percent of the possible 50 MB, about 17 MB, and suppose the application uses only three of the four SMT threads per core on KNC: 720 MPI processes, each requiring 17 MB of memory, for a total of 12.2 GB, or about 3 GB per card. The top KNC cards have only 16 GB of memory. So even for problems well below the worst-case memory consumption, the MPI library consumes 3 GB out of 16 GB, over 1/8 of the memory, and that doesn’t include the memory consumed by the OS, the binary, and other services. This greatly reduces the amount of memory available for data for the problem being solved. This memory consumption of the MPI libraries is one driving factor behind the movement to hybrid programming. As Pavan Balaji wrote, “As the amount of memory per core decreases, applications will be increasingly motivated to use a shared-memory programming model on multicore nodes, while continuing to use MPI for communication among address spaces.”

The community is well aware of this potential memory consumption and the steps required to address it [3,4].

When combining threading and MPI into a single program, developers also need to be aware of the thread safety and thread awareness of the MPI library. The MPI standard specifies four models for threaded software:

  • MPI_THREAD_SINGLE. Code is sequential; only one thread is running (if all MPI calls are in sequential regions, this works with OpenMP).
  • MPI_THREAD_FUNNELED. Only one thread makes any calls into the MPI library. For OpenMP, this means that calls can be made inside a parallel region, but the OpenMP omp master directives/pragma should be used to ensure that the master thread makes all the MPI calls.
  • MPI_THREAD_SERIALIZED. All threads may make MPI library calls, but the developer placed controls so that only one thread is active in an MPI call at any given time.
  • MPI_THREAD_MULTIPLE. Any thread may make any MPI call at any time.

The hybrid code I have reviewed always uses the first model, MPI_THREAD_SINGLE: the MPI invocations occur in sequential regions of the code. The first three models are the easiest to use. MPI_THREAD_MULTIPLE requires more consideration, because messages are addressed to MPI processes, not to threads. If two threads send and receive messages to the same MPI process, the code must be designed so that the order of the MPI messages is correct regardless of which thread makes the send or receive call, in any order, from either end of the exchange.

There is extra overhead when MPI_THREAD_MULTIPLE is used for message passing, but data measured on Linux* clusters suggests these differences may be minimal [5]. With good design behind it, MPI_THREAD_MULTIPLE may work well. Note that the cited report is benchmark data, not application data.

Hybrid Example Code

The NAS Parallel Benchmarks* provide widely available reference implementations of sample parallel codes [7]. The Multi-Zone versions of the NAS Parallel Benchmarks include hybrid MPI/OpenMP reference implementations. The goal of the Multi-Zone port was to better reflect fluid dynamics codes in use, such as OpenFlow*. The code was modified so that a solution is completed in each zone independently, and then zone boundary values are exchanged across all zones; the local zone solution and boundary exchange are repeated at each time step. For this article, results were collected for the class C problem size on an Intel Xeon Phi coprocessor card with various configurations of MPI processes and OpenMP threads. The class C problem is small enough to run as one process on a single KNC core and large enough to run on 240 cores. Running one MPI rank with a varying number of threads showed fair scalability; holding to one thread per process and increasing the number of MPI processes showed better scalability. This is not a reflection of shared memory versus distributed memory programming; it is an artifact of the level at which the parallelism is applied.

In a report using an SMP/MLP model (yet another parallel programming model, using a multi-process shared memory region) comparing MPI with one OpenMP thread to SMP/MLP with one OpenMP thread, the SMP/MLP code gave slightly higher performance than MPI for SP-MZ class C problems 6. Thus there is no inherent performance superiority of shared memory versus distributed memory programming styles. For the SP-MZ hybrid MPI/OpenMP decomposition running on an Intel Xeon Phi coprocessor, the results showed pure MPI scaling better than the hybrid until about 100 MPI processes. Beyond that point, the hybrid MPI/OpenMP decompositions showed better results.

If the graph (see Figure 2) were to begin from 1 MPI/1 OpenMP thread, the time to run sequentially creates such a large range that the differences for the number of threads/processes are indistinguishable. For this reason, the displayed chart begins at 50 total threads. The total number of threads is equal to the number of MPI processes multiplied by the number of OpenMP threads per process.


Figure 2: Performance depending on ratio and number of MPI processes to OpenMP* threads.

The best performance was achieved with seven OpenMP threads and 32 MPI processes (224 threads total). The SP-MZ code allows for lots of independent computation with well-defined message interfaces. Applications with more message passing may have a different optimum ratio (1:2, 1:4, 1:7, 1:8, and so on). Developers should examine thread workload balance as well as MPI workload balance, collecting data and measuring to determine the best ratio. The important takeaway from the chart is that at some point the code stopped running faster simply by increasing MPI processes, yet performance continued to improve using a hybrid model.

In this NAS SP-MZ code, the parallelism is at two different levels. That is, the OpenMP parallelism is nested below the MPI parallelism. An alternative would be to put the OpenMP parallelism at the same level as the MPI parallelism. It is anticipated that there exist cases for which one design or the other will be superior. There may also be cases where some threads operate at the same level as the MPI processes, handling tasks, while each of these higher-level workers employs multiple worker threads at a lower level.

Ideally there would be a set of rules and data measurements to guide the developer through hybrid programming. The development environment is not at that level yet. Developers are advised to exercise good judgment and good design. Sometimes OpenMP was added to codes for parallelism in an opportunistic fashion rather than following a good design pattern (that is, OpenMP pragmas were placed wherever DO or for loops were found rather than by considering how to best parallelize the code). Good parallel design will be required for good scaling, and it can be expressed in MPI or in a threading method. There are many tools to assist the developer 8,9. Intel® Advisor XE provides a means to recognize, design, and model parallelism. Intel® VTune™ Amplifier XE measures the performance and behavior of OpenMP parallel regions. Intel® Trace Analyzer displays MPI performance analysis information to help developers understand the behavior of MPI code. TAU Performance System* and ParaProf* also collect and display performance data and events for OpenMP and MPI. All of these tools can help developers understand the performance of their code to improve design and performance.

Summary

As developers continue to seek performance they are encouraged to explore hybrid programming techniques, which offer the opportunity to reduce resource consumption, especially memory. Multiple levels of parallelism like those shown in SP-MZ are also likely to produce better performance. The ratio of MPI processes to threads may be application-dependent and will require testing and evaluation. The software should be written in a way that allows the number of threads to be controlled without recompiling (threading abstractions do this); OpenMP and Intel TBB are the most commonly used threading abstractions. Developers should adopt hybrid parallel programming models and move forward to continue delivering more performance.

References

1 Durnov, D. and Steyer, M. Intel MPI Memory Consumption.
2 Balaji, P. et al. MPI on a Million Processors.
3 Goodell, D., Gropp, W., Zhao, X., and Thakur, R. Scalable Memory Use in MPI: A Case Study with MPICH2.
4 Thakur, R. MPI at Exascale.
5 Thakur, R. and Gropp, W. Test Suite for Evaluating Performance of MPI Implementations That Support MPI THREAD MULTIPLE.
6 Jin, H. and Van der Wijngart, R. Performance Characteristics of the Multi-Zone NAS Parallel Benchmarks.
7 NAS Parallel Benchmarks
8 Intel VTune Amplifier XE, Intel Advisor XE, Intel MKL, and Intel Trace Analyzer are all available in the Intel® Parallel Studio XE Cluster edition
9 TAU Performance System and ParaProf are available from tau.uoregon.edu

 

Tips & Tricks to Heterogenous Programming with OpenCL* SDK & Intel® Media SDK - June 16 Webinar


Register Now    10 a.m., Pacific time

Intel® Processor Graphics contain two types of media accelerators: fixed function codec/frame processing and execution units (EUs), used for general purpose compute. In this 1-hour webinar on June 16, learn how to more fully utilize these media accelerators by combining the Intel® Media SDK and Intel® SDK for OpenCL™ Applications for many tasks, including:

  • Applying video effects and filters
  • Accelerating computer vision pipelines
  • Improving encode/transcode quality

These two tools, both part of Intel® Media Server Studio, are better when used together. With just a few tips, tricks, and sharing APIs you can unlock the full heterogeneous potential of your hardware to create high-performance custom pipelines. Then differentiate your media applications and solutions by combining fixed-function operations with your own algorithms, achieving performance beyond the standard Media SDK capabilities, the element that makes your products competitive and unique.

In this session you will learn:

  • How big performance boosts are possible with Intel graphics processors (GPUs)
  • How to build media/graphics processing pipelines containing standard components, and customize with your algorithms and solutions
  • A short list of steps to share video surfaces efficiently between the Media SDK and OpenCL
  • How to combine Intel Media SDK and OpenCL to do many useful things utilizing Gen Graphics' rapidly increasing capabilities
  • And more

Sign up today

Webinar Speakers

  • Jeff McAllister - Media Software Technical Consulting Engineer
  • Robert Ioffe - Technical Consulting Engineer & OpenCL* Expert


Intel® RealSense™ Camera R200 Enhanced Photography Code Sample


Download Code Samples ZIP 1.07MB


Introduction

In this document and sample application, I will show you how to use the Intel® RealSense™ camera (R200) and the Enhanced Photography functionality that is part of the Intel® RealSense™ SDK.

Requirements

Hardware requirements:

  • 4th generation Intel® Core™ processors based on the Intel® microarchitecture code-named Haswell
  • 8 GB free hard disk space
  • Intel RealSense camera (R200)
  • USB* 3 port for camera connection

Software requirements:

  • Microsoft Windows* 8.1 or higher OS 64-bit
  • Microsoft Visual Studio* 2010-2015 with the latest service pack
  • Microsoft .NET* 4.0 Framework for C# development
  • Intel RealSense SDK, which can be downloaded here

Note: This particular sample project was created in Visual Studio 2015 using the latest .NET release.

Project Structure

In this sample application, I have separated the Intel RealSense SDK functionality from the GUI layer code to make it easier for a developer to focus on the R200 Enhanced Photography functionality. To do this, I created the following C# wrapper classes: RSStreaming, RSEnhancedPhotography, RSPaintUtil, RSPoints, and RSUtility, each described later in this document.

Also, for simplicity I created an individual WinForm for each type of Enhanced Photography function. While this creates a little more duplicated code, it keeps each form focused on demonstrating a single Enhanced Photography capability.

While there is more to the project structure than what I have mentioned here, I will go into full detail on all the source code files and how they work throughout this document.

I create my own events to keep the Intel RealSense SDK functionality as loosely coupled from the GUI source code as possible. I think this is a cleaner solution than passing an entire form into a class so that the class can access properties on the form.
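To illustrate the idea, here is a stripped-down sketch of the pattern (the names Streamer and SampleArg are hypothetical stand-ins for the sample's RSStreaming and RSSampleArg, and a string stands in for the camera sample):

```csharp
using System;

// Custom EventArgs subclass that carries data from the SDK layer to the GUI.
public class SampleArg : EventArgs
{
    public string Data { get; }
    public SampleArg(string data) { Data = data; }
}

public class Streamer
{
    // The form subscribes here instead of being passed into the class.
    public event EventHandler<SampleArg> OnNewSample;

    public void PublishFrame(string data)
    {
        // Raise the event; any subscribed form receives the new frame.
        OnNewSample?.Invoke(this, new SampleArg(data));
    }
}

public static class Demo
{
    public static void Main()
    {
        var streamer = new Streamer();
        streamer.OnNewSample += (sender, arg) =>
            Console.WriteLine("GUI received: " + arg.Data);
        streamer.PublishFrame("frame 1");  // prints "GUI received: frame 1"
    }
}
```

The streaming class never references the form type at all; the form simply subscribes to the event, which is the loose coupling described above.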

Also note that this sample application does not try to enforce proper software engineering techniques. There is little if any runtime checking, and there are no try/catch blocks. Providing a simple, clean example project to learn from keeps the code as uncluttered as possible without introducing extra distractions.

While not elegant, the forms in this sample application serve the purpose of demonstrating how to use Enhanced Photography.

Visual Studio Project Structure

The Visual Studio 2015 solution is organized as follows. The various folders contain:

  • Forms. The various WinForms that demonstrate a different Enhanced Photography functionality.
  • Source. The source code that goes along with the project.
  • Source\CustomEventArgs. Contains classes that have been derived from the native EventArgs class.
  • MainForm.CS. The main form of the application.

Simple high-level sequence diagram

When you run the sample application, FormMain is displayed. On this form you can start streaming by clicking the Streaming button. When this button is clicked, the application kicks off the streaming using functionality wrapped up in the RSStreaming class.

The RSStreaming class continually updates FormMain by raising its OnNewStreamingSample event. This happens for every frame that comes from the camera.

You can stop streaming by clicking the Stop Streaming button. When you do, the streaming simply stops, and there are no other options available other than to start streaming again. However, if you click the Photo button, the image data is saved to disk and streaming stops. Once streaming stops and the photo has been saved to disk, the Photo Enhancement buttons become active, enabling you to select from among the various Enhanced Photography dialogs that demonstrate the capabilities included in this sample application.

When one of the Enhanced Photo dialogs is selected, the sample image that was saved to disk is loaded and utilized. I will explain this in more detail later in this document.

Code Walkthrough

The following sections will walk you through the entire application, focusing on the flow of the application and descriptions of the various classes.

High-level overview of the source code files and form

Forms

FormDepthEnhance. Demonstrates how to use two different depth quality settings to display depth data. The user can choose either Low Quality or High Quality.

FormDepthPasteOnPlane. This form demonstrates how to use the paste on plane functionality to import an external image by clicking two points on a flat surface.

FormDepthRefocus. This form shows how to click a point and have the focus point on an image brought to light by blurring the rest of the image. You can click a spot on the image which then becomes the focus point. You can adjust the simulated aperture of the camera lens by moving the slider.

FormDepthResize. Shows the resize functionality that can upsize the depth image to be the same size as the RGB image.

FormMeasure. Demonstrates how to use the Enhanced Photography measure capabilities to obtain distance, precision, and confidence values.

FormMain. This is the main form for the application. It allows the user to start the Intel RealSense camera streaming, stop streaming, and capture a snapshot. Once a snapshot has been taken, the user can perform various Enhanced Photography functions on the image.

Source Code

RSEnhancedPhotography.CS. This is a wrapper class that encapsulates the Intel RealSense camera Enhanced Photography functionality. Its purpose is to remove as much of the Intel RealSense camera functionality from the GUI layer as possible. It uses custom events to publish data back to the client application. It uses the RSEnhancedImageArg class to contain the new image that gets displayed.

RSPaintUtil.CS. This is a static utility class that assists in drawing mouse click points and lines onto the C# PictureBox controls.

RSPoints.CS. This helper class encapsulates PXCMPointI32 point objects and creates functionality to store points and validate point data as well as report point data to be displayed on the GUI.

RSStreaming.CS. This is a wrapper class that encapsulates Intel RealSense camera streaming. It streams data and publishes an event back out to the client. The event uses the RSSampleArg class to store data to be used by the client.

RSUtility.CS. This is a static class that contains, as the name implies, utility source code; none of the functionality really belongs in any particular class.

Source\CustomEventArgs Code

RSEnhancedImageArg. Extends EventArgs by containing a PXCMImage object. This object will contain an image that has been manipulated by the Intel RealSense SDK Enhanced Photography functionality. This image is to be used to display on the individual WinForms PictureBox control.

RSMeasureArg.CS. Extends EventArgs by containing measure data returned from the Intel RealSense SDK Enhanced Photography functionality. This data is used on the WinForm “FormMeasure” to report measurement information back to the user.

RSSampleArg.CS. Extends EventArgs by containing a PXCMCapture.Sample object. This object contains the latest frame captured by the camera and is used for streaming data and displaying it on the WinForm FormMain.

In-Depth Understanding

Now I’m going to describe the underlying classes that support the forms. I think it’s best to learn how the underlying code base works before focusing on the forms. I’ll start with the first class, RSStreaming.

I won’t cover details such as the getter and setter functions, which are self-explanatory. Nor will I cover any function that is clearly obvious for other reasons.

RSStreaming

RSStreaming is a wrapper class around the Intel RealSense SDK streaming capabilities. This class isn’t overly complex. The intent is to show a simple example of how to stream data from the Intel RealSense camera. It has the ability to both stream and take an Enhanced Photo image and send both back to the client via events.

 

public event EventHandler<RSSampleArg> OnNewStreamingSample;

As mentioned previously in this document, I use events to send data back to the client apps (forms). RSStreaming sends data back to the client (in this case, FormMain) via the event OnNewStreamingSample. It takes one parameter, RSSampleArg, which contains the newest sample from the camera.

 

public bool Initialized

A simple getter flag that indicates whether the class has been initialized.

 

public bool IsStreaming

A simple getter flag that indicates whether the class is currently streaming data.

 

public void StartStreaming( )

A public function that a client uses to start streaming from the camera. It ensures that the class has been properly initialized and, if so, calls the InitCamera() function to initialize the camera.

One thing I would like to mention is that I’m using a capability that does not get a lot of attention. As you have probably seen in a lot of streaming samples, the sample shows a while loop around the AcquireFrame function with some type of boolean-flag mechanism to cancel streaming. This sample uses a different approach.

My approach uses the PXCMSenseManager’s StreamFrames function, which causes the SenseManager to kick off its own internal thread and send data back via event handling. This is done by assigning the PXCMSenseManager.Handler object to a function. More on that later in the InitCamera( ) function.

 

private void InitCamera( )

InitCamera is a private function that initializes the camera and streaming. First, we create the PXCMSenseManager and PXCMSession objects. Next, we need the device information (camera), which is obtained using the RSUtility GetDeviceByType() static function, passing in the session and the type of camera we want.

Then I create two PXCMVideoModule.DataDesc objects, one for color streaming and the other for depth streaming. From there I configure each stream. After the streams have been configured, I prompt the PXCMSenseManager to enable the streams.

As mentioned in the function StartStreaming(), I’m using an event-based approach to streaming and gathering data, which is done by creating and initializing a PXCMSenseManager.Handler event handler object and assigning it to the OnNewSample function. Every time the camera captures a new frame, the OnNewSample event handler is called.

Once this has all been accomplished, I initialize the SenseManager, passing in the handler object and telling it to use this object and its event handler.

 

private pxcmStatus OnNewSample( int mid, PXCMCapture.Sample sample )

OnNewSample is the event handler for the PXCMSenseManager.Handler object.

 

Parameters

  • mid. The stream identifier. If multiple streams are requested through the EnableVideoStream[s] function, this is PXCMCapture.CUID+0, PXCMCapture.CUID+1, and so on.
  • PXCMCapture.Sample. The sample image that came from the camera.

When this function is called, I capture the image out of the Sample argument, put it into a new RSSampleArg object, and then raise this class’s OnNewStreamingSample event, which notifies the client FormMain that a new image is ready to be displayed.

The frame is then released, and the required pxcmStatus is returned; the status value is not used in this case.

 

public void StopStreaming( )

Stops the streaming by closing the streams and calling Dispose( ).

 

private void Dispose( )

Frees up resources for garbage collection.

 

RSEnhancedPhotography

The RSEnhancedPhotography class was created to wrap the Enhanced Photography functionality into one easy-to-use class. It works on an event principle: once an image has been processed, an event is raised, returning the newly created image or measurement data back to the client app/class.

 

public RSEnhancedPhotography( PXCMPhoto photo )

The constructor initializes several of the global variables that are used in the class. The single input parameter is the original photo that was taken by the main form; it’s fully initialized with image data and is used to initialize the local _colorPhoto object.

 

public void Dispose( )

Releases the memory to be garbage collected.

 

public void SendOriginalImage( )

Returns the original image back to the calling application by making use of the OnImageProcessed event.

 

public void MeasurePoints( RSPoints points )

MeasurePoints receives a populated RSPoints object. First I ensure that there are indeed two valid points in this object, the start and end points. Once this has been determined, a MeasureData object is created and sent into the PXCMEnhancedPhoto object’s MeasureDistance function.

Next I take the data from the populated measureData object and populate the RSMeasureArg object. Notice the ( measureData.distance / 10 ) parameter, which converts to centimeters. Once the arg object has been populated, I send it back to the client via the OnImageMeasured event.

 

public void RefocusOnPoint( RSPoints point, float aperture = 50 )

With a camera, you set the aperture to get either a large or shallow depth of field. A small aperture creates a large depth of field; a wide-open aperture creates a shallow depth of field, which blurs items in front of or behind your subject.

RefocusOnPoint has the same effect. The function allows you to change your focal point in the image.

Of course, aperture settings don’t work in values of 0–100, but for the purposes of this example they do. If you want, please convert them to proper f-stops and send me the updated code.
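In that spirit, a hypothetical conversion (my own sketch, not part of the sample) could map the 0-100 slider onto a plausible f-stop range. Note that real aperture scales are geometric, with each stop halving the light, so a logarithmic interpolation would be more faithful than this linear one:

```csharp
using System;

public static class ApertureSketch
{
    // Hypothetical helper: map a 0-100 slider value to an f-number
    // between f/1.8 (wide open, shallow depth of field) and f/22 (small).
    public static double SliderToFStop(int slider)
    {
        if (slider < 0) slider = 0;
        if (slider > 100) slider = 100;
        return 1.8 + (22.0 - 1.8) * slider / 100.0;
    }

    public static void Main()
    {
        Console.WriteLine("f/{0:0.0}", SliderToFStop(0));    // f/1.8
        Console.WriteLine("f/{0:0.0}", SliderToFStop(100));  // f/22.0
    }
}
```

A helper like this would sit between the slider control and the aperture argument passed to the refocus call.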

RefocusOnPoint uses the PXCMEnhancedPhoto’s DepthRefocus function to create a new image with a new depth focus, using the original color photo, the point where the user clicked on the screen, and an aperture setting. Once we have the newly created PXCMPhoto, I get the reference image out by calling QueryReferenceImage() and then supplying the PXCMImage to the RSEnhancedImageArg instance. From there, you just need to pass it back to the client application via the OnImageProcessed event.

 

public void DepthEnhancement( PXCMEnhancedPhoto.DepthFillQuality quality )

Enhances/Changes the depth quality of an image between two values, either high or low. This is specified in the DepthFillQuality parameter.

First initialize the local PXCMPhoto image by calling the PXCMEnhancedPhoto’s EnhanceDepth function supplying the original PXCMPhoto and the quality specified.

Then, to initialize the PXCMImage, I use the enhancedPhoto’s QueryDepthImage to get the newly created depth image.

Once this has all been done, I create the new RSEnhancedImageArg to be sent back to the client via OnImageProcessed.

 

public void DepthResize( )

This shows a simplistic way to resize a depth image. In this case it resizes the depth image to be the same size as the color image specified in the original PXCMPhoto that was created in the constructor.

First I need the size information from the color photo. To do this, I query for the original PXCMPhoto object specified in the constructor. I then create the instance of the PhotoUtils object that contains the DepthResize function. After I get the size of the original image, I store the width and height in the required PXCMSizeI32 object.

From there it’s a simple process of telling the PXCMEnhancedPhoto to resize the depth image by calling the DepthResize function, specifying the PXCMPhoto and target size.

Once the resizing is done, it’s the same pattern: create the image by querying enhancedPhoto, populate the RSEnhancedImageArg, and send it back to the client via OnImageProcessed.

 

public void PastOnPlane( RSPoints points )

This function shows how a user can take a PXCMPhoto and paste a new image onto a flat surface. When doing so, the image being pasted adapts to the environment. This means that if the image is pasted onto a wall, the image will look upright and have the same angle. If the image is pasted onto a desktop surface, the image will appear to lay down flat on the desk.

First, we need to ensure that there are two valid points, which the functionality requires.

Next, we load the image we want to paste by using the RSUtility’s LoadBitmap function.

The key object in working on this is the PXCMEnhancedPhoto.Paster class. This is a new class with the R5 release. Paster has a function PasteOnPlane that used to be in the PXCMEnhancedPhoto class, but was moved into this new class.

In this function I’m being cautious by checking that the return value from PasteOnPlane is not null, because there is no guarantee that the PasteOnPlane function was able to perform the operation successfully. For example, if the surface between the two points is not flat, the function will not succeed. I’m simply ensuring that I don’t use a null object.

If we have a successful return value, I get the reference image, store it, pass it into the RSEnhancedImageArg object and post it back to the client application.

 

RSUtility

RSUtility is a static utility class that contains functionality that does not appropriately fit into any of the other classes.

public static PXCMCapture.DeviceInfo GetDeviceByType( PXCMSession session, PXCMCapture.DeviceModel deviceType )

This function is a helper function that is focused on getting the detailed information about a device—in this case, the R200 camera. This functionality has been seen in multiple RealSense examples.

First, we set up a filter by specifying that we are looking for a sensor as the main group, then for the subgroup we specify a video capture sensor.

Because multiple devices can be on a given system, we must iterate over all the possible devices. For each iteration, I populate the current PXCMSession.ImplDesc data in the currentGroup object. If there is no error, we move on to the next step, which is to populate the PXCMCapture object.

Once we have the PXCMCapture object, I iterate over the various devices attached to it, checking whether the current device information is for the camera we are looking for; if it is, we break out of the loop. If not, we move on to the next device information attached to the PXCMCapture device until all devices have been checked.

Once the device has been found, we return it to the client, which in this case is the RSStreaming object.

 

public static Bitmap ToRGBBitmap( PXCMCapture.Sample sample )

An overloaded function that simply calls ToRGBBitmap( PXCMImage image ), passing it the sample argument’s image.

 

public static Bitmap ToRGBBitmap( PXCMImage image )

A simple wrapper function that uses the PXCMImage object’s functionality to get bitmap data using a PXCMImage.ImageData object. The data is pulled out, stored in a .NET bitmap object, and returned to the client.

 

public static PXCMImage LoadBitmap( PXCMSession session )

This function is used to support PasteOnPlane. It loads a predetermined bitmap into a PXCMImage object and returns it to the client.

First it gets the path to the file and ensures the file exists. If not, it returns null. If the file exists it creates a new .NET bitmap object by loading it from a file.

A PXCMImage.ImageInfo object is used to hold basic information about the bitmap. This is used when creating the PXCMImage object. They are the specifications for the image we will create.

Next we need a PXCMImage.ImageData object, which contains and holds the actual bitmap data. A .NET BitmapData object is created and initialized with data that describes the format and structure of the data we need.

To fully understand what Bitmap.Lockbits is doing, refer to https://msdn.microsoft.com/en-us/library/5ey6h79d(v=vs.110).aspx.

The PXCMImage object releases access to the image data to free the memory used, the bitmap unlocks its bits, and the PXCMImage is returned to the client.
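The LockBits round trip can be sketched on its own (pure .NET; in the sample the locked bits are copied into a PXCMImage.ImageData buffer, whereas this sketch only reads the stride):

```csharp
using System.Drawing;
using System.Drawing.Imaging;

public static class LockBitsSketch
{
    // Lock a bitmap's pixel buffer, read its stride, and unlock it.
    public static int BytesPerRow(Bitmap bmp)
    {
        var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
        BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadOnly,
                                       PixelFormat.Format32bppArgb);
        int stride = data.Stride;   // bytes per scan line, padding included
        bmp.UnlockBits(data);       // always release the lock when done
        return stride;
    }
}
```

With a 32-bit pixel format the stride is the width times four bytes, rounded up to a four-byte boundary, which is why the sample must copy row by row rather than assume a contiguous buffer.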

 

public static int GetDepthAtClickPoint( PXCMPhoto photo, PXCMPointI32 point )

This function receives a PXCMPhoto and a PXCMPointI32 and returns the depth value at the specified point.

 

public static bool SavePhoto( PXCMSession session, PXCMCapture.Sample sample )

Saves the photo to disk. It uses the session to create a new PXCMPhoto object, which has the functionality to save to disk. The PXCMPhoto object imports the image data from the preview sample into itself. The function then checks whether the file already exists, deletes it if so, and saves the file.

 

public static PXCMPhoto OpenPhoto( PXCMSession session )

A straightforward function: it ensures that the XDM photo exists on the hard drive and, if so, uses the PXCMSession object to create the PXCMPhoto. The photo object then loads the XDM file, and the function returns the PXCMPhoto to the client.

 

RSPaintUtil

The RSPaintUtil class is a utility class that encapsulates the drawing of points and lines onto the picture boxes’ photos. It also reduces code duplication between the different forms that rely on this functionality.

To draw a single point, the class first ensures that there is a valid start point. If there is, it creates a new .NET Point object from the start point’s x,y values and calls the DrawCircle function to draw a circle around the point that was clicked.

 

static public void DrawTwoPointsAndLine( RSPoints points, PaintEventArgs e )

Ensures that both points are valid and creates two new .NET Point objects. The function draws circles at those points by calling DrawCircle for each. Then a line is drawn between them via DrawLine.

 

static private void DrawCircle( Point p, PaintEventArgs e )

This function draws a circle around the x,y coordinates of the .NET Point object. This is done by creating a new Pen object. Drawing any circle requires a bounding rectangle, which defines the size of the circle to be drawn; I created a utility bounding-rectangle function to build this. Once the rectangle has been created, I use the paint event args’ DrawEllipse function to draw the circle at the size of the rectangle.

 

static private void DrawLine( Point pointA, Point pointB, PaintEventArgs e )

As with the circle, we need to create a .NET Pen object, tell it the mode to use, and draw the line using the event args DrawLine function between the start point and end point.

 

static public Rectangle BuildBoundingRectangle( Point p )

Builds a .NET Rectangle object centered around the x,y values in p. This is done by creating a new Rectangle object. I wanted the bounding rectangle to be 10px by 10px; 10x10 was just an arbitrary value I selected.
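Together, the two helpers reduce to something like this (method bodies reconstructed for illustration, not copied from the sample; the sample draws through PaintEventArgs rather than a raw Graphics):

```csharp
using System.Drawing;

public static class PaintSketch
{
    // Center a 10x10 rectangle on the clicked point.
    public static Rectangle BuildBoundingRectangle(Point p)
    {
        return new Rectangle(p.X - 5, p.Y - 5, 10, 10);
    }

    // DrawEllipse takes a bounding rectangle; a square one yields a circle.
    public static void DrawCircle(Point p, Graphics g)
    {
        using (var pen = new Pen(Color.Red, 2))
        {
            g.DrawEllipse(pen, BuildBoundingRectangle(p));
        }
    }
}
```

Subtracting half the width and height from the click coordinates is what keeps the circle centered on the point rather than hanging off its lower right.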

 

RSPoints

RSPoints is a simple wrapper for managing two different possible points. The points represent where a user clicked a given PXCMPhoto being shown in a .NET PictureBox control.

It uses two PXCMPointI32 objects, which represent a start point and an end point. In some situations an RSPoints instance will only need the start point, as is the case for functionality such as RefocusOnPoint. In other cases, two valid points are needed for functionality such as MeasurePoints, which requires both start and end points.

The class operates in two modes: single point mode, meaning we are doing operations that only require one valid point, or multi-point mode, which requires both start and end points to be valid for things like MeasurePoints.

 

public RSPoints( )

The constructor sets the point mode to single, then calls ResetPoints to set all x,y values to 0.

 

public void AddPoint( int x, int y )

This would be more akin to adding an object to an array or list. However, as you can see, there is no array or list, just two points. I wanted to give this class a list-type feel from the outside in case I decided to add an array to contain points later. Was this class well thought out? Probably not, but I’m not too worried about it, since it’s just a supporting class and does what I need it to.

If we are in single-point mode, AddPoint always replaces the start point. Rather than adding more points, it calls ResetPoints to clear the existing start point and then sets the new start point.
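In sketch form, the mode logic might look like this (a reconstruction for illustration, with plain int fields standing in for PXCMPointI32; not the sample’s actual code):

```csharp
public class PointsSketch
{
    public bool SinglePointMode = true;
    public int StartX, StartY, EndX, EndY;
    public bool HasStart { get; private set; }
    public bool HasEnd { get; private set; }

    public void ResetPoints()
    {
        StartX = StartY = EndX = EndY = 0;
        HasStart = HasEnd = false;
    }

    public void AddPoint(int x, int y)
    {
        if (SinglePointMode || !HasStart)
        {
            ResetPoints();              // single-point mode replaces the start
            StartX = x; StartY = y;
            HasStart = true;
        }
        else
        {
            EndX = x; EndY = y;         // multi-point mode fills the end point
            HasEnd = true;
        }
    }
}
```

In single-point mode a second click simply moves the start point; in multi-point mode it completes the start/end pair that MeasurePoints needs.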

 

RSSampleArg

RSSampleArg inherits from EventArgs. The intent is that this class be used with RSStreaming: while RSStreaming is streaming, an instance of this class is created on every new frame, populated with the data from the camera, and then sent back to the client via an event.

 

public RSSampleArg( PXCMCapture.Sample sample )

The constructor initializes the class’s local PXCMCapture.Sample _sample object with the parameter.

 

public PXCMCapture.Sample Sample

Simple getter returns the local PXCMCapture.Sample object.

 

RSEnhancedImageArg

RSEnhancedImageArg inherits from EventArgs. The intent is that this class be used with RSEnhancedPhotography. When RSEnhancedPhotography has finished modifying an image, an instance of RSEnhancedImageArg is created, populated with the newly created image, and sent back to the client via an event that the client subscribes to.

 

public RSEnhancedImageArg( PXCMImage image )

The constructor initializes the only variable in the class, the PXCMImage object.

 

public PXCMImage EnhancedImage

A simple getter returns the PXCMImage instance.

 

RSMeasureArg

RSMeasureArg inherits from EventArgs. This event argument is used inside the RSEnhancedPhotography class’s MeasurePoints function. When the MeasurePoints function has calculated the distance, an instance of this class is used to contain the distance, confidence, and precision data returned. Once this object has been populated, it is used with an Event object to send the data back to the client application.

 

public RSMeasureArg(float distance, float confidence, float precision)

Parameters

  • distance (float): the measured distance between the two points.
  • confidence (float): the level of confidence the SDK has regarding the measurement.
  • precision (float): the precision level the SDK used.

Constructor populates the local data with the input parameters.

 

public float Distance

Simple getter returns the distance between the two points.

 

public float Confidence

Simple getter returns the confidence level calculated when the SDK processed the distance between the two points.

 

public float Precision

Simple getter returns the precision level calculated when the SDK processed the distance between the two points.

 

FormMain

FormMain is the main form (entry point) for the application. It allows the user to control when to stream, stop streaming, and take a snapshot to be used by the other forms for the various Enhanced Photography functionality.

This form uses the RSStreamingRGB object to stream data from the camera. The data is then rendered to a .NET PictureBox control with the help of an Intel RealSense SDK utility object named D2D1Render.

The form gets its updates from RSStreamingRGB via subscribing to RSStreaming’s OnNewStreamingSample event.

 

public MainForm( )

This is the form's constructor. It initializes the form's global objects, sets up the event handler for the samples coming in from RSStreamingRGB, and sets the buttons to the proper state.

 

private void rsRGBStream_NewSample( object sender, RSSampleArg sampleArg )

The event handler for the RSStreamingRGB object. Checks whether we want to view the color (RGB) image or the depth image and updates the _render utility object with the proper sample.

It then checks to see whether a new snapshot needs to be taken, and if so, uses the RSUtility to save the photo to disk.

 

private void btnStream_Click( object sender, EventArgs e )

The event handler for when the Stream button is clicked. Instructs the RSStreamingRGB object to start streaming, and sets the buttons according to the state of the app.

 

private void btnStopStream_Click( object sender, EventArgs e )

The stop streaming button event handler calls the StopStreaming function.

 

private void btnTakeDepthPhoto_Click( object sender, EventArgs e )

Event handler for the snapshot button. Sets the take-a-snapshot flag to true so that the streaming event handler knows to save the current image data to disk, then instructs the form to stop streaming.

 

private void EnableEnhancedPhotoButtons( bool enable )

Sets the buttons according to the state of the application via the enable value.

You might look at this function and wonder what's going on, because you don't see any multithreading: nothing spun up a thread. So why am I using multithreading syntax to update the buttons? The reason is that even though neither the form nor RSStreamingRGB launches any threads, there IS a separate thread executing.

Inside RSStreaming’s StartStreaming function, you will see a line of code:

_senseManager.StreamFrames( false );

Behind the scenes, this line of code (inside the Intel RealSense SDK) actually spawns a new thread, and because of this we need to wrap this functionality in multithreaded syntax.

To start, I check whether the group box surrounding the various enhancement buttons requires an invoke, which is necessary when dealing with Windows controls in multithreaded applications. If it does, I create a new delegate, in this case an instance of the EnableEnhancedButtons delegate declared at the top of the source file. When creating a delegate instance, you supply the name of the function you want it to call, which here is this same function, EnableEnhancedPhotoButtons. After the delegate has been created, we tell the form to invoke it, passing in the original Boolean value.

When the delegate calls the function, this time the InvokeRequired check returns false, so execution falls into the else branch, which enables or disables the group box surrounding the enhancement buttons depending on the value of the Boolean variable enable.
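The check-then-invoke pattern itself is not specific to .NET. The sketch below is a plain C++ analogue, purely illustrative (the sample actually relies on Control.InvokeRequired and Control.Invoke from WinForms; UiDispatcher and its methods are invented names): a background thread queues work, and the thread that owns the UI drains the queue.

```cpp
#include <cassert>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Illustrative analogue of the WinForms InvokeRequired/Invoke pattern:
// work posted from a background thread is queued and executed later on
// the thread that owns the "UI", instead of touching it directly.
class UiDispatcher {
public:
    // Whichever thread constructs the dispatcher "owns the UI".
    UiDispatcher() : _uiThread(std::this_thread::get_id()) {}

    // Mirrors Control.InvokeRequired: true when the caller is not the
    // UI-owning thread.
    bool InvokeRequired() const {
        return std::this_thread::get_id() != _uiThread;
    }

    // Mirrors Control.Invoke: run immediately when already on the UI
    // thread, otherwise queue the action for the UI thread.
    void Invoke(std::function<void()> action) {
        if (!InvokeRequired()) {
            action();
        } else {
            std::lock_guard<std::mutex> lock(_m);
            _pending.push(std::move(action));
        }
    }

    // Called by the UI thread's message loop to drain queued work.
    void ProcessPending() {
        std::lock_guard<std::mutex> lock(_m);
        while (!_pending.empty()) {
            _pending.front()();
            _pending.pop();
        }
    }

private:
    std::thread::id _uiThread;
    std::mutex _m;
    std::queue<std::function<void()>> _pending;
};
```

In the sample, EnableEnhancedPhotoButtons plays the role of the action passed to Invoke: the first call from the SDK's streaming thread fails the ownership check and is marshaled over; when the UI thread re-runs it, the check passes and the controls are updated directly.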

 

private void EnableStreamButtons( bool enable )

This works exactly like EnableEnhancedPhotoButtons, except that it enables and/or disables different controls.

 

private void StopStreaming( )

Checks that the RSStreaming object has been properly initialized and, if so, calls its stop streaming function. This stops the streaming from the camera and closes the thread that was spawned.

Then I set the buttons accordingly.

 

private void btnExit_Click( object sender, EventArgs e )

Click event handler for the exit button. Calls the form Close() function.

 

private void Form1_FormClosing(object sender, FormClosingEventArgs e)

Handles the form's FormClosing event. Checks whether the RSStreaming object was properly initialized and, if so, forces it to stop streaming.

 

private void btnDepthResize_Click( object sender, EventArgs e )
private void btnDepthEnhance_Click( object sender, EventArgs e )
private void btnRefocus_Click( object sender, EventArgs e )
private void btnPasteOnPlane_Click( object sender, EventArgs e )
private void btnBlending_Click( object sender, EventArgs e )

I grouped these functions together in one explanation because they all do the same thing, except that each initializes a different form. In each, I create a new session object, which is required by the RSUtility.OpenPhoto function. OpenPhoto opens the image that was created and saved to disk. Once the photo has been retrieved, I create the new form, pass it the photo, and show it. Pretty straightforward stuff.

 

private void btn_MouseEnter( object sender, EventArgs e )

Event handler for when the mouse rolls over the Start Streaming, Take Photo, or Stop Streaming buttons. I get the button that was rolled over, then look at its Tag field. After determining which button was hovered over, I set the tool strip's text value to indicate what that button does.

 

private void btn_MouseLeave( object sender, EventArgs e )

Sets the tool strip's text back to an empty string.

 

FormMeasure

FormMeasure demonstrates how to use the Enhanced Photography’s measuring capabilities. It utilizes an RSEnhancedPhotography instance to talk to the Intel RealSense SDK Enhanced Photography functionality.

 

public FormMeasure( PXCMPhoto photo )

The form's constructor. Accepts a PXCMPhoto object containing the image to be measured, initializes the global variables in the class, and registers an OnImageMeasured event handler.

 

private void InitStatStrip( )

Creates default entries for the status strip. Sets values to empty strings and/or 0,0 for x,y positions.

 

private void rsEnhanced_OnImageMeasured(object sender, RSMeasureArg e)

The event handler for the OnImageMeasured event on the RSEnhancedPhotography object. Accepts the RSMeasureArg, which contains the information about the measurement, and updates the status strip's text values.

 

private void pictureBox1_MouseClick( object sender, MouseEventArgs e )

Event handler for mouse clicks on the picture box. The handler starts by determining how many points have already been selected. If there is no start point, this is the first time the user has clicked the picture box/photo, so the click point is added as the start point. If a valid start point exists, we check for an end point and, if it does not exist, add the click point as the end point.

After updating the points, I refresh the point values in the status strip by calling UpdatePointStatus.

Once we have two valid points (start and end), we call RSEnhancedPhoto's MeasurePoints(…) function, passing in the points object. We don't need the points anymore, so I clear them out by calling the points object's ResetPoints() function.

After that, I call the picture box's Invalidate() to draw the points on the screen. Keep in mind this happens whether or not a measurement was taken, so you can always see where the picture was clicked.

 

private void UpdatePointStatus( )

Updates the status strip's two point text values, which show the x,y positions on the image where it was clicked.

 

private void pictureBox1_MouseMove( object sender, MouseEventArgs e )

Mouse move event handler. Simply tracks where the mouse is while moving around the image and updates the status strip's mouse x,y values.

 

private void pictureBox1_Paint( object sender, PaintEventArgs e )

The picture box's Paint event handler, responsible for drawing the two points and the line between them onto the image.

Checks whether we have two valid points and, if so, draws both points and the line. If there is only a start point, it draws only the start point marker onto the image.

 

private void Cleanup( )

Cleans up resources.

 

private void btnExit_Click( object sender, EventArgs e )

The exit button's click event handler forces the form to close.

 

private void FormMeasure_FormClosing( object sender, FormClosingEventArgs e )

The form's closing event handler. Calls Cleanup to release resources.

 

FormDepthEnhance

This form shows how a depth image can be enhanced and displays the newly enhanced image in the picture box control. It can show either low quality or high quality enhancement.

 

public FormDepthEnhance( PXCMPhoto photo )

The form's constructor initializes the variables used by the class and sets up the picture box control with the initial depth image, which defaults to low quality.

 

private void rdo_Click( object sender, System.EventArgs e )

Event handler when one of the two radio buttons is selected. Calls the RSEnhancedPhoto’s DepthEnhancement(…) function, passing in the value indicating high or low quality.

 

private void _rsEnhanced_OnImageProcessed( object sender, RSEnhancedImageArg e )

The event handler for the OnImageProcessed event sent by RSEnhancedPhotography. Updates the picture box via the renderer helper object.

 

private void btnExit_Click( object sender, System.EventArgs e )

Exit button click event handler forces the form to close.

 

private void Cleanup( )

Nullify the objects to allow for garbage collection.

 

private void FormDepthScaling_FormClosing( object sender, FormClosingEventArgs e )

Form closing event handler. Forces cleanup of the variables.

 

FormDepthResize

This form is used to show how a depth image can be resized. In this case, it gets resized to the size of the RGB image from the PXCMPhoto object.

 

public FormDepthResize( PXCMPhoto photo )

The form's constructor. As with the other form constructors, it initializes the variables used by the form and sets the OnImageProcessed event handler.

 

private void btnResize_Click( object sender, System.EventArgs e )

The Resize button's event handler calls RSEnhancedPhoto's DepthResize(…) function.

 

private void _rsEnhanced_OnImageProcessed( object sender, RSEnhancedImageArg e )

The OnImageProcessed event handler resizes the picture box and updates the displayed image via the renderer utility object.

 

private void btnExit_Click( object sender, System.EventArgs e )

Exit button click event handler. Forces the form to close.

 

private void Cleanup( )

Nullifies the objects so they can be garbage collected.

 

private void FormDepthResize_FormClosing( object sender, FormClosingEventArgs e )

Form closing event handler simply ensures that Cleanup is called.

 

FormDepthRefocus

FormDepthRefocus demonstrates the refocusing capabilities of the Intel RealSense SDK Enhanced Photography functionality. When the user clicks the image, the area around that spot is brought into focus while everything else goes out of focus.

Along with clicking a spot on the image to focus on, there is a slider that can be used to simulate changing a camera’s aperture.

 

public FormDepthRefocus( PXCMPhoto photo )

The form's constructor initializes the variables used by the form, registers the enhanced photography object's OnImageProcessed event to a handler function, and sets up the picture box control.

 

private void pictureBox1_MouseClick( object sender, MouseEventArgs e )

Click event handler for the picture box. Sets the points object's start point to the coordinates of the mouse click and calls RSEnhancedPhotography's RefocusOnPoint(…). The RSPoints ResetPoints() then removes the start point value, essentially resetting the object.

 

private void tbAperture_Scroll( object sender, System.EventArgs e )

The scroll control's Scroll event handler. Gets the value of the scroll, sets the form's text value representing it, and then calls RSEnhancedPhotography's RefocusOnPoint, this time sending in a second parameter, the aperture size.

 

private void _rsEnhanced_OnImageProcessed( object sender, RSEnhancedImageArg e )

The event handler for the RSEnhancedPhotography object's OnImageProcessed event. Uses the renderer utility class to update the picture box control.

 

private void btnExit_Click( object sender, System.EventArgs e )

Exit button click event handler. Forces the form to close.

 

private void Cleanup( )

Nullifies the objects so they can be garbage collected.

 

private void FormDepthResize_FormClosing( object sender, FormClosingEventArgs e )

Form closing event handler simply ensures that Cleanup is called.

 

FormDepthPasteOnPlane

This form demonstrates how a user can click a flat surface (plane) in the image and have a second image rendered onto that surface. The user clicks two points on a flat surface, and if the Intel RealSense SDK can determine the plane, the external image is pasted onto it.

 

public FormDepthPasteOnPlane( PXCMPhoto photo )

The form's constructor initializes the variables used by the form, registers the enhanced photography object's OnImageProcessed event to a handler function, and sets up the picture box control.

 

private void _rsEnhanced_OnImageProcessed( object sender, RSEnhancedImageArg e )

The event handler for the RSEnhancedPhotography object's OnImageProcessed event. Uses the renderer utility class to update the picture box control.

 

private void pictureBox1_MouseClick( object sender, MouseEventArgs e )

Event handler for mouse clicks on the picture box. The handler starts by determining how many points have already been selected. If there is no start point, this is the first time the user has clicked the picture box/photo, so the click point is added as the start point. After adding the point, I invalidate the picture box control so that the click point can be drawn onto the image.

If a valid start point exists, we check for the end point and, if it does not exist, add it.

Once we have two valid points (start and end), we call RSEnhancedPhoto's MeasurePasteOnPlane(…) function, passing in the points object. We don't need the points anymore, so I clear them out by calling the points object's ResetPoints() function.

 

private void btnExit_Click( object sender, System.EventArgs e )

Exit button click event handler. Forces the form to close.

 

private void Cleanup( )

Nullifies the objects so they can be garbage collected.

 

private void FormDepthPasteOnPlane_FormClosing( object sender, FormClosingEventArgs e )

Form closing event handler ensures that Cleanup is called.

 

FormDepthBlending

FormDepthBlending is similar to paste on plane. The difference is that once an image has been embedded, it can be manipulated by changing the yaw, pitch, roll, Z offset, and scale, via the sliders on the form.

 

public FormDepthBlending( PXCMPhoto photo )

The form's constructor initializes the variables used by the form, registers the enhanced photography object's OnImageProcessed event to a handler function, and sets up the picture box control.

 

private void pictureBox1_MouseClick( object sender, MouseEventArgs e )

Event handler for mouse clicks on the picture box. Ensures that the points object only ever holds one point, the start point, which is the only point this functionality needs. I capture the point and then call Blend() to do the actual blending of the external image into the PXCMPhoto.

 

private void Blend( )

Gets the values from each slider, creates the rotation array needed, and then calls RSEnhancedPhotography's DepthBlend function, passing in all the parameters.

 

private void tbBlend_Scroll( object sender, EventArgs e )

Event handler for all of the sliders. Simply turns around and calls Blend().

 

private void _rsEnhanced_OnImageProcessed( object sender, RSEnhancedImageArg e )

The event handler for the RSEnhancedPhotography object's OnImageProcessed event. Simply uses the renderer utility class to update the picture box control.

 

private void btnExit_Click( object sender, System.EventArgs e )

Exit button click event handler. Forces the form to close.

 

private void Cleanup( )

Nullifies the objects so they can be garbage collected.

 

private void FormDepthBlending_FormClosing( object sender, FormClosingEventArgs e )

Form closing event handler ensures that Cleanup is called.

Conclusion

In this document and sample application, I've shown you how to use some of the enhanced photography functionality that the SDK and camera provide. I hope you've enjoyed this article and find it helpful. Thanks for reading.

About the Author

Rick Blacker is a seasoned software engineer who spent many of his years authoring solutions for database driven applications. Rick has recently moved to the Intel RealSense technology team and helps users understand the technology.

Getting to Know the Arduino 101 Platform


Introduction

As an Internet of Things (IoT) developer, you need to choose the best platform on which to build your application depending on the requirements of the project, so it is important to understand the capabilities of the different platforms. The first part of this article compares the Arduino 101* platform to the Arduino UNO*, giving a baseline for those who aren't familiar with the Arduino 101 features. The rest of the article dives deeper into the capabilities of the Arduino 101 platform.

Arduino 101 and Arduino UNO Side by Side

The Arduino UNO uses an ATmega328P microcontroller, while the Arduino 101 uses a low-power Intel® Curie™ module powered by the Intel® Quark™ SE SoC. The UNO runs on 5 V; the Arduino 101 runs on 3.3 V, although it is 5 V tolerant. The Arduino 101 adds on-board Bluetooth Low Energy and a 6-axis combo sensor with accelerometer and gyroscope, which the UNO lacks. The two boards are identical in size and pinout; see Figures 1 and 2 below.

Arduino 101 Platform
Figure 1: Arduino 101 Platform.

Arduino UNO Platform
Figure 2: Arduino UNO Platform

Here is the summary of Arduino 101 and Arduino UNO platform features.

Product Highlights          | Arduino 101                                        | Arduino UNO
Microcontroller             | Intel Curie                                        | ATmega328P
Operating Voltage           | 3.3 V (5 V tolerant I/O)                           | 5 V
CPU Speed                   | 32 MHz                                             | 16 MHz crystal oscillator
Architecture                | 32-bit Intel Quark SE SoC                          | 8-bit
Flash memory                | 196 KB                                             | 32 KB
SRAM                        | 24 KB                                              | 2 KB
EEPROM                      | 1 KB                                               | 1 KB
OS                          | Open source RTOS                                   | NA
Clock Speed                 | 32 MHz                                             | 16 MHz
Features                    | Integrated Digital Signal Processor (DSP) sensor hub with 6-axis combo sensor with accelerometer and gyroscope | Used as DSP
Bluetooth                   | Bluetooth Low Energy                               | NA
Digital I/O pins            | 14 digital input/output pins                       | 14 digital input/output pins
Analog I/O pins             | 6 analog input pins                                | 6 analog input pins
USB connector               | A USB connector for serial communication and sketch upload | A USB connector for serial communication and sketch upload
ICSP header with SPI signal | An In-Circuit Serial Programming header with SPI signals | An In-Circuit Serial Programming header with SPI signals
I2C                         | I2C dedicated pins                                 | I2C dedicated pins (Arduino UNO rev3)
Reset                       | A reset button                                     | A reset button
Dimensions (Length x Width) | 68.6 mm x 53.4 mm                                  | 68.6 mm x 53.4 mm

Arduino 101 Detailed Breakdown

Processors

The Quark SE contains a single-core 32 MHz x86 (Quark) processor and a 32 MHz Argonaut RISC Core (ARC)® EM processor. The two processors operate simultaneously and share memory. The ARC processor is also referred to as the Digital Signal Processor (DSP) sensor hub, depending on which document you're looking at. In theory, the DSP can gather and process sensor data using a minimal amount of power while the x86 processor waits in a low-power mode, which would be ideal for always-on applications. However, this ability isn't available in software at this time.

When you load an Arduino sketch, it runs on the ARC. The Intel toolchain compiles your sketch so that the ARC interacts with the x86 processor as needed via static mailboxes. If you're interested in experimenting with that, you can access the open source corelibs for the Arduino 101 on 01.org's GitHub*.

Real-time Operating System (RTOS)

The standout capability of the Arduino 101 from a software standpoint is the ability to run an RTOS. Intel will be releasing a Software Development Kit (SDK) that will include a set of software development tools, libraries, documentation, and sample code to enable developers to create IoT applications using the Intel Curie module. The SDK is based on the Zephyr Project* and will be compatible with the Arduino 101 platform. The SDK will be available to the public in the coming months. Sign up to receive more information at https://software.intel.com/en-us/iot/hardware/curie.

The Zephyr Project is a small open source RTOS for the IoT. It offers connectivity protocols optimized for low-power, small-memory-footprint devices and supports Bluetooth, Bluetooth LE, Wi-Fi, and more. By keeping memory usage low and prioritizing task execution, the RTOS makes efficient use of energy. It also includes powerful developer tools and robust hardware support, such as a custom toolchain and compiler optimizations. For more information on the Zephyr Project and the supported hardware features, see zephyrproject.org.

Bluetooth Low Energy (Bluetooth LE* or Bluetooth Smart*)

The Arduino 101's on-board Bluetooth LE lets it communicate and interact directly with devices such as computers, smartphones, and tablets without a Bluetooth LE shield or any other additional hardware. Bluetooth LE is ideal for low-power-consumption applications. The Arduino sample code for CurieBLE is available at https://www.arduino.cc/en/Reference/CurieBLE.

Additional Libraries

Libraries are collections of code that provide extra functionality for use in sketches. The Arduino 101 libraries make it easy to work with Bluetooth LE, the on-board sensors, and timers. To get started using the built-in Arduino 101 libraries, follow https://www.arduino.cc/en/Guide/Libraries.

  • CurieBLE: Connect to computers, smartphones, and tablets via the Bluetooth LE module
  • CurieIMU: Use the on-board 6-axis accelerometer and gyroscope
  • CurieTimerOne: Manage timer functions

Accelerometer and Gyroscope

The accelerometer and gyroscope are the on-board sensors of the Arduino 101 platform. Accelerometers are used mainly to measure acceleration and tilt; gyroscopes measure angular velocity and orientation. Together, these sensors can precisely identify the orientation and movement of an object, allowing the Arduino 101 platform to enable a better user experience for wearable devices.

One of the ways to use the accelerometer is to count steps, like a pedometer. When the Arduino 101 platform makes a step motion, the step is detected: a step is registered when there is a significant change in the X, Y, and Z axes' velocity relative to the resting state. For more information about the step counter, visit https://www.arduino.cc/en/Tutorial/Genuino101CurieIMUStepCounter.
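The detection idea described above (a significant change relative to the resting state) can be reduced to threshold crossing with hysteresis on the acceleration magnitude. The C++ sketch below is a deliberately simplified, hypothetical model; the CurieIMU library's built-in step detection is considerably more sophisticated.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical, simplified step counter: counts one step each time the
// acceleration magnitude rises above `threshold` after having settled
// back below it. The hysteresis flag prevents a single sustained swing
// from being counted twice.
class StepCounter {
public:
    explicit StepCounter(double threshold) : _threshold(threshold) {}

    // Feed one accelerometer sample (x, y, z in g); returns total steps.
    int AddSample(double x, double y, double z) {
        double magnitude = std::sqrt(x * x + y * y + z * z);
        if (!_above && magnitude > _threshold) {
            ++_steps;        // rising crossing: count a step
            _above = true;
        } else if (_above && magnitude < _threshold) {
            _above = false;  // fell back below: arm for the next step
        }
        return _steps;
    }

    int Steps() const { return _steps; }

private:
    double _threshold;
    bool _above = false;
    int _steps = 0;
};
```

At rest the magnitude sits near 1 g (gravity), so a threshold somewhat above that separates the impact spike of a step from the resting state.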

Like the Arduino UNO, the Arduino 101 can be programmed with the Arduino IDE software. To start using the Arduino 101, go to https://software.intel.com/en-us/articles/fun-with-the-arduino-101-genuino-101. To see how the step counting works, load the step counting sketch into the Arduino 101 as shown below.

Loading step
Figure 3: Loading step counting sketch using Arduino IDE.

Upload the step counting sketch:

Running step
Figure 4: Running step counting sketch on Arduino IDE.

Move the Arduino 101 platform to make steps and view the serial monitor:

Serial window
Figure 5: Serial window.

Interrupt Pins

Both the Arduino 101 and the Arduino UNO have 20 I/O pins, but the Arduino 101 supports interrupts on more of them. The Arduino UNO can trigger external interrupts (interrupts triggered by external events) only on digital pins 2 and 3, while the Arduino 101 can trigger them on all pins. A low value, a high value, or a rising or falling edge can trigger an interrupt on any pin; triggering on a change in value is only supported on pins 2, 5, 7, 8, 10, 11, 12, and 13.
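These trigger modes map directly onto pin-state transitions. As an illustration, the C++ model below evaluates whether an interrupt configured with a given mode would fire for a pair of successive digital readings; the mode names mirror the Arduino constants, but this is host-side model code, not the Arduino runtime.

```cpp
#include <cassert>

// Host-side model of Arduino interrupt trigger modes: given the previous
// and current digital reading of a pin (0 or 1), decide whether an
// interrupt configured with the given mode would fire. The evaluation
// logic here is illustrative, not the actual hardware behavior.
enum class TriggerMode { Low, High, Rising, Falling, Change };

bool triggers(TriggerMode mode, int previous, int current) {
    switch (mode) {
        case TriggerMode::Low:     return current == 0;                    // level-triggered low
        case TriggerMode::High:    return current == 1;                    // level-triggered high
        case TriggerMode::Rising:  return previous == 0 && current == 1;   // low-to-high edge
        case TriggerMode::Falling: return previous == 1 && current == 0;   // high-to-low edge
        case TriggerMode::Change:  return previous != current;             // any edge
    }
    return false;
}
```

On the Arduino 101, low, high, rising, and falling triggers can be attached to any pin, while the change trigger is limited to the pins listed above.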

Summary

This document summarized the features of the Arduino 101. There are sensors, shields, components, and libraries that make the Arduino 101 platform more powerful. Order the Arduino 101 platform at http://www.intel.com/buy/us/en/product/emergingtechnologies/intel-arduino-101-497161 and check out https://software.intel.com/en-us/articles/fun-with-the-arduino-101-genuino-101 to experiment and enjoy the power of the Intel® Curie module.

Helpful References

About the Author

Nancy Le is a software engineer at Intel Corporation in the Software and Services Group working on Intel® Atom™ processor scale-enabling projects.

Intel® MPI Library 5.1 Update 3 Fixes List


NOTE: Defects and feature requests described below represent specific issues with specific test cases. It is difficult to succinctly describe an issue and how it impacted the specific test case. Some of the issues listed may impact multiple architectures, operating systems, and/or languages. If you have any questions about the issues discussed in this report, please post on the user forums, http://software.intel.com/en-us/forums or submit an issue to Intel® Premier Support, https://premier.intel.com.

 

Tracking #   | Description | Fix
DPD200259615 | Documentation does not fully describe environment variables which cannot be used on the command line (-env/-genv). | Added documentation to note if an environment variable cannot be used on the command line.
DPD200278327 | Using mpiexec.hydra with bootstrap LSF executes blaunch for every node instead of one group call. | Updated mpiexec.hydra to use blaunch -z once to launch on all nodes.
DPD200376416 | Intel® MPI Library runtime error where MPI_Bcast fails with messages larger than 2 GB. | Updated algorithms to handle larger messages.
DPD200377423 | MPI Binding Kit for Fortran 90 modules exhibited a naming conflict with delivered modules. | Updated naming scheme of MPI Binding Kit modules to be unique.
DPD200378910 | Incorrect pinning on high-numbered cores using a hexadecimal notation pinning mask. | Corrected pinning behavior.
DPD200379012 | Update name in documentation to Univa Grid Engine*. | Name updated in documentation.
DPD200379714 | End User License Agreement not shown in Japanese Intel® MPI Library installer. | Updated installer to display End User License Agreement.
DPD200379715 | Japanese Intel® MPI Library installer cannot find Getting Started Guide. | There is no Japanese Getting Started Guide; installer updated to point to the English version.
DPD200575666 | Variable $username_ not populated in SGE. | Updated mpirun script to correctly handle the environment variable.
DPD200576363 | Intel® MPI Library freezes during startup with LSF 9.1.3 using blaunch on 512000 or more cores. |

 

Intel® XDK FAQs - General


How can I get started with Intel XDK?

There are plenty of videos and articles that you can go through here to get started. You could also start with some of our demo apps. It may also help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, which will help you understand some of the differences between developing for a traditional server-based environment and developing for the Intel XDK hybrid Cordova app environment.

Having prior understanding of how to program using HTML, CSS and JavaScript* is crucial to using the Intel XDK. The Intel XDK is primarily a tool for visualizing, debugging and building an app package for distribution.

You can do the following to access our demo apps:

  • Select Project tab
  • Select "Start a New Project"
  • Select "Samples and Demos"
  • Create a new project from a demo

If you have specific questions after that, please post them to our forums.

Can I use an external editor for development in Intel® XDK?

Yes, you can open your files and edit them in your favorite editor. However, note that you must use Brackets* to use the "Live Layout Editing" feature. Also, if you are using App Designer (the UI layout tool in Intel XDK) it will make many automatic changes to your index.html file, so it is best not to edit that file externally at the same time you have App Designer open.

Some popular editors among our users include:

  • Sublime Text* (Refer to this article for information on the Intel XDK plugin for Sublime Text*)
  • Notepad++* for a lightweight editor
  • Jetbrains* editors (Webstorm*)
  • Vim* the editor

How do I get code refactoring capability in Brackets* (the Intel XDK code editor)?

...to be written...

Why doesn’t my app show up in Google* play for tablets?

...to be written...

What is the global-settings.xdk file and how do I locate it?

global-settings.xdk contains information about all your projects in the Intel XDK, along with many of the settings for the panels under each tab (Emulate, Debug, etc.). For example, you can set the emulator to auto-refresh or no-auto-refresh. Modify this file at your own risk, and always keep a backup of the original!

You can locate global-settings.xdk here:

  • Mac OS X*
    ~/Library/Application Support/XDK/global-settings.xdk
  • Microsoft Windows*
    %LocalAppData%\XDK
  • Linux*
    ~/.config/XDK/global-settings.xdk

If you are having trouble locating this file, you can search for it on your system using something like the following:

  • Windows:
    > cd /
    > dir /s global-settings.xdk
  • Mac and Linux:
    $ sudo find / -name global-settings.xdk

When do I use the intelxdk.js, xhr.js and cordova.js libraries?

The intelxdk.js and xhr.js libraries were only required for use with the Intel XDK legacy build tiles (which have been retired). The cordova.js library is needed for all Cordova builds. When building with the Cordova tiles, any references to intelxdk.js and xhr.js libraries in your index.html file are ignored.

How do I get my Android (and Crosswalk) keystore file?

New with release 3088 of the Intel XDK, you may now download your build certificates (aka keystore) using the new certificate manager that is built into the Intel XDK. Please read the initial paragraphs of Managing Certificates for your Intel XDK Account and the section titled "Convert a Legacy Android Certificate" in that document, for details regarding how to do this.

It may also help to review this short, quick overview video (there is no audio) that shows how you convert your existing "legacy" certificates to the "new" format that allows you to directly manage your certificates using the certificate management tool that is built into the Intel XDK. This conversion process is done only once.

If the above fails, please send an email to html5tools@intel.com requesting help. It is important that you send that email from the email address associated with your Intel XDK account.

How do I rename my project that is a duplicate of an existing project?

See this FAQ: How do I make a copy of an existing Intel XDK project?

How do I recover when the Intel XDK hangs or won't start?

  • If you are running Intel XDK on Windows* it must be Windows* 7 or higher. It will not run reliably on earlier versions.
  • Delete the "project-name.xdk" file from the project directory that Intel XDK is trying to open when it starts (it will try to open the project that was open during your last session), then try starting Intel XDK. You will have to "import" your project into Intel XDK again. Importing merely creates the "project-name.xdk" file in your project directory and adds that project to the "global-settings.xdk" file.
  • Rename the project directory Intel XDK is trying to open when it starts. Create a new project based on one of the demo apps. Test Intel XDK using that demo app. If everything works, restart Intel XDK and try it again. If it still works, rename your problem project folder back to its original name and open Intel XDK again (it should now open the sample project you previously opened). You may have to re-select your problem project (Intel XDK should have forgotten that project during the previous session).
  • Clear Intel XDK's program cache directories and files.

    On a Windows machine this can be done using the following on a standard command prompt (administrator is not required):

    > cd %AppData%\..\Local\XDK
    > del *.* /s/q

    To locate the "XDK cache" directory on OS X* and Linux* systems, do the following:

    $ sudo find / -name global-settings.xdk
    $ cd <dir found above>
    $ sudo rm -rf *

    You might want to save a copy of the "global-settings.xdk" file before you delete that cache directory and copy it back before you restart Intel XDK. Doing so will save you the effort of rebuilding your list of projects. Please refer to this question for information on how to locate the global-settings.xdk file.
  • If you saved the "global-settings.xdk" file and restored it in the step above and you're still having hang troubles, try deleting the directories and files above along with the "global-settings.xdk" file, and then try again.
  • Do not store your project directories on a network share (Intel XDK currently has issues with network shares that have not yet been resolved). This includes folders shared between a Virtual machine (VM) guest and its host machine (for example, if you are running Windows* in a VM running on a Mac* host). This network share issue is a known issue with a fix request in place.
  • There have also been issues with running behind a corporate network proxy or firewall. To check whether this is the cause, try running Intel XDK from your home network where, presumably, you have a simple NAT router and no proxy or firewall. If things work correctly there, then your corporate firewall or proxy may be the source of the problem.
  • Issues with Intel XDK account logins can also cause Intel XDK to hang. To confirm that your login is working correctly, go to the Intel XDK App Center and confirm that you can login with your Intel XDK account. While you are there you might also try deleting the offending project(s) from the App Center.
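The "clear the cache but keep global-settings.xdk" step above can be sketched as a short shell sequence. This is a minimal sketch using a throwaway demo directory; on a real system, substitute the cache directory that the find command reported:

```shell
# Demo of the backup / clear / restore sequence (Mac and Linux).
# XDK_CACHE is a stand-in created just for this demo -- on a real
# system it would be the directory containing global-settings.xdk.
XDK_CACHE="/tmp/xdk-cache-demo"
mkdir -p "$XDK_CACHE"
echo '{}' > "$XDK_CACHE/global-settings.xdk"   # pretend settings file
touch "$XDK_CACHE/stale-cache-file"            # pretend cached data

cp "$XDK_CACHE/global-settings.xdk" "$HOME/global-settings.xdk.bak"  # back up settings
rm -rf "$XDK_CACHE"/*                                                # clear the cache
cp "$HOME/global-settings.xdk.bak" "$XDK_CACHE/global-settings.xdk"  # restore settings
ls "$XDK_CACHE"                                                      # only the settings file remains
```

Restoring the file this way saves you the effort of rebuilding your list of projects after clearing the cache.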

If you can reliably reproduce the problem, please send us a copy of the "xdk.log" file that is stored in the same directory as the "global-settings.xdk" file to html5tools@intel.com.

Is Intel XDK an open source project? How can I contribute to the Intel XDK community?

No, it is not an open source project. However, it utilizes many open source components that are assembled into the Intel XDK. While you cannot contribute directly to the Intel XDK integration effort, you can contribute to the many open source components that make up the Intel XDK.

The following open source components are the major elements that are being used by Intel XDK:

  • Node-Webkit
  • Chromium
  • Ripple* emulator
  • Brackets* editor
  • Weinre* remote debugger
  • Crosswalk*
  • Cordova*
  • App Framework*

How do I configure Intel XDK to use 9 patch png for Android* apps splash screen?

Intel XDK does support the use of 9 patch png images for the Android* app splash screen. See https://software.intel.com/en-us/xdk/articles/android-splash-screens-using-nine-patch-png for instructions on creating a 9 patch png image; that article also links to an Intel XDK sample that uses 9 patch png images.

How do I stop AVG from popping up the "General Behavioral Detection" window when Intel XDK is launched?

You can try adding nw.exe as the app that needs an exception in AVG.

What do I specify for "App ID" in Intel XDK under Build Settings?

Your app ID uniquely identifies your app. For example, it can be used to identify your app within Apple's application services, allowing you to use features like in-app purchasing and push notifications.

Here are some useful articles on how to create an App ID:

Is it possible to modify the Android Manifest or iOS plist file with the Intel XDK?

You cannot modify the AndroidManifest.xml file directly with our build system, as it only exists in the cloud. However, you may do so by creating a dummy plugin that only contains a plugin.xml file containing directives that can be used to add lines to the AndroidManifest.xml file during the build process. In essence, you add lines to the AndroidManifest.xml file via a local plugin.xml file. Here is an example of a plugin that does just that:

<?xml version="1.0" encoding="UTF-8"?>
<plugin xmlns="http://apache.org/cordova/ns/plugins/1.0"
    id="my-custom-intents-plugin"
    version="1.0.0">
    <name>My Custom Intents Plugin</name>
    <description>Add Intents to the AndroidManifest.xml</description>
    <license>MIT</license>
    <engines>
        <engine name="cordova" version=">=3.0.0" />
    </engines>
    <!-- android -->
    <platform name="android">
        <config-file target="AndroidManifest.xml" parent="/manifest/application">
            <activity android:configChanges="orientation|keyboardHidden|keyboard|screenSize|locale" android:label="@string/app_name" android:launchMode="singleTop" android:name="testa" android:theme="@android:style/Theme.Black.NoTitleBar">
                <intent-filter>
                    <action android:name="android.intent.action.SEND" />
                    <category android:name="android.intent.category.DEFAULT" />
                    <data android:mimeType="*/*" />
                </intent-filter>
            </activity>
        </config-file>
    </platform>
</plugin>

You can inspect the AndroidManifest.xml created in an APK, using apktool with the following command line:

$ apktool d my-app.apk
$ cd my-app
$ more AndroidManifest.xml

This technique exploits the config-file element that is described in the Cordova Plugin Specification docs and can also be used to add lines to iOS plist files. See the Cordova plugin documentation link for additional details.

Here is an example of such a plugin for modifying the iOS plist file, specifically for adding a BIS key to the plist file:

<?xml version="1.0" encoding="UTF-8"?>
<plugin xmlns="http://apache.org/cordova/ns/plugins/1.0"
    id="my-custom-bis-plugin"
    version="0.0.2">
    <name>My Custom BIS Plugin</name>
    <description>Add BIS info to iOS plist file.</description>
    <license>BSD-3</license>
    <preference name="BIS_KEY" />
    <engines>
        <engine name="cordova" version=">=3.0.0" />
    </engines>
    <!-- ios -->
    <platform name="ios">
        <config-file target="*-Info.plist" parent="CFBundleURLTypes">
            <array>
                <dict>
                    <key>ITSAppUsesNonExemptEncryption</key>
                    <true/>
                    <key>ITSEncryptionExportComplianceCode</key>
                    <string>$BIS_KEY</string>
                </dict>
            </array>
        </config-file>
    </platform>
</plugin>

How can I share my Intel XDK app build?

You can send a link to your project via an email invite from your project settings page. However, a login to your account is required to access the file behind the link. Alternatively, you can download the build from the build page, onto your workstation, and push that built image to some location from which you can send a link to that image.

Why does my iOS build fail when I am able to test it successfully on a device and the emulator?

Common reasons include:

  • The App ID specified in the project settings does not match the one you specified in Apple's developer portal.
  • The provisioning profile does not match the cert you uploaded. Double check with Apple's developer site that you are using the correct and current distribution cert and that the provisioning profile is still active. Download the provisioning profile again and add it to your project to confirm.
  • In the Project Build Settings, your App Name is invalid. It should contain only letters, numbers, and spaces.

How do I add multiple domains in Domain Access?

Here is the primary doc source for that feature.

If you need to insert multiple domain references, add the extra references in the intelxdk.config.additions.xml file. This StackOverflow entry provides a basic idea. You can also inspect the intelxdk.config.*.xml files that are automatically generated with each build to see the <access origin="xxx" /> line that is generated from what you provide in the "Domain Access" field of the "Build Settings" panel on the Projects tab.
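For illustration, a hedged sketch of what the extra whitelist entries might look like in intelxdk.config.additions.xml (the domains below are placeholders; the <access> element follows the standard Cordova config.xml whitelist syntax):

```xml
<!-- intelxdk.config.additions.xml: extra whitelist entries (example domains) -->
<access origin="https://api.example.com" />
<access origin="https://cdn.example.org" subdomains="true" />
```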

How do I build more than one app using the same Apple developer account?

On the Apple developer site, create a distribution certificate using the "iOS* Certificate Signing Request" key downloaded from the Intel XDK Build tab, but only for the first app. For subsequent apps, reuse the same certificate by importing it into the Build tab as you usually would.

How do I include search and spotlight icons as part of my app?

Please refer to this article in the Intel XDK documentation. Create an intelxdk.config.additions.xml file in your top-level directory (same location as the other intelxdk.*.config.xml files) and add the following lines to support icons in Settings and other areas of iOS*.

<!-- Spotlight Icon -->
<icon platform="ios" src="res/ios/icon-40.png" width="40" height="40" />
<icon platform="ios" src="res/ios/icon-40@2x.png" width="80" height="80" />
<icon platform="ios" src="res/ios/icon-40@3x.png" width="120" height="120" />
<!-- iPhone Spotlight and Settings Icon -->
<icon platform="ios" src="res/ios/icon-small.png" width="29" height="29" />
<icon platform="ios" src="res/ios/icon-small@2x.png" width="58" height="58" />
<icon platform="ios" src="res/ios/icon-small@3x.png" width="87" height="87" />
<!-- iPad Spotlight and Settings Icon -->
<icon platform="ios" src="res/ios/icon-50.png" width="50" height="50" />
<icon platform="ios" src="res/ios/icon-50@2x.png" width="100" height="100" />

For more information related to these configurations, visit http://cordova.apache.org/docs/en/3.5.0/config_ref_images.md.html#Icons%20and%20Splash%20Screens.

For accurate information related to iOS icon sizes, visit https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/MobileHIG/IconMatrix.html

NOTE: The iPhone 6 icons will only be available if iOS* 7 or 8 is the target.

Cordova iOS* 8 support JIRA tracker: https://issues.apache.org/jira/browse/CB-7043

Does Intel XDK support Modbus TCP communication?

No, since Modbus is a specialized protocol, you need to write either some JavaScript* or native code (in the form of a plugin) to handle the Modbus transactions and protocol.

How do I sign an Android* app using an existing keystore?

New with release 3088 of the Intel XDK, you may now import your existing keystore into Intel XDK using the new certificate manager that is built into the Intel XDK. Please read the initial paragraphs of Managing Certificates for your Intel XDK Account and the section titled "Import an Android Certificate Keystore" in that document, for details regarding how to do this.

If the above fails, please send an email to html5tools@intel.com requesting help. It is important that you send that email from the email address associated with your Intel XDK account.

How do I build separately for different Android* versions?

Under the Projects Panel, you can select the Target Android* version under the Build Settings collapsible panel. You can change this value and build your application multiple times to create numerous versions of your application that are targeted for multiple versions of Android*.

How do I display the 'Build App Now' button if my display language is not English?

If your display language is not English and the 'Build App Now' button is proving troublesome, you can change your display language to English. The English language pack can be installed via a Windows* update. Once you have installed it, go to Control Panel > Clock, Language and Region > Region and Language > Change Display Language.

How do I update my Intel XDK version?

When an Intel XDK update is available, an Update Version dialog box lets you download the update. After the download completes, a similar dialog lets you install it. If you did not download or install an update when prompted (or on older versions), click the package icon next to the orange (?) icon in the upper-right to download or install the update. The installation removes the previous Intel XDK version.

How do I import my existing HTML5 app into the Intel XDK?

If your project contains an Intel XDK project file (<project-name>.xdk) you should use the "Open an Intel XDK Project" option located at the bottom of the Projects List on the Projects tab (lower left of the screen, round green "eject" icon, on the Projects tab). This would be the case if you copied an existing Intel XDK project from another system or used a tool that exported a complete Intel XDK project.

If your project does not contain an Intel XDK project file (<project-name>.xdk) you must "import" your code into a new Intel XDK project. To import your project, use the "Start a New Project" option located at the bottom of the Projects List on the Projects tab (lower left of the screen, round blue "plus" icon, on the Projects tab). This will open the "Samples, Demos and Templates" page, which includes an option to "Import Your HTML5 Code Base." Point to the root directory of your project. The Intel XDK will attempt to locate a file named index.html in your project and will set the "Source Directory" on the Projects tab to point to the directory that contains this file.

If your imported project did not contain an index.html file, your project may be unstable. In that case, it is best to delete the imported project from the Intel XDK Projects tab ("x" icon in the upper right corner of the screen), rename your "root" or "main" html file to index.html and import the project again. Several components in the Intel XDK depend on the assumption that the main HTML file in your project is named index.html. See Introducing Intel® XDK Development Tools for more details.

It is highly recommended that your "source directory" be located as a sub-directory inside your "project directory." This ensures that non-source files are not included in your build package when building your application. If the "source directory" and "project directory" are the same, the result is longer upload times to the build server and unnecessarily large application executables returned by the build system. See the following images for the recommended project file layout.
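As an illustration, a typical layout looks like the following (directory and file names are examples only):

```
my-project/                        <- project directory
    my-project.xdk                 <- Intel XDK project file
    my-project.xdke
    intelxdk.config.additions.xml
    www/                           <- source directory
        index.html
        css/
        js/
        images/
```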

I am unable to login to App Preview with my Intel XDK password.

On some devices you may have trouble entering your Intel XDK login password directly on the device in the App Preview login screen. In particular, sometimes you may have trouble with the first one or two letters getting lost when entering your password.

Try the following if you are having such difficulties:

  • Reset your password, using the Intel XDK, to something short and simple.

  • Confirm that this new short and simple password works with the XDK (logout and login to the Intel XDK).

  • Confirm that this new password works with the Intel Developer Zone login.

  • Make sure you have the most recent version of Intel App Preview installed on your devices. Go to the store on each device to confirm you have the most recent copy of App Preview installed.

  • Try logging into Intel App Preview on each device with this short and simple password. Check the "show password" box so you can see your password as you type it.

If the above works, it confirms that you can log into your Intel XDK account from App Preview (because App Preview and the Intel XDK go to the same place to authenticate your login). When the above works, you can go back to the Intel XDK and reset your password to something else, if you do not like the short and simple password you used for the test.

How do I completely uninstall the Intel XDK from my system?

Take the following steps to completely uninstall the XDK from your Windows system:

  • From the Windows Control Panel, remove the Intel XDK, using the Windows uninstall tool.

  • Then:
    > cd %LocalAppData%\Intel\XDK
    > del *.* /s/q

  • Then:
    > cd %LocalAppData%\XDK
    > copy global-settings.xdk %UserProfile%
    > del *.* /s/q
    > copy %UserProfile%\global-settings.xdk .

  • Then:
    -- Go to xdk.intel.com and select the download link.
    -- Download and install the new XDK.

To do the same on a Linux or Mac system:

  • On a Linux machine, run the uninstall script, typically /opt/intel/XDK/uninstall.sh.
     
  • Remove the directory into which the Intel XDK was installed.
    -- Typically /opt/intel or your home (~) directory on a Linux machine.
    -- Typically in the /Applications/Intel XDK.app directory on a Mac.
     
  • Then:
    $ find ~ -name global-settings.xdk
    $ cd <result-from-above> (for example ~/Library/Application Support/XDK/ on a Mac)
    $ cp global-settings.xdk ~
    $ rm -Rf *
    $ mv ~/global-settings.xdk .

     
  • Then:
    -- Go to xdk.intel.com and select the download link.
    -- Download and install the new XDK.

Is there a tool that can help me highlight syntax issues in Intel XDK?

Yes, you can use the various linting tools that can be added to the Brackets editor to review any syntax issues in your HTML, CSS and JS files. Go to the "File > Extension Manager..." menu item and add the following extensions: JSHint, CSSLint, HTMLHint, XLint for Intel XDK. Then, review your source files by monitoring the small yellow triangle at the bottom of the edit window (a green check mark indicates no issues).

How do I delete built apps and test apps from the Intel XDK build servers?

You can manage them by logging into: https://appcenter.html5tools-software.intel.com/csd/controlpanel.aspx. This functionality will eventually be available within Intel XDK after which access to app center will be removed.

I need help with the App Security API plugin; where do I find it?

Visit the primary documentation book for the App Security API and see this forum post for some additional details.

When I install my app or use the Debug tab Avast antivirus flags a possible virus, why?

If you are receiving a "Suspicious file detected - APK:CloudRep [Susp]" message from Avast anti-virus installed on your Android device, it is because you are side-loading the app (or the Intel XDK debug modules) onto your device (using a download link after building, or by using the Debug tab to debug your app), or because your app was installed from an "untrusted" Android store. See the following official explanation from Avast:

Your application was flagged by our cloud reputation system. "Cloud rep" is a new feature of Avast Mobile Security, which flags apks when the following conditions are met:

  1. The file is not prevalent enough; meaning not enough users of Avast Mobile Security have installed your APK.
  2. The source is not an established market (Google Play is an example of an established market).

If you distribute your app using Google Play (or any other trusted market) your users should not see any warning from Avast.

Following are some of the Avast anti-virus notification screens you might see on your device. All of these are perfectly normal. They appear because you must enable the installation of "non-market" apps in order to use your device for debugging, and because the App IDs associated with your never-published app (or with the custom debug modules that the Debug tab in the Intel XDK builds and installs on your device) will not be found in an "established" (aka "trusted") market, such as Google Play.

If you choose to ignore the "Suspicious app activity!" threat you will not receive a warning for that debug module any longer. It will show up in Avast's 'ignored issues' list. Updates to an existing, ignored, custom debug module should continue to be ignored by Avast. However, new custom debug modules (due to a new project App ID or a new version of Crosswalk selected in your project's Build Settings) will result in a new warning from the Avast anti-virus tool.

How do I add a Brackets extension to the editor that is part of the Intel XDK?

The number of Brackets extensions provided in the built-in edition of the Brackets editor is limited to ensure stability of the Intel XDK product. Not all extensions are compatible with the edition of Brackets that is embedded within the Intel XDK, and adding incompatible extensions can cause the Intel XDK to quit working.

Despite this warning, there are useful extensions that have not been included in the editor and can be added to the Intel XDK. Adding them is temporary: each time you update (or reinstall) the Intel XDK you will have to re-add your Brackets extensions. To add a Brackets extension, use the following procedure:

  • exit the Intel XDK
  • download a ZIP file of the extension you wish to add
  • on Windows, unzip the extension here:
    %LocalAppData%\Intel\XDK\xdk\brackets\b\extensions\dev
  • on Mac OS X, unzip the extension here:
    /Applications/Intel\ XDK.app/Contents/Resources/app.nw/brackets/b/extensions/dev
  • start the Intel XDK

Note that the locations given above are subject to change with new releases of the Intel XDK.

Why does my app or game require so many permissions on Android when built with the Intel XDK?

When you build your HTML5 app using the Intel XDK for Android or Android-Crosswalk you are creating a Cordova app. It may seem like you're not building a Cordova app, but you are. In order to package your app so it can be distributed via an Android store and installed on an Android device, it needs to be built as a hybrid app. The Intel XDK uses Cordova to create that hybrid app.

A pure Cordova app requires the NETWORK permission; it is needed to "jump" between your HTML5 environment and the native Android environment. Additional permissions will be added by any Cordova plugins you include with your application; which permissions are included is a function of what each plugin does and requires.

Crosswalk for Android builds also require the NETWORK permission, because the Crosswalk image built by the Intel XDK includes support for Cordova. In addition, current versions of Crosswalk (12 and 14 at the time this FAQ was written) also require NETWORK STATE and WIFI STATE. There is an extra permission in some versions of Crosswalk (WRITE EXTERNAL STORAGE) that is only needed by the shared model library of Crosswalk; we have asked the Crosswalk project to remove this permission in a future Crosswalk version.

If you are seeing more than the following four permissions in your XDK-built Crosswalk app:

  • android.permission.INTERNET
  • android.permission.ACCESS_NETWORK_STATE
  • android.permission.ACCESS_WIFI_STATE
  • android.permission.WRITE_EXTERNAL_STORAGE

then you are seeing permissions that have been added by some plugins. Each plugin is different, so there is no hard rule of thumb. The two "default" core Cordova plugins that are added by the Intel XDK blank templates (device and splash screen) do not require any Android permissions.

BTW: the permission list above comes from a Crosswalk 14 build. Crosswalk 12 builds do not include the last permission; it was added when the Crosswalk project introduced the shared model library option, which started with Crosswalk 13 (the Intel XDK does not support Crosswalk 13 builds).

How do I make a copy of an existing Intel XDK project?

If you just need to make a backup copy of an existing project, and do not plan to open that backup copy as a project in the Intel XDK, do the following:

  • Exit the Intel XDK.
  • Copy the entire project directory:
    • on Windows, use File Explorer to "right-click" and "copy" your project directory, then "right-click" and "paste"
    • on Mac use Finder to "right-click" and then "duplicate" your project directory
    • on Linux, open a terminal window, "cd" to the folder that contains your project, and type "cp -a old-project/ new-project/" at the terminal prompt (where "old-project/" is the folder name of your existing project that you want to copy and "new-project/" is the name of the new folder that will contain a copy of your existing project)

If you want to use an existing project as the starting point of a new project in the Intel XDK, follow the process described below. It ensures that the build system does not confuse the ID in your old project with the one stored in your new project. If you do not follow this procedure you will end up with multiple projects using the same project ID (a special GUID that is stored inside the Intel XDK <project-name>.xdk file in the root directory of your project). Each project in your account must have a unique project ID.

  • Exit the Intel XDK.
  • Make a copy of your existing project using the process described above.
  • Inside the new project that you made (that is, your new copy of your old project), make copies of the <project-name>.xdk file and <project-name>.xdke files and rename those copies to something like project-new.xdk and project-new.xdke (anything you like, just something different than the original project name, preferably the same name as the new project folder in which you are making this new project).
  • Using a TEXT EDITOR (only) (such as Notepad or Sublime or Brackets or some other TEXT editor), open your new "project-new.xdk" file (whatever you named it) and find the projectGuid line, it will look something like this:
    "projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67",
  • Change the "GUID" to all zeroes, like this: "00000000-0000-0000-0000-000000000000"
  • Save the modified "project-new.xdk" file.
  • Open the Intel XDK.
  • Go to the Projects tab.
  • Select "Open an Intel XDK Project" (the green button at the bottom left of the Projects tab).
  • To open this new project, locate the new "project-new.xdk" file inside the new project folder you copied above.
  • Don't forget to change the App ID in your new project. This is necessary to avoid conflicts with the project you copied from, in the store and when side-loading onto a device.
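If you prefer the command line to a text editor, the projectGuid edit above can also be scripted with sed. A minimal sketch; the file name and GUID below are examples created just for this demo, not taken from a real project:

```shell
# Create a sample .xdk file for the demo (a real file has many more fields).
cat > project-new.xdk <<'EOF'
{
  "projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67",
  "sourceDirectory": ""
}
EOF

# Zero out whatever GUID is present.
sed -i.bak -E 's/"projectGuid": "[0-9a-fA-F-]+"/"projectGuid": "00000000-0000-0000-0000-000000000000"/' project-new.xdk
grep projectGuid project-new.xdk
```

The -i.bak option keeps a backup copy of the original file, which is a good habit when editing project files in place.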

My project does not include a www folder. How do I fix it so it includes a www or source directory?

The Intel XDK HTML5 and Cordova project file structures are meant to mimic a standard Cordova project. In a Cordova (or PhoneGap) project there is a subdirectory (or folder) named www that contains all of the HTML5 source code and asset files that make up your application. For best results, it is advised that you follow this convention of putting your source inside a "source directory" inside your project folder.

This most commonly happens as the result of exporting a project from an external tool, such as Construct2, or as the result of importing an existing HTML5 web app that you are converting into a hybrid mobile application (e.g., an Intel XDK Cordova app). If you would like to convert an existing Intel XDK project into this format, follow the steps below:

  • Exit the Intel XDK.
  • Copy the entire project directory:
    • on Windows, use File Explorer to "right-click" and "copy" your project directory, then "right-click" and "paste"
    • on Mac use Finder to "right-click" and then "duplicate" your project directory
    • on Linux, open a terminal window, "cd" to the folder that contains your project, and type "cp -a old-project/ new-project/" at the terminal prompt (where "old-project/" is the folder name of your existing project that you want to copy and "new-project/" is the name of the new folder that will contain a copy of your existing project)
  • Create a "www" directory inside the new duplicate project you just created above.
  • Move your index.html and other source and asset files to the "www" directory you just created -- this is now your "source" directory, located inside your "project" directory (do not move the <project-name>.xdk and xdke files and any intelxdk.config.*.xml files, those must stay in the root of the project directory)
  • Inside the new project that you made above (by making a copy of the old project), rename the <project-name>.xdk file and <project-name>.xdke files to something like project-copy.xdk and project-copy.xdke (anything you like, just something different than the original project, preferably the same name as the new project folder in which you are making this new project).
  • Using a TEXT EDITOR (only) (such as Notepad or Sublime or Brackets or some other TEXT editor), open the new "project-copy.xdk" file (whatever you named it) and find the line named projectGuid, it will look something like this:
    "projectGuid": "a863c382-ca05-4aa4-8601-375f9f209b67",
  • Change the "GUID" to all zeroes, like this: "00000000-0000-0000-0000-000000000000"
  • A few lines down find: "sourceDirectory": "",
  • Change it to this: "sourceDirectory": "www",
  • Save the modified "project-copy.xdk" file.
  • Open the Intel XDK.
  • Go to the Projects tab.
  • Select "Open an Intel XDK Project" (the green button at the bottom left of the Projects tab).
  • To open this new project, locate the new "project-copy.xdk" file inside the new project folder you copied above.
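The sourceDirectory edit in the steps above can likewise be scripted instead of using a text editor. A minimal sed sketch; the file name is an example created just for this demo:

```shell
# Sample file for the demo; a real .xdk file has many more fields.
cat > project-copy.xdk <<'EOF'
{
  "projectGuid": "00000000-0000-0000-0000-000000000000",
  "sourceDirectory": ""
}
EOF

# Point the project at the new www source directory.
sed -i.bak 's/"sourceDirectory": ""/"sourceDirectory": "www"/' project-copy.xdk
grep sourceDirectory project-copy.xdk
```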

Can I install more than one copy of the Intel XDK onto my development system?

Yes, you can install more than one version onto your development system. However, you cannot run multiple instances of the Intel XDK at the same time. Be aware that new releases sometimes change the project file format, so it is a good idea, in these cases, to make a copy of your project if you need to experiment with a different version of the Intel XDK. See the instructions in a FAQ entry above regarding how to make a copy of your Intel XDK project.

Follow the instructions in this forum post to install more than one copy of the Intel XDK onto your development system.

On Apple OS X* and Linux* systems, does the Intel XDK need the OpenSSL* library installed?

Yes. Several features of the Intel XDK require the OpenSSL library, which typically comes pre-installed on Linux and OS X systems. If the Intel XDK reports that it could not find libssl, go to https://www.openssl.org to download and install it.

I have a web application that I would like to distribute in app stores without major modifications. Is this possible using the Intel XDK?

Yes, if you have a true web app or “client app” that only uses HTML, CSS and JavaScript, it is usually not too difficult to convert it to a Cordova hybrid application (this is what the Intel XDK builds when you create an HTML5 app). If you rely heavily on PHP or other server scripting languages embedded in your pages you will have more work to do. Because your Cordova app is not associated with a server, you cannot rely on server-based programming techniques; instead, you must rewrite any such code to use RESTful APIs that your app interacts with using, for example, AJAX calls.

What is the best training approach to using the Intel XDK for a newbie?

First, become well-versed in the art of client web apps, apps that rely only on HTML, CSS and JavaScript and utilize RESTful APIs to talk to network services. With that you will have mastered 80% of the problem. After that, it is simply a matter of understanding how Cordova plugins are able to extend the JavaScript API for access to features of the platform. For HTML5 training there are many sites providing tutorials. It may also help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, which will help you understand some of the differences between developing for a traditional server-based environment and developing for the Intel XDK hybrid Cordova app environment.

What is the best platform to start building an app with the Intel XDK? And what are the important differences between the Android, iOS and other mobile platforms?

There is no one most important difference between the Android, iOS and other platforms. It is important to understand that the HTML5 runtime engine that executes your app on each platform will vary as a function of the platform. Just as there are differences between Chrome and Firefox and Safari and Internet Explorer, there are differences between iOS 9 and iOS 8 and Android 4 and Android 5, etc. Android has the most significant differences between vendors and versions of Android. This is one of the reasons the Intel XDK offers the Crosswalk for Android build option, to normalize and update the Android issues.

In general, if you get your app working well on Android (or Crosswalk for Android) first, you will have fewer issues to deal with when you move on to the iOS and Windows platforms. In addition, the Android platform has the most flexible and useful debug options, making it the easiest platform for debugging and testing your app.

Is my password encrypted and why is it limited to fifteen characters?

Yes, your password is stored encrypted and is managed by https://signin.intel.com. Your Intel XDK userid and password can also be used to log into the Intel XDK forum as well as the Intel Developer Zone. The Intel XDK does not store or manage your userid and password.

The rules regarding allowed userids and passwords are answered on this Sign In FAQ page, where you can also find help on recovering and changing your password.

Why does the Intel XDK take a long time to start on Linux or Mac?

...and why am I getting this error message? "Attempt to contact authentication server is taking a long time. You can wait, or check your network connection and try again."

At startup, the Intel XDK attempts to automatically determine the proxy settings for your machine. Unfortunately, on some system configurations it is unable to reliably detect your system proxy settings. As an example, you might see something like this image when starting the Intel XDK.

On some systems you can get around this problem by setting some proxy environment variables and then starting the Intel XDK from a command line that has those environment variables configured. To set those environment variables, use commands similar to the following:

$ export no_proxy="localhost,127.0.0.1/8,::1"
$ export NO_PROXY="localhost,127.0.0.1/8,::1"
$ export http_proxy=http://proxy.mydomain.com:123/
$ export HTTP_PROXY=http://proxy.mydomain.com:123/
$ export https_proxy=http://proxy.mydomain.com:123/
$ export HTTPS_PROXY=http://proxy.mydomain.com:123/

IMPORTANT! The name of your proxy server and the port (or ports) that your proxy server requires will be different than those shown in the example above. Please consult with your IT department to find out what values are appropriate for your site. Intel has no way of knowing what configuration is appropriate for your network.

If you use the Intel XDK in multiple locations (at work and at home), you may have to change the proxy settings before starting the Intel XDK after switching to a new network location. For example, many work networks use a proxy server, but most home networks do not require such a configuration. In that case, you need to be sure to "unset" the proxy environment variables before starting the Intel XDK on a non-proxy network.
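For example, the proxy variables set above can be cleared before starting the Intel XDK on a non-proxy network. A minimal sketch, using the same shell as the examples above:

```shell
# Clear any proxy settings carried over from a work environment before
# starting the Intel XDK on a network that has no proxy server.
unset http_proxy HTTP_PROXY https_proxy HTTPS_PROXY
unset no_proxy NO_PROXY
```

Note that `unset` only affects the current shell session, so run it in the same terminal window you use to start the Intel XDK.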

After you have successfully configured your proxy environment variables, you can start the Intel XDK manually, from the command-line.

On a Mac, where the Intel XDK is installed in the default location, type the following (from a terminal window that has the above environment variables set):

$ open /Applications/Intel\ XDK.app/

On a Linux machine, assuming the Intel XDK has been installed in the ~/intel/XDK directory, type the following (from a terminal window that has the above environment variables set):

$ ~/intel/XDK/xdk.sh &

In the Linux case, adjust the directory name that points to the xdk.sh file to match your installation; the example above assumes a local install into the ~/intel/XDK directory. Because Linux installations offer more choice regarding the installation directory, you will need to adapt the command to your particular system and install directory.

How do I generate a P12 file on a Windows machine?

See these articles:

How do I change the default dir for creating new projects in the Intel XDK?

You can change the default new project location manually by modifying a field in the global-settings.xdk file. Locate the global-settings.xdk file on your system (the precise location varies as a function of the OS) and find this JSON object inside that file:

"projects-tab": {"defaultPath": "/Users/paul/Documents/XDK","LastSortType": "descending|Name","lastSortType": "descending|Opened","thirdPartyDisclaimerAcked": true
  },

The example above came from a Mac. On a Mac the global-settings.xdk file is located in the "~/Library/Application Support/XDK" directory.

On a Windows machine the global-settings.xdk file is normally found in the "%LocalAppData%\XDK" directory. The part you are looking for will look something like this:

"projects-tab": {"thirdPartyDisclaimerAcked": false,"LastSortType": "descending|Name","lastSortType": "descending|Opened","defaultPath": "C:\\Users\\paul/Documents"
  },

Obviously, it's the defaultPath part you want to change.
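If you prefer not to hand-edit the JSON, here is a command-line sketch. It assumes the jq tool is installed and that the file contains valid JSON; the sample file contents and the new path below are hypothetical stand-ins for your real settings file:

```shell
# Create a sample global-settings.xdk to operate on (hypothetical contents).
printf '{"projects-tab": {"defaultPath": "/Users/paul/Documents/XDK"}}\n' > global-settings.xdk

# Always keep a backup before editing, then rewrite only the defaultPath field.
cp global-settings.xdk global-settings.xdk.bak
jq '."projects-tab".defaultPath = "/Users/paul/Projects"' \
    global-settings.xdk.bak > global-settings.xdk

cat global-settings.xdk
```

Because jq parses and re-emits the whole document, the output is guaranteed to be valid JSON, which avoids the hand-editing hazards described above.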

BE CAREFUL WHEN YOU EDIT THE GLOBAL-SETTINGS.XDK FILE!! You've been warned...

Make sure the result is proper JSON when you are done, or it may cause your XDK to cough and hack loudly. Make a backup copy of global-settings.xdk before you start, just in case.

Where can I find a list of recent and upcoming webinars?

How can I change the email address associated with my Intel XDK login?

Login to the Intel Developer Zone with your Intel XDK account userid and password and then locate your "account dashboard." Click the "pencil icon" next to your name to open the "Personal Profile" section of your account, where you can edit your "Name & Contact Info," including the email address associated with your account, under the "Private" section of your profile.

What network addresses must I enable in my firewall to ensure the Intel XDK will work on my restricted network?

Normally, access to the external servers that the Intel XDK uses is handled automatically by your proxy server. However, if you are working in an environment that has restricted Internet access and you need to provide your IT department with a list of URLs that you need access to in order to use the Intel XDK, then please provide them with the following list of domain names:

  • appcenter.html5tools-software.intel.com (for communication with the build servers)
  • s3.amazonaws.com (for downloading sample apps and built apps)
  • download.xdk.intel.com (for getting XDK updates)
  • debug-software.intel.com (for using the Test tab weinre debug feature)
  • xdk-feed-proxy.html5tools-software.intel.com (for receiving the tweets in the upper right corner of the XDK)
  • signin.intel.com (for logging into the XDK)
  • sfederation.intel.com (for logging into the XDK)

Normally this should be handled by your network proxy (if you're on a corporate network) or should not be an issue if you are working on a typical home network.

I cannot create a login for the Intel XDK, how do I create a userid and password to use the Intel XDK?

If you have downloaded and installed the Intel XDK but are having trouble creating a login, you can create the login outside the Intel XDK. To do this, go to the Intel Developer Zone and push the "Join Today" button. After you have created your Intel Developer Zone login you can return to the Intel XDK and use that userid and password to login to the Intel XDK. This same userid and password can also be used to login to the Intel XDK forum.

Installing the Intel XDK on Windows fails with a "Package signature verification failed." message.

If you receive a "Package signature verification failed" message (see image below) when installing the Intel XDK on your system, it is likely due to one of the following two reasons:

  • Your system does not have a properly installed "root certificate" file, which is needed to confirm that the install package is good.
  • The install package is corrupt and failed the verification step.

The first case can happen if you are attempting to install the Intel XDK on an unsupported version of Windows. The Intel XDK is only supported on Microsoft Windows 7 and higher. If you attempt to install on Windows Vista (or earlier) you may see this verification error. The workaround is to install the Intel XDK on a Windows 7 or greater machine.

The second case is likely due to a corruption of the install package during download or due to tampering. The workaround is to re-download the install package and attempt another install.

If you are installing on a Windows 7 (or greater) machine and you see this message it is likely due to a missing or bad root certificate on your system. To fix this you may need to start the "Certificate Propagation" service. Open the Windows "services.msc" panel and then start the "Certificate Propagation" service. Additional links related to this problem can be found here > https://technet.microsoft.com/en-us/library/cc754841.aspx

See this forum thread for additional help regarding this issue > https://software.intel.com/en-us/forums/intel-xdk/topic/603992

Troubles installing the Intel XDK on a Linux or Ubuntu system, which option should I choose?

Choose the local user option, not root or sudo, when installing the Intel XDK on your Linux or Ubuntu system. This is the most reliable and trouble-free option and is the default installation option. It ensures that the Intel XDK has all the permissions necessary to execute properly on your Linux system. The Intel XDK will be installed in a subdirectory of your home (~) directory.

Inactive account/ login issue/ problem updating an APK in store, How do I request account transfer?

As of June 26, 2015 we migrated all Intel XDK accounts to the more secure intel.com login system (the same login system you use to access this forum).

We have migrated nearly all active users to the new login system. Unfortunately, there are a few active user accounts that we could not automatically migrate to intel.com, primarily because the intel.com login system does not allow the use of some characters in userids that were allowed in the old login system.

If you did not use the Intel XDK for a long time prior to June 2015, your account may not have been automatically migrated. Such "inactive" accounts must be migrated manually. Try logging into the Intel XDK with your old userid and password to determine whether it still works. If you cannot log into your existing Intel XDK account and still need access to it, please send a message to html5tools@intel.com that includes your userid and the email address associated with that userid, so we can guide you through the steps required to reactivate your old account.

Alternatively, you can create a new Intel XDK account. If you have submitted an app to the Android store from your old account you will need access to that old account to retrieve the Android signing certificates in order to upgrade that app on the Android store; in that case, send an email to html5tools@intel.com with your old account username and email and new account information.

Connection Problems? -- Intel XDK SSL certificates update

On January 26, 2016 we updated the SSL certificates on our back-end systems to SHA2 certificates. The existing certificates were due to expire in February of 2016. We have also disabled support for obsolete protocols.

If you are experiencing persistent connection issues (since Jan 26, 2016), please post a problem report on the forum and include in your problem report:

  • the operation that failed
  • the version of your XDK
  • the version of your operating system
  • your geographic region
  • and a screen capture

How do I resolve build failure: "libpng error: Not a PNG file"?  

If you are experiencing build failures with CLI 5 Android builds, and the detailed error log includes a message similar to the following:

Execution failed for task ':mergeArmv7ReleaseResources'.> Error: Failed to run command: /Developer/android-sdk-linux/build-tools/22.0.1/aapt s -i .../platforms/android/res/drawable-land-hdpi/screen.png -o .../platforms/android/build/intermediates/res/armv7/release/drawable-land-hdpi-v4/screen.png

Error Code: 42

Output: libpng error: Not a PNG file

You need to change the format of your icon and/or splash screen images to PNG format.

The error message refers to a file named "screen.png": each of your splash screen images is renamed to this name before being moved into the build project resource directories. In this case, JPG images were supplied as splash screens rather than PNG images, so the renamed files failed the build system's validation.

Convert your splash screen images to PNG format. Renaming JPG images to PNG will not work! You must convert your JPG images into PNG format images using an appropriate image editing tool. The Intel XDK does not provide any such conversion tool.

Beginning with Cordova CLI 5, all icons and splash screen images must be supplied in PNG format. This applies to all supported platforms. This is an undocumented "new feature" of the Cordova CLI 5 build system that was implemented by the Apache Cordova project.
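As a sketch of a genuine format conversion (assuming ImageMagick and the `file` utility are installed; the file names below are hypothetical):

```shell
# Create a sample JPEG splash screen to convert (hypothetical file name).
convert -size 320x480 xc:white splash-test.jpg

# Convert it to a real PNG; merely renaming .jpg to .png is NOT enough.
convert splash-test.jpg splash-test.png

# Confirm the result really is PNG data, not a renamed JPEG.
file splash-test.png
```

Any image editor that performs a true re-encode (GIMP, Photoshop, Preview, etc.) works equally well; the point is that the file contents, not just the extension, must be PNG.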

Why do I get a "Parse Error" when I try to install my built APK on my Android device?

Because you have built an "unsigned" Android APK. You must click the "signed" box in the Android Build Settings section of the Projects tab if you want to install an APK on your device. The only reason you would choose to create an "unsigned" APK is if you need to sign it manually. This is very rare and not the normal situation.

My converted legacy keystore does not work. Google Play is rejecting my updated app.

The keystore you converted when you updated to 3088 (now 3240 or later) is the same keystore you were using in 2893. When you upgraded to 3088 (or later) and "converted" your legacy keystore, you re-signed and renamed your legacy keystore and it was transferred into a database to be used with the Intel XDK certificate management tool. It is still the same keystore, but with an alias name and password assigned by you and accessible directly by you through the Intel XDK.

If you kept the converted legacy keystore in your account following the conversion you can download that keystore from the Intel XDK for safe keeping (do not delete it from your account or from your system). Make sure you keep track of the new password(s) you assigned to the converted keystore.

There are two problems we have experienced with converted legacy keystores at the time of the 3088 release (April, 2016):

  • Foreign (non-ASCII) characters used in the new alias name and passwords were being corrupted.
  • Final signing of your APK by the build system was being done with RSA256 rather than SHA1.

Both of the above items have been resolved and should no longer be an issue.

If you are currently unable to complete a build with your converted legacy keystore (i.e., builds fail when you use the converted legacy keystore but succeed when you use a new keystore), the first bullet above is the likely reason your converted keystore is not working. In that case we can reset your converted keystore and give you the option to convert it again; request a reset of your legacy keystore by filling out this form. To be certain the second conversion succeeds, use only 7-bit ASCII characters in the alias name and password(s) you assign.

IMPORTANT: using the legacy certificate to build your Android app is ONLY necessary if you have already published an app to an Android store and need to update that app. If you have never published an app to an Android store using the legacy certificate you do not need to concern yourself with resetting and reconverting your legacy keystore. It is easier, in that case, to create a new Android keystore and use that new keystore.

If you ARE able to successfully build your app with the converted legacy keystore, but your updated app (in the Google store) does not install on some older Android 4.x devices (typically a subset of Android 4.0-4.2 devices), the second bullet cited above is likely the reason for the problem. The solution, in that case, is to rebuild your app and resubmit it to the store (that problem was a build-system problem that has been resolved).

How can I have others beta test my app using Intel App Preview?

Apps that you sync to your Intel XDK account, using the Test tab's green "Push Files" button, can only be accessed by logging into Intel App Preview with the same Intel XDK account credentials that you used to push the files to the cloud. In other words, you can only download and run your app for testing with Intel App Preview if you log into the same account that you used to upload that test app. This restriction applies to downloading your app into Intel App Preview via the "Server Apps" tab, at the bottom of the Intel App Preview screen, or by scanning the QR code displayed on the Intel XDK Test tab using the camera icon in the upper right corner of Intel App Preview.

If you want to allow others to test your app, using Intel App Preview, it means you must use one of two options:

  • give them your Intel XDK userid and password
  • create an Intel XDK "test account" and provide your testers with that userid and password

For security's sake, we highly recommend the second option (create an Intel XDK "test account").

A "test account" is simply a second Intel XDK account that you do not plan to use for development or builds. Do not use the same email address for your "test account" as you are using for your main development account. You should use a "throw away" email address for that "test account" (an email address that you do not care about).

Assuming you have created an Intel XDK "test account" and have instructed your testers to download and install Intel App Preview; have provided them with your "test account" userid and password; and you are ready to have them test:

  • sign out of your Intel XDK "development account" (using the little "man" icon in the upper right)
  • sign into your "test account" (again, using the little "man" icon in the Intel XDK toolbar)
  • make sure you have selected the project that you want users to test, on the Projects tab
  • goto the Test tab
  • make sure "MOBILE" is selected (upper left of the Test tab)
  • push the green "PUSH FILES" button on the Test tab
  • log out of your "test account"
  • log into your development account

Then, tell your beta testers to log into Intel App Preview with your "test account" credentials and instruct them to choose the "Server Apps" tab at the bottom of the Intel App Preview screen. From there they should see the name of the app you synced using the Test tab and can start it simply by touching the app name (followed by the big blue and white "Launch This App" button). Starting the app this way is actually easier than sending them a copy of the QR code; the QR code is very dense and can be hard to read, depending on the quality of the camera in their device.

Note that when running your test app inside of Intel App Preview, testers cannot exercise any features associated with third-party plugins, only core Cordova plugins. Thus, you need to ensure that the parts of your app that depend on non-core Cordova plugins have been disabled or have exception handlers to prevent your app from crashing or freezing.

I'm having trouble making Google Maps work with my Intel XDK app. What can I do?

There are many reasons that can cause your attempt to use Google Maps to fail. Mostly it is due to the fact that you need to download the Google Maps API (JavaScript library) at runtime to make things work. However, there is no guarantee that you will have a good network connection, so if you do it the way you are used to doing it, in a browser...

<script src="https://maps.googleapis.com/maps/api/js?key=API_KEY&sensor=true"></script>

...you may get yourself into trouble, in an Intel XDK Cordova app. See Loading Google Maps in Cordova the Right Way for an excellent tutorial on why this is a problem and how to deal with it. Also, it may help to read Five Useful Tips on Getting Started Building Cordova Mobile Apps with the Intel XDK, especially item #3, to get a better understanding of why you shouldn't use the "browser technique" you're familiar with.

An alternative is to use a mapping tool that allows you to include the JavaScript directly in your app, rather than downloading it over the network each time your app starts. Several Intel XDK developers have reported very good luck with the open-source JavaScript library named LeafletJS, which uses OpenStreetMap as its map data source.

You can also search the Cordova Plugin Database for Cordova plugins that implement mapping features, in some cases using native SDKs and libraries.


Tutorial: Enumerating Modules and Camera Devices


Abstract

Enumerating the feature modules and the attached camera devices is an important part of the selection logic in any application that must choose among multiple devices. This tutorial presents a method for enumerating modules and devices so that an appropriate selection can be made.

Tutorial

Finding the camera devices that are attached to a system along with the capabilities they provide can be simplified by enumerating the devices. The Intel® RealSense™ SDK 2016 R1 provides a mechanism via PXCSession::ImplDesc and PXCCapture::DeviceInfo that enables developers to get back information such as a device-friendly name, supported modules, and more.

This tutorial demonstrates the Intel RealSense SDK classes required for initialization and subsequent module and device enumeration.

Initialization

The following usage example initializes the core Intel RealSense SDK handles so that a PXCSession can be created at any point in the application.

int main(int argc, char *argv[])
try
{
    PXCSession *pSession;
    PXCSession::ImplDesc *pDesc;
    PXCCapture *pCapture;
    PXCSenseManager *pSenseManager;

    // Initialize
    pSession = PXCSession::CreateInstance();
    pDesc = new PXCSession::ImplDesc();

    pDesc->group = PXCSession::ImplGroup::IMPL_GROUP_SENSOR;
    pDesc->subgroup = PXCSession::ImplSubgroup::IMPL_SUBGROUP_VIDEO_CAPTURE;

Enumeration

Enumeration is performed by iterating over the modules described by PXCSession::ImplDesc and retrieving each friendly name, and then by iterating over each PXCCapture::DeviceInfo to query the device. In this way it is possible to capture the modules and the device capabilities for each camera connected to the system.

    // Enumerate modules and devices
    // Outer loop: iterate over the available modules
    for (int m = 0; ; m++)
    {
        PXCSession::ImplDesc desc2;
        if (pSession->QueryImpl(pDesc, m, &desc2) < pxcStatus::PXC_STATUS_NO_ERROR)
        {
            break;
        }
        // Convert the wide-character friendly name for console output
        wstring ws(desc2.friendlyName);
        string str(ws.begin(), ws.end());
        std::cout << "Module[" << m << "]:  " << str << std::endl;

        PXCCapture *pCap;
        if (pSession->CreateImpl<PXCCapture>(&desc2, &pCap) < pxcStatus::PXC_STATUS_NO_ERROR)
        {
            continue;
        }

        // Inner loop: iterate over the devices exposed by this module
        for (int d = 0; ; d++)
        {
            PXCCapture::DeviceInfo dinfo;
            if (pCap->QueryDeviceInfo(d, &dinfo) < pxcStatus::PXC_STATUS_NO_ERROR)
            {
                break;
            }
            wstring wsName(dinfo.name);
            string strName(wsName.begin(), wsName.end());
            std::cout << "Device[" << d << "]:  " << strName << std::endl;
        }
        pCap->Release();
    }

Note the outer loop, which iterates over the available modules, and the inner loop, which iterates over the devices attached through each module.

Conclusion

Enumerating camera devices is an important step in any application that must select a specific camera when several are attached to a system. This tutorial presented a straightforward camera enumeration scheme that developers can use to build the necessary selection logic once the particular camera device and its capabilities are identified. A full usage example can be found in Appendix 1 of this tutorial.

About the Author

Rudy Cazabon is a member of the Intel Software Innovator program and is an avid technologist in the area of graphics for games and computer vision.

Resources

Intel RealSense SDK Documentation

A Developer’s Guide To Intel® RealSense™ Camera Detection Methods

Coding for 2 at Once - Intel RealSense User Facing Cameras

Appendix 1 – Sample source code

#include <windows.h>

#include <iostream>
#include <string>
#include <cstdio>
//
#include "pxcbase.h"
#include "pxcsensemanager.h"
#include "pxcmetadata.h"
#include "service/pxcsessionservice.h"

#include "pxccapture.h"
#include "pxccapturemanager.h"

using namespace std;

int main(int argc, char *argv[])
try
{
    PXCSession *pSession;
    PXCSession::ImplDesc *pDesc;
    PXCCapture *pCapture;
    PXCSenseManager *pSenseManager;

    // Initialize
    pSession = PXCSession::CreateInstance();
    pDesc = new PXCSession::ImplDesc();

    pDesc->group = PXCSession::ImplGroup::IMPL_GROUP_SENSOR;
    pDesc->subgroup = PXCSession::ImplSubgroup::IMPL_SUBGROUP_VIDEO_CAPTURE;

    // Enumerate modules and devices
    for (int m = 0; ; m++)
    {
        PXCSession::ImplDesc desc2;
        if (pSession->QueryImpl(pDesc, m, &desc2) < pxcStatus::PXC_STATUS_NO_ERROR)
        {
            break;
        }
        // Convert the wide-character friendly name for console output
        wstring ws(desc2.friendlyName);
        string str(ws.begin(), ws.end());
        std::cout << "Module[" << m << "]:  " << str << std::endl;

        PXCCapture *pCap;
        if (pSession->CreateImpl<PXCCapture>(&desc2, &pCap) < pxcStatus::PXC_STATUS_NO_ERROR)
        {
            continue;
        }

        // Print out the information for each device exposed by this module
        for (int d = 0; ; d++)
        {
            PXCCapture::DeviceInfo dinfo;
            if (pCap->QueryDeviceInfo(d, &dinfo) < pxcStatus::PXC_STATUS_NO_ERROR)
            {
                break;
            }
            wstring wsName(dinfo.name);
            string strName(wsName.begin(), wsName.end());
            std::cout << "Device[" << d << "]:  " << strName << std::endl;
        }
        pCap->Release();
    }


    // Clean up SDK resources before exit
    delete pDesc;
    pSession->Release();

    cin.clear();
    cout << endl << "Press any key to continue...";
    cin.ignore();

    return 0;
}
catch (const char *c)
{
    std::cerr << "Program aborted: " << c << "\n";
    // Use the ANSI MessageBoxA variant; casting a narrow string to LPCWSTR is invalid
    MessageBoxA(GetActiveWindow(), c, "FAIL", 0);
}
catch (const std::exception &e)
{
    std::cerr << "Program aborted: " << e.what() << "\n";
    MessageBoxA(GetActiveWindow(), e.what(), "FAIL", 0);
}

Telco-Grade Service Chaining


Introduction

Service chaining is an emerging set of technologies and processes that enable telecom service providers to configure network services dynamically in software without having to make changes to the network at the hardware level. Network Function Virtualization (NFV) is an initiative to virtualize and cloudify telecom services that are currently being carried out by proprietary software and hardware.

In order for service chaining to be fit for purpose for NFV deployments, better methods are required to improve resilience and availability of service chains.

To highlight these gaps, a team from the NPG Communications Infrastructure Division in Intel Shannon (Ireland) developed two proof-of-concept (PoC) demos. These demos were presented in the Intel booth at NFV World Congress 2016.

  • Deterministic Service Chaining Using a Virtualized Service Assurance Manager (vSAM)
  • Service Chain Performance Monitoring Using Network Service Header (NSH)

While the demonstration implementations were developed as prototype-level code, the team hopes to advance these concepts further with open-source implementations. Integration of vSAM into the OpenStack Tacker project is under consideration with the PoC partners, as is contribution of the deterministic service chaining logic to the OpenDaylight project. For the Service Chain Performance Monitoring PoC, the plan is to contribute the key components for NSH time-stamping as sample applications to the Data Plane Development Kit (DPDK) open-source project, pending approval of the IETF draft.

Concepts were demonstrated for Virtual Gi-LAN (vGi-LAN) but may also be applied to other use cases such as Virtual Customer Premises Equipment (vCPE) and Mobile Edge Cloud (MEC).

Deterministic Service Chaining Using vSAM

A Gi-LAN is the part of the network that connects the mobile network to data networks, such as the Internet and operator cloud services (see Figure 1). The Gi-LAN contains assorted appliances running applications such as Deep Packet Inspection (DPI), Firewall, URL Filtering, Network Address Translation (NAT), Video and Web optimizers, Session Border Controllers, and so on. In an NFV deployment these service functions run as Virtual Network Functions (VNFs), and it is advantageous for service providers to chain these VNFs as service chains for flexibility and maintainability.

Gi-LAN overview

Figure 1:Gi-LAN overview

In order for service chaining in NFV use cases such as vGi-LAN to meet the 5 x 9s reliability requirement, better methods are required to monitor the performance of service chain entities. Furthermore, it is proposed that an open API is required to inform a controller of significant events that impact service chain performance. Finally, intelligent controllers with deterministic logic are needed to perform remedial actions based on such an API.

Figure 2 shows an overview of the vSAM concept, which may address these requirements. vSAM performs real-time monitoring of key performance indicators (KPIs) such as CPU usage, memory usage, network I/O, and disk I/O for VNFs, and bandwidth usage, packet loss, delay, and delay variation for WAN links between NFV sites. It also provides an API that enables an intelligent service chaining controller to use this KPI information to ensure that service chains take the optimum path across multiple sites.

The vSAM PoC was implemented as an ETSI NFV PoC in collaboration with Telenor, Brocade, and Creanord. See reference [2] in the Links section for the ETSI wiki page for this PoC, which includes the PoC proposal and final report.

vSAM overview

Figure 2:vSAM overview

Figure 3 shows a more detailed view of how vSAM would operate in a network, taking the example of three Gi-LAN sites.

For the purpose of the PoC, three Gi-LAN sites were simulated with traffic initiated through Gi-LAN A. The VNFs on the Gi-LAN B and C sites effectively act as hot backups for Gi-LAN A, and they are being continuously monitored for their suitability as alternate paths for Gi-LAN A service chains for voice and video.

Figure 3: vSAM proof-of-concept architecture overview

The key points of the architecture are as follows:

  • An implementation of vSAM was co-developed by Brocade and Intel. vSAM uses southbound HTTP REST APIs for monitors, which provide KPI data related to the health of WAN links between sites and VNF resource usage. It also has a northbound API to a service chaining controller, on which it can send notifications in case of KPI violations.
  • A virtual network probe product from Creanord was integrated with vSAM through an HTTP REST API. This uses OAM protocols such as TWAMP, Y.1731, and UDP Echo to measure latency, jitter, and packet loss between Gi-LAN sites.
  • A lightweight VNF resource-usage monitor based on the collectd libvirt plug-in was implemented to track KPIs for CPU usage, memory usage, disk I/O, and network interface load for each VNF in a Gi-LAN site. This component was also integrated with vSAM through an HTTP REST API.
  • An intelligent SFC (Service Function Chaining) controller was developed by Brocade to support deterministic service chaining based on vSAM.
  • A MongoDB* NoSQL database was used as the vSAM KPI data repository. VNF and site interconnect WAN link KPI data for all three sites is written to this repository by the vSAM instance on each site.

Figures 4 and 5 show the infrastructure view for the vSAM demo system simulation of three Gi-LANs as deployed in the Intel lab. The system is deployed on three Intel® Wildcat Pass servers with dual Intel® Xeon® E5-2699 v3 18-core (Haswell) CPUs and Intel® 82599 10GbE (Niantic) NICs. The servers are interconnected by a Brocade TurboIron* 24x10GbE switch.

Figure 4: Overview of the vSAM proof-of-concept demo infrastructure

Figure 5: Overview of the vSAM proof-of-concept demo infrastructure – Gi-LAN A

The key points of the infrastructure are as follows:

  • Brocade Vyatta* virtual routers were used to simulate data center routers such as the Data Center Interconnect (DCI), Data Center Edge (DCE), and Internet Peering Router (IPR); the Gi-LAN vRouter simulates the routing of traffic coming from the mobile network via the Packet Gateway.
  • The Creanord EchoVault* vProbe is attached to the DCI vRouter for sending test traffic between Gi-LAN sites to monitor the health of the site interconnect links.
  • The host shown in the diagram is a virtual machine (VM) that simulates a physical server. VNFs run as Linux* containers inside this VM.
  • OSS components (on the right side of the host VM) run as a mix of Linux containers and VMs and communicate over the OSS control plane subnet.

Data-plane traffic is routed as follows (uplink direction):

  • The Gi-LAN vRouter performs the first level of classification to determine whether traffic should be routed through service chains, that is, if traffic is detected as video or voice.
  • The DCE vRouter determines whether traffic should be routed on the video or voice service chains and forwards traffic to the first VNF node in the service chain.
  • If the service chain can be fulfilled by local VNFs, the service-chain path will be a number of hops along the local VNFs.
  • If one of the local VNFs is unavailable or overloaded, the service chain may be re-directed by the DCE vRouter to another site’s VNF and the service-chain path is completed there.

    NOTE: The next hop for a VNF’s uplink traffic is determined by its IP routing table, which is set by the service chaining controller via an HTTP REST API.
  • The last node in the service chain routes traffic out to the IPR and thus out to the Internet.

For the downlink direction, traffic passes through vRouters and service chain VNFs in the reverse direction. The service chaining controller also sets the next hop in VNFs for downlink traffic.
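The redirection decision described above can be sketched as a simple site-selection function. The data model below is illustrative only, not the PoC controller's actual implementation; the site names follow the three-site example used in this article:

```python
# Illustrative next-hop site selection for one VNF in a service chain:
# prefer the local instance, fall back to a healthy hot-backup site.
# The health map is a hypothetical simplification of vSAM KPI data.
def select_vnf_site(vnf_health):
    """vnf_health maps site name -> True if that site's VNF is healthy."""
    if vnf_health.get("Gi-LAN A"):
        return "Gi-LAN A"                  # local VNF is fine, stay local
    for site in ("Gi-LAN B", "Gi-LAN C"):  # hot backups, in preference order
        if vnf_health.get(site):
            return site
    return None                            # no healthy instance anywhere

print(select_vnf_site({"Gi-LAN A": False, "Gi-LAN B": True, "Gi-LAN C": True}))
```

In the PoC, the controller's chosen site would then be programmed into the relevant routing tables over the REST API, for both uplink and downlink directions.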

A graphical user interface (GUI) was developed in collaboration with Armour Interactive for the purpose of demonstrating PoC use-cases.

The GUI screen represents three Gi-LAN sites and displays the status of the VNFs on all sites, the status of the WAN links between sites, and the status of the Gi-LAN A service chains for voice and video. The information that renders the status of these entities to the screen is read from the vSAM repository, which holds KPI information for the three sites. An HTTP REST API was also developed to allow VNFs and inter-site links to be impaired.

Figures 6–8 show screenshots for the key use cases.

The first screenshot (see Figure 6) demonstrates normal conditions, when all local VNFs are healthy. In this case the video (blue line) and voice (pink line) service chains are fulfilled within the local Gi-LAN A site.

As Figure 6 shows, video traffic is flowing through the Content Delivery Network (CDN) in Gi-LAN A.

Figure 6: vSAM GUI screenshot – normal conditions

The screenshot in Figure 7 shows how the CDN VNF on Gi-LAN A can be impaired using the impairment API.

The impairment API call invokes Linux stress commands on the CDN VNF on Gi-LAN A to artificially increase CPU usage. The CPU usage KPI now exceeds the preconfigured threshold value in vSAM, so vSAM sends a notification of a KPI violation event to the service chaining controller.

The service chaining controller analyzes KPI data in the vSAM repository and determines that the video service chain path should now be routed through the CDN on Gi-LAN B.

As Figure 7 shows, video traffic is flowing through the CDN in Gi-LAN B.

Figure 7: vSAM GUI screenshot – Gi-LAN A CDN impaired

In the Figure 8 GUI screenshot, again using the impairment API, the WAN link between Gi-LAN A and Gi-LAN B is impaired. The API call invokes Linux traffic control commands on Gi-LAN A to impair the latency on the vProbe link to Gi-LAN B.
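The article does not show the PoC's exact commands, but the standard Linux tool for injecting artificial latency is tc with the netem qdisc. A minimal sketch of building such a command (the interface name and delay value are placeholders):

```python
# Sketch of building a Linux traffic-control command that adds artificial
# latency, as an impairment API might do. The PoC's actual commands are not
# shown in the article; "eth0" and 50 ms are placeholder values.
def netem_delay_cmd(iface, delay_ms):
    """Build an argv list for: tc qdisc add dev <iface> root netem delay <N>ms"""
    return ["tc", "qdisc", "add", "dev", iface,
            "root", "netem", "delay", f"{delay_ms}ms"]

# An impairment service could hand this to subprocess.run() with root privileges.
print(" ".join(netem_delay_cmd("eth0", 50)))
```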

The latency KPI on the link between Gi-LAN A and Gi-LAN B now exceeds the preconfigured threshold value in vSAM, which triggers a notification by vSAM to the service-chaining controller of a KPI violation event.

The service-chaining controller analyzes KPI data in the vSAM repository and determines that the video service chain path should now be routed through the CDN on Gi-LAN C.

As Figure 8 shows, video traffic is flowing through the CDN in Gi-LAN C.

Figure 8: vSAM GUI screenshot – Gi-LAN A CDN and Gi-LAN A-B link impaired

Service Chain Performance Monitoring by NSH

A second PoC was developed and demonstrated at NFV World Congress 2016 to illustrate how Network Service Header (NSH) can be used for real-time inline performance monitoring of service chains.

NSH is a new protocol for service chaining that enables information about a service chain to be carried as headers in the actual data packet. NSH supports user-defined metadata, which the PoC team used to define a header structure for packet time-stamping. This enables packets to be time-stamped at significant points, such as VNF ingress and egress points, as they traverse a service chain.
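As a rough illustration of carrying time-stamps as metadata, the sketch below packs and unpacks a (VNF ID, ingress/egress flag, timestamp) record. The field layout here is invented for illustration; the real header structure is specified in the NSH time-stamping IETF draft, not here:

```python
import struct

# Rough illustration of a time-stamp record riding as NSH metadata:
# (VNF id, ingress/egress flag, nanosecond timestamp). The actual header
# layout is defined in the NSH time-stamping IETF draft, not by this sketch.
TS_FMT = "!HBQ"  # network byte order: 2-byte VNF id, 1-byte flag, 8-byte timestamp

def pack_timestamp(vnf_id, egress, ts_ns):
    """Serialize one time-stamp record."""
    return struct.pack(TS_FMT, vnf_id, 1 if egress else 0, ts_ns)

def unpack_timestamp(blob):
    """Parse a record produced by pack_timestamp()."""
    vnf_id, flag, ts_ns = struct.unpack(TS_FMT, blob)
    return vnf_id, bool(flag), ts_ns

blob = pack_timestamp(7, True, 1_234_567_890)
print(len(blob), unpack_timestamp(blob))  # 11 (7, True, 1234567890)
```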

An IETF draft has been submitted for this NSH time-stamping feature. See reference [4] in the Links section for this IETF draft.

Figure 9 shows an overview of the Service Chaining Performance Monitoring demo system. The use case of Gi-LAN service chaining is again used to demonstrate this concept, with three Gi-LAN sites simulated on three Intel® Wildcat Pass servers with dual Intel® Xeon® E5-2699 v3 18-core (Haswell) CPUs. A fourth Haswell server is used to run NSH time-stamping applications such as the controller, database, REST API, and GUI.

Figure 9: Overview of service chain performance monitoring demo

A traffic generator is continuously generating a preconfigured set of flows through service chains that traverse VNFs on the three Haswell servers shown in the figure. Each server uses two Intel® X710 4x10GbE (Fortville) NICs with modified firmware supporting NSH filtering for Service Function platforms.

VNFs are simulated by Data Plane Development Kit (DPDK) sample applications with Fortville SR-IOV network interfaces. Service chains are set up by statically configuring the next hop for a chain of VNFs.

DPDK-based IEEE 1588 Precision Time Protocol (PTP) is used to synchronize time across all VNFs with microsecond precision. The master PTP thread runs in the NSH Time-stamping Gateway node shown in Figure 9, and PTP client threads running in each VNF synchronize their time from this. See reference [6] in the Links section for further detail on the DPDK PTP feature.
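For reference, the textbook IEEE 1588 offset and path-delay calculation from one Sync/Delay_Req exchange looks like this; the DPDK sample's internals differ, and this is just the underlying math under the usual symmetric-path assumption:

```python
# Textbook IEEE 1588 (PTP) math from one Sync/Delay_Req exchange.
# t1: master sends Sync, t2: slave receives it,
# t3: slave sends Delay_Req, t4: master receives it.
def ptp_offset(t1, t2, t3, t4):
    """Slave clock offset from master, assuming a symmetric path delay."""
    return ((t2 - t1) - (t4 - t3)) / 2

def ptp_path_delay(t1, t2, t3, t4):
    """One-way mean path delay under the same symmetry assumption."""
    return ((t2 - t1) + (t4 - t3)) / 2

# Example: the slave clock runs 5 units ahead; the one-way delay is 10 units.
print(ptp_offset(0, 15, 20, 25), ptp_path_delay(0, 15, 20, 25))  # 5.0 10.0
```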

The NSH Time-stamping Gateway node as shown in Figure 9 provides an API that can be used to start and stop insertion of time-stamping control headers into packets. This informs VNFs to time-stamp packets at their ingress and egress points as packets traverse a chain.

The last VNF node in a service chain forwards the NSH time-stamped packet to the Time-stamp Database (TS-DB) node as shown in Figure 9. The TS-DB node strips the time-stamp information from the packet and writes it to a MongoDB* database. Database queries based on service chain ID or VNF ID can thus be run on this database to determine delay along a service chain or within a VNF.
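A query over such time-stamp records can separate VNF-internal delay from vLink delay. The sketch below assumes a hypothetical record layout of (VNF ID, ingress/egress point, timestamp in microseconds), in chain order; the TS-DB's actual schema is not described in this article:

```python
# Hypothetical per-hop delay computation from chain-ordered time-stamp
# records of the kind stored in the TS-DB. Record layout is illustrative.
def hop_delays(records):
    """records: [(vnf_id, point, ts_us)] in chain order, point is 'in'/'out'.
    Returns (VNF-internal delays, inter-VNF link delays) in microseconds."""
    vnf_delay, link_delay = {}, {}
    for i in range(len(records) - 1):
        vnf_a, _, t_a = records[i]
        vnf_b, _, t_b = records[i + 1]
        if vnf_a == vnf_b:                      # ingress -> egress of one VNF
            vnf_delay[vnf_a] = t_b - t_a
        else:                                   # egress -> next VNF's ingress
            link_delay[(vnf_a, vnf_b)] = t_b - t_a
    return vnf_delay, link_delay

recs = [("DPI", "in", 0), ("DPI", "out", 40),
        ("FW", "in", 55), ("FW", "out", 90)]
print(hop_delays(recs))  # ({'DPI': 40, 'FW': 35}, {('DPI', 'FW'): 15})
```

This is essentially the computation behind the hop-by-hop graphs shown later in Figures 10 and 11.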

For the PoC demo, an impairment API was implemented to simulate performance degradation of a VNF or a vLink in order to show how this is detected by Service Chain Performance Monitoring.

Again a GUI was developed in collaboration with Armour Interactive to demonstrate PoC use-cases.

Figure 10: Service chaining performance monitoring GUI – Hop-by-hop graph for normal conditions

Similarly to the vSAM GUI, a view of three Gi-LAN sites is presented. For this PoC, service chains are static, as the objective is to demonstrate the concept of service chain performance monitoring using NSH. Both the NSH-TS Gateway API for NSH time-stamping of flows and the impairment API can be called from the GUI, so that specific NSH time-stamping tests can be managed. An HTTP REST API was also implemented to allow the GUI to gather service chain timestamp data from the TS-DB in order to render the service chain performance graph seen in Figure 10.

The GUI screenshot in Figure 10 shows the hop-by-hop view for all three service chains (Chain 1, Chain 2, and Chain 3). The graph has been rendered from NSH time-stamping data in the TS-DB database. The Y-axis of the graph shows the delay as packets traverse the service chain, and the X-axis shows the VNFs that packets traverse along the chain. The dots show the delay between the ingress and egress point in a VNF, while the dashed lines indicate the delay in the vLinks between VNFs.

Thus, a delay in VNF processing or in the vLinks between VNFs can be diagnosed in real time. This is illustrated in the GUI screenshot in Figure 11 by using the VNF impairment API to artificially insert a delay into VNF processing in the SBC VNF on Gi-LAN B (on Chain 1) and the FW VNF on Gi-LAN C (on Chain 3).

The effect on the service chain performance graph of the increased delay in VNF processing for Gi-LAN B SBC and Gi-LAN C FW is immediately visible in the longer line between VNF ingress and egress points on the Y-axis.

Figure 11: Service chaining performance monitoring GUI – Hop-by-hop graph for impaired VNFs

In the initial phase of development, the NSH time-stamping data is just used to render service chain performance graphs on a GUI. However, an API could be provided to vSAM to monitor service chain KPI thresholds and to inform a controller to adapt service chains to address performance hotspots.

Conclusion

The Telco-grade Service Chaining PoC team received a lot of positive feedback on the demos presented at NFV World Congress 2016 from several service providers as well as software and equipment vendors. The general feedback was that the concepts are innovative and will enhance service assurance for NFV deployments.

In particular, customers agreed that running service chains in a production environment with 5 x 9s reliability requires at least this level of monitoring of service chain entities, and possibly more.

The concepts presented in the demos described here enable new ways to address the stringent requirements for service assurance in NFV deployments.

Links

  1. NFV World Congress 2016 keynote on Telco-grade Service Chaining (Rory Browne)
    http://www.layer123.com/download&doc=Intel-0416-Browne-Telco_Grade_Service_Chaining
  2. ETSI NFV PoC wiki page - Virtualized Service Assurance Management (vSAM) in vGi-LAN:
    http://nfvwiki.etsi.org/index.php?title=Virtualised_service_assurance_management_in_vGi-LAN
  3. Brocade blog on vSAM:
    http://community.brocade.com/t5/SDN-NFV/Service-Aware-Transport-for-Multi-site-NFV-Resiliency/ba-p/84943
  4. NSH Time-stamping IETF Draft:
    https://tools.ietf.org/html/draft-browne-sfc-nsh-timestamp-00
  5. NSH IETF Draft:
    https://tools.ietf.org/pdf/draft-ietf-sfc-nsh-04.pdf
  6. Data Plane Development Kit (DPDK)
    http://dpdk.org

About the Author

Brendan Ryan is a senior software engineer in the Communication Infrastructure Division of Intel’s Network Platform Group (NPG), based in Intel Shannon (Ireland). Brendan has over 20 years’ experience in telecoms software development and has recently been working on PoC development and customer enablement towards the adoption of SDN and NFV technologies in telecoms networks.

Tutorial: Camera Device Connection State



Determining the connection state of a camera device is readily available to any Intel® RealSense™ application by directly querying the SenseManager. Once established, the connection state can further be monitored by subscribing to an Intel® RealSense™ SDK callback.

Determine Connection State Directly

#include <iostream>
#include "pxcsensemanager.h"  // Intel RealSense SDK

int main()
try
{
    auto pSession = PXCSession::CreateInstance();
    auto pSenseManager = pSession->CreateSenseManager();

    if (pSenseManager->Init() == PXC_STATUS_NO_ERROR)
    {
        if (pSenseManager->IsConnected()) {
            std::cout << "Camera is connected" << std::endl;
        }
        else {
            std::cerr << "Please connect your camera" << std::endl;
            throw("Camera not connected");
        }

Please be advised that the IsConnected method is only valid between the Init() and Close() calls.

Determine Connection State via Callbacks

Defining the Handler

The handler for the event is derived from PXCCapture::Handler. It must implement the OnDeviceListChanged() method.

class MyHandler : public PXCCapture::Handler {
public:
    virtual void PXCAPI OnDeviceListChanged(void) {
        std::cerr << "Camera has been unplugged" << std::endl;
    }
};

MyHandler gHandler;
Per the Intel RealSense SDK documentation, please ensure that the application does not perform any lengthy operations in the callback function.

Registering the Handler

The next step involves registering the handler via the SubscribeCaptureCallbacks function. This registers the set of callback functions for any camera device events. In the above case, this maps to the OnDeviceListChanged method, which is called when there is a change in the device list. Here it is possible to enumerate the device list via QueryDeviceInfo as per the Enumerating Modules and Camera Devices tutorial.

std::cout << "\nPXCSenseManager Initializing OK\n========================\n";
auto pCaptureMgr = pSenseManager->QueryCaptureManager();
auto pDevice = pCaptureMgr->QueryDevice();
PXCCapture::DeviceInfo deviceInfo;
pDevice->QueryDeviceInfo(&deviceInfo);
auto pCapture = pCaptureMgr->QueryCapture();
pCapture->SubscribeCaptureCallbacks(&gHandler);
// . . . if all fails, release resources . . .

Conclusion

It is possible to determine the camera device connection state statically by directly querying the SenseManager. You can also dynamically get the connection state by using the PXCCapture::SubscribeCaptureCallbacks() method together with the OnDeviceListChanged() method of the derived handler class, as shown above.

Used in combination, these give you the static and dynamic camera device connection state at any time during the life cycle of the application, and logic can be developed to react to changes in the connection state.

About the Author

Rudy Cazabon is a member of the Intel Software Innovator program and is an avid technologist in the area of graphics for games and computer vision.

Resources

Intel® RealSense™ SDK Documentation

A Developer’s Guide To Intel® RealSense™ Camera Detection Methods

Coding for 2 at Once - Intel RealSense User Facing Cameras

(Pending completion) Enumerating Modules and Camera Devices

The Customer Journey Funnel: Think 30–3–3–30


Finding and winning over customers is fundamental to the success of any product. Often, new businesses get caught up on the customer acquisition part, trying to find as many people as possible and bring them in the door, but they forget to focus on winning them over. When it comes to apps, that means making sure users’ initial interactions with your product —after 30 seconds and after 3 minutes—are as positive as possible, and also considering ways to keep them engaged 3 days and 30 days after download. It’s not enough for users to download your app—you want users who stay and come back for more.

Read on to learn more about the customer's journey, and how you can make sure you're engaging and enticing them every step of the way.
 

Customer Journey Funnel

App revenue is not as simple as the number of purchases multiplied by the cost of the app—there's a journey involved for a user, and the way you interact with them along the way will make a difference to your bottom line. A better way to think about your user engagement is through an equation like this: Revenue = Customers × Engagement × Monetization opportunities.

In other words, the more customers you attract, the longer they stay engaged, and the more opportunities for monetization you offer, the more revenue you’ll generate. This is the real key to successfully monetizing your app—it’s not JUST about the dollar amount.

When you think about revenue in these terms, it becomes clear that you need to focus on more than just the acquisition stage of the customer journey. In fact, it can be helpful to think of this customer journey as a funnel:

At each step there's an opportunity for customers to make a choice—will they engage further? Or will they go somewhere else? It's natural that you'll lose some people along the way—not everyone is your ideal target audience, after all—but watching where they drop off is an important way to understand what's working with your app, and what isn't.
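The compounding effect of those stage-by-stage choices can be sketched with purely illustrative conversion rates (the numbers below are made up, not benchmarks):

```python
# Purely illustrative funnel arithmetic: each stage keeps only a fraction
# of the users from the stage before it. All rates here are made up.
funnel = [
    ("Awareness",     1.00),
    ("Discovery",     0.40),   # 40% of aware users take a closer look
    ("Consideration", 0.50),   # half of those get past the first 30 seconds
    ("Conversion",    0.30),   # 30% are still engaged after 3 minutes
    ("Retention",     0.20),   # 1 in 5 is still around at 30 days
]

users = 100_000
for stage, rate in funnel:
    users = int(users * rate)
    print(f"{stage:14s}{users:>8,d}")
# Of 100,000 people reached, about 1,200 become long-term users.
```

Even small improvements at an early stage multiply through every stage below it, which is why it pays to know exactly where people drop off.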
 

30–3–3–30

Another way to think about this user journey is to focus on four key touch points. 

30: The first 30 seconds

You need to hook them right away, so make sure your app icon and first screen are working for you. How easy is it to proceed to the next step? Be strategic about any barriers you place at this point.

3: The first 3 minutes

If you win in the first 30 seconds, you have about 3 minutes to get them more engaged. This will include their first real interaction with the experience, beyond downloading and logging in. Will they get a satisfying experience in that time frame, and be left wanting to come back?

3: Will they come back in 3 days?

Most apps see a precipitous decline in usage after 3 days. The users who come back within three days are the ones most likely to keep using your app.

30: Will they come back in 30 days?

These are your hardcore, fully dedicated users. They're totally hooked and are willing to commit time and potentially funds.
 

Which conversion points aren't converting?

Each time users make a decision to stay or go—that's a conversion point. Will they be converted into a paying customer? No app can claim 100% conversion; that's not the way it works, so don't beat yourself up over every person who doesn't come back. However, there's often room for improvement: by taking a closer look at what's happening at each point, you can reduce friction and encourage more people to go through the entire journey. Once you know where people are disengaging, you can turn your attention to potential growth drivers.

Here are some things to consider at each stage of the customer journey:

Awareness. The first step is to cast a wide net, and you may find that you just aren't starting with enough eyeballs. There are a lot of options for increasing awareness, depending on your particular app; consider paid advertising, viral campaigns, and one-click sharing. Do everything you can to increase awareness so that the top of your funnel is as wide as possible, allowing for the greatest possible return. 

Discovery. The second stage focuses on discovery of your app or product. Once people take a closer look, are they bailing? If so, revisit your app store description—is it accurate and compelling? Can you make it more exciting for users to read, based on keywords or listing strategy? Is the app icon professionally designed and intriguing? Ratings and reviews are also a critical factor at this point, so consider encouraging your active users to rate the app and write a review. You can set up a trigger to prompt ratings after a user has logged in a certain number of times. Check out this article on making the most of your Google Play store to see other ways to maximize discovery.

Consideration. This is where things move fast—the first 30 seconds a user engages with your app or service go by quickly, so it's good to think visually and deliver a strong first impression. What does the intro screen look like? What kind of impression does it make? Can a user understand everything they need to quickly? Use the first 30 seconds to spark their interest. Anything you can do to get them past the first few screens will help.

Conversion. This first-time user experience is the real hook—when they'll decide if your app is worth keeping and using regularly. In the first three minutes, make sure you provide a compelling experience that leaves them wanting to play more. If it's too easy, they'll be done and have no reason to come back. If it's too complicated, they won't bother. If there will be a login or registration process, think of ways to make it as easy as possible, such as one-step registration or a free trial. This is where you want to focus on how quickly you can demonstrate value and delight your end user. How deep into the gameplay can they get?

Retention. To encourage long-term retention, 30 days and beyond, think about how you can reward their loyalty and entice them further into the gameplay or engagement. Send out push notifications, implement a loyalty campaign, or even consider doing re-engagement advertising. You might be able to offer bonus content after they've played a certain number of games, or when they've reached a certain score. Consider marketing to them via a different channel so you can engage with the user over multiple streams (think email or Facebook). Make sure they know you value them.
 

Don't forget to look at the numbers!

Ultimately, you want to be open-minded about the results. Your intuition is an important guide, but it's also important to make sure that user behavior matches your assumptions, and adjust as necessary. The numbers can give you a great idea of how users are traveling through the funnel, and what effect changes have on their journey.

Hopefully this gives you a better idea of what people mean when they talk about the funnel, and how you might be able to use it to your advantage. Stay tuned for our upcoming article on Google Play Store optimization for a more in-depth look at how to get noticed in the crowded marketplace, and even how to handle bad reviews.

What's a good example of an app that really kills it in the first 30 seconds? Tell us in the comments!

Intel Software License Manager Installer Beta


The new Installer (beta version) will assist you in the installation of the Intel® Software License Manager.

Benefits

Currently you need to manually supply the Host ID (MAC address) and Host Name of your server during the registration of your floating license serial number (SN). When the registration is complete, you need to download the license file and place it on your system manually.

This will no longer be required*. Simply register your SN in the Intel® Registration Center (IRC) and proceed to the installation. During the installation, the Installer will automatically get the required information from your system, update the information in IRC, generate the license file, and place it on the system.

*Note: The new Installer requires an Internet connection. If you intend to install the Intel® Software License Manager with no Internet connection, you will need to generate the license file manually during registration and use it during the installation.

Coming soon

Two other aspects of license manager setup are not yet implemented:

  • Three-server redundancy
  • Merging of multiple SNs into a single license file

You will still need to do the above steps manually at this time. But don't worry: we are currently working on even more exciting improvements, and those features will be integrated into a future release of the Installer.

Have questions?

Check out Intel® FLEXlm* License Manager FAQ
For other licensing questions see Licensing FAQ
Or ask in our Intel® Software Development Products Download, Registration & Licensing forum

Intel® XDK FAQs - Crosswalk


How do I play audio with different playback rates?

Here is a code snippet that allows you to specify playback rate:

var myAudio = new Audio('/path/to/audio.mp3');
myAudio.play();
myAudio.playbackRate = 1.5;

Why are Intel XDK Android Crosswalk build files so large?

When your app is built with Crosswalk it will be a minimum of 15-18MB in size because it includes a complete web browser (the Crosswalk runtime or webview) for rendering your app instead of the built-in webview on the device. Despite the additional size, this is the preferred solution for Android, because the built-in webviews on the majority of Android devices are inconsistent and poorly performing.

See these articles for more information:

Why is the size of my installed app much larger than the apk for a Crosswalk application?

This is because the apk is a compressed image, so when installed it occupies more space due to being decompressed. Also, when your Crosswalk app starts running on your device it will create some data files for caching purposes which will increase the installed size of the application.

Why does my Android Crosswalk build fail with the com.google.playservices plugin?

The Intel XDK Crosswalk build system used with CLI 4.1.2 Crosswalk builds does not support the library project format that was introduced in the "com.google.playservices@21.0.0" plugin. Use "com.google.playservices@19.0.0" instead.

Why does my app fail to run on some devices?

There are some Android devices in which the GPU hardware/software subsystem does not work properly. This is typically due to poor design or improper validation by the manufacturer of that Android device. Your problem Android device probably falls under this category.

How do I stop "pull to refresh" from resetting and restarting my Crosswalk app?

See the code posted in this forum thread for a solution: /en-us/forums/topic/557191#comment-1827376.

An alternate solution is to add the following lines to your intelxdk.config.additions.xml file:

<!-- disable reset on vertical swipe down -->
<intelxdk:crosswalk xwalk-command-line="--disable-pull-to-refresh-effect" />

Which versions of Crosswalk are supported and why do you not support version X, Y or Z?

The specific versions of Crosswalk that are offered via the Intel XDK are based on what the Crosswalk project releases and the timing of those releases relative to Intel XDK build system updates. This is one of the reasons you do not see every version of Crosswalk supported by our Android-Crosswalk build system.

With the September, 2015 release of the Intel XDK, the method used to build embedded Android-Crosswalk versions changed to the "pluggable" webview Cordova build system. This new build system was implemented with the help of the Cordova project and became available with their release of the Android Cordova 4.0 framework (coincident with their Cordova CLI 5 release). With this change to the Android Cordova framework and the Cordova CLI build system, we can now more quickly adapt to new version releases of the Crosswalk project. Support for previous Crosswalk releases required updating a special build system that was forked from the Cordova Android project. This new "pluggable" webview build system means that the build system can now use the standard Cordova build system, because it now includes the Crosswalk library as a "pluggable" component.

The "old" method of building Android-Crosswalk APKs relied on a "forked" version of the Cordova Android framework, and is based on the Cordova Android 3.6.3 framework and is used when you select CLI 4.1.2 in the Project tab's build settings page. Only Crosswalk versions 7, 10, 11, 12 and 14 are supported by the Intel XDK when using this build setting.

Selecting CLI 5.1.1 in the build settings will generate a "pluggable" webview built app. A "pluggable" webview app (built with CLI 5.1.1) results in an app built with the Cordova Android 4.1.0 framework. As of the latest update to this FAQ, the CLI 5.1.1 build system supported Crosswalk 15. Future releases of the Intel XDK and the build system will support higher versions of Crosswalk and the Cordova Android framework.

In both cases, above, the net result (when performing an "embedded" build) will be two processor architecture-specific APKs: one for use on an x86 device and one for use on an ARM device. The version codes of those APKs are modified to ensure that both can be uploaded to the Android store under the same app name, ensuring that the appropriate APK is automatically delivered to the matching device (i.e., the x86 APK is delivered to Intel-based Android devices and the ARM APK is delivered to ARM-based Android devices).

For more information regarding Crosswalk and the Intel XDK, please review these documents:

How do I prevent my Crosswalk app from auto-completing passwords?

Use the Ionic Keyboard plugin and set the spellcheck attribute to false.

How can I improve the performance of my Construct2 game build with Crosswalk?

Beginning with the Intel XDK CLI 5.1.1 build system you must add the --ignore-gpu-blacklist option to your intelxdk.config.additions.xml file if you want the additional performance this option provides to blacklisted devices. See this forum post for additional details.

If you are a Construct2 game developer, please read this blog by another Construct2 game developer regarding how to properly configure your game for Crosswalk performance: How to build optimized Intel XDK Crosswalk app properly?

Also, you can experiment with the CrosswalkAnimatable option in your intelxdk.config.additions.xml file (details regarding the CrosswalkAnimatable option are available in this Crosswalk Project wiki post: Android SurfaceView vs TextureView).

<!-- Controls configuration of Crosswalk-Android "SurfaceView" or "TextureView" -->
<!-- Default is SurfaceView if >= CW15 and TextureView if <= CW14 -->
<!-- Option can only be used with Intel XDK CLI5+ build systems -->
<!-- SurfaceView is preferred, TextureView should only be used in special cases -->
<!-- Enable Crosswalk-Android TextureView by setting this option to true -->
<preference name="CrosswalkAnimatable" value="false" />

See Chromium Command-Line Options for Crosswalk Builds with the Intel XDK for some additional tools that can be used to modify the Crosswalk's webview runtime parameters, especially the --ignore-gpu-blacklist option.

Why does the Google store refuse to publish my Crosswalk app?

For full details, please read Android and Crosswalk Cordova Version Code Issues. For a summary, read this FAQ.

There is a change to the version code handling by the Crosswalk and Android build systems based on Cordova CLI 5.0 and later. This change was implemented by the Apache Cordova project. This new version of Cordova CLI automatically modifies the android:versionCode when building for Crosswalk and Android. Because our CLI 5.1.1 build system is now more compatible with standard Cordova CLI, this change results in a discrepancy in the way your android:versionCode is handled when building for Crosswalk (15) or Android with CLI 5.1.1 when compared to building with CLI 4.1.2.

If you have never published an app to an Android store this change will have little or no impact on you. This change might affect attempts to side-load an app onto a device, in which case the simplest solution is to uninstall the previously side-loaded app before installing the new app.

Here's what Cordova CLI 5.1.1 (Cordova-Android 4.x) is doing with the android:versionCode number (which you specify in the App Version Code field within the Build Settings section of the Projects tab):

Cordova-Android 4.x (Intel XDK CLI 5.1.1 for Crosswalk or Android builds) does this:

  • multiplies your android:versionCode by 10

then, if you are doing a Crosswalk (15) build:

  • adds 2 to the android:versionCode for ARM builds
  • adds 4 to the android:versionCode for x86 builds

otherwise, if you are performing a standard Android build (non-Crosswalk):

  • adds 0 to the android:versionCode if the Minimum Android API is < 14
  • adds 8 to the android:versionCode if the Minimum Android API is 14-19
  • adds 9 to the android:versionCode if the Minimum Android API is > 19 (i.e., >= 20)

If you HAVE PUBLISHED a Crosswalk app to an Android store this change may impact your ability to publish a newer version of your app! In that case, if you are building for Crosswalk, add 6000 (six with three zeroes) to your existing App Version Code field in the Crosswalk Build Settings section of the Projects tab. If you have only published standard Android apps in the past and are still publishing only standard Android apps you should not have to make any changes to the App Version Code field in the Android Builds Settings section of the Projects tab.

The workaround described above only applies to Crosswalk CLI 5.1.1 and later builds!

When you build a Crosswalk app with CLI 4.1.2 (which uses Cordova-Android 3.6) you will get the old Intel XDK behavior, where 60000 and 20000 (six with four zeroes and two with four zeroes) are added to the android:versionCode for Crosswalk builds and no change is made to the android:versionCode for standard Android builds.
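To make the CLI 5.1.1 arithmetic concrete, here is a sketch of the version-code transformation described above (the function and parameter names are my own, not part of any Intel XDK API):

```python
def cli511_version_code(app_version_code, build="android", arch=None, min_api=None):
    """Sketch of how Cordova-Android 4.x (Intel XDK CLI 5.1.1) derives the
    final android:versionCode from the App Version Code field."""
    code = app_version_code * 10           # CLI 5.1.1 first multiplies by 10
    if build == "crosswalk":
        code += 2 if arch == "arm" else 4  # Crosswalk: +2 for ARM, +4 for x86
    elif min_api is not None:
        if min_api < 14:
            code += 0                      # Minimum Android API < 14
        elif min_api <= 19:
            code += 8                      # Minimum Android API 14-19
        else:
            code += 9                      # Minimum Android API >= 20
    return code
```

For example, an App Version Code of 123 becomes 1232 for a Crosswalk ARM build and 1234 for a Crosswalk x86 build, which is why a previously published app (built under the old +20000/+60000 scheme) may need its App Version Code raised to publish an update.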

NOTE:

  • Android API 14 corresponds to Android 4.0
  • Android API 19 corresponds to Android 4.4
  • Android API 21 corresponds to Android 5.0 (API 20 is Android 4.4W, for wearables)
  • CLI 5.1.1 (Cordova-Android 4.x) does not allow building for Android 2.x or Android 3.x

Why is my Crosswalk app generating errno 12 Out of memory errors on some devices?

If you are using the WebGL 2D canvas APIs and your app crashes on some devices because you added the --ignore-gpu-blacklist flag to your intelxdk.config.additions.xml file, you may need to also add the --disable-accelerated-2d-canvas flag. Using the --ignore-gpu-blacklist flag enables the use of the GPU in some problem devices, but can then result in problems with some GPUs that are not blacklisted. The --disable-accelerated-2d-canvas flag allows those non-blacklisted devices to operate properly in the presence of WebGL 2D canvas APIs and the --ignore-gpu-blacklist flag.

You likely have this problem if your app crashes after running a few seconds with an error like the following:

<gsl_ldd_control:364>: ioctl fd 46 code 0xc00c092f (IOCTL_KGSL_GPMEM_ALLOC) failed: errno 12 Out of memory <ioctl_kgsl_sharedmem_alloc:1176>: ioctl_kgsl_sharedmem_alloc: FATAL ERROR : (null).

See Chromium Command-Line Options for Crosswalk Builds with the Intel XDK for additional info regarding the --ignore-gpu-blacklist flag and other Chromium option flags.

Construct2 Tutorial: How to use AdMob and IAP plugins with Crosswalk and the Intel XDK.

See this tutorial on the Scirra tutorials site, How to use AdMob and IAP official plugins on Android-Crosswalk/XDK, written by Construct2 developer Kyatric.

Also, see this blog written by a Construct2 game developer regarding how to build a Construct2 app using the Appodeal ad plugin with the Intel XDK: How to fix the build error with Intel XDK and Appodeal?

What is the correct "Target Android API" value that I should use when building for Crosswalk on Android?

The "Target Android API" value (aka android-targetSdkVersion), found in the Build Settings section of the Projects tab, is the version of Android that your app and the libraries associated with your app are tested against; it DOES NOT represent the maximum level of Android onto which you can install and run your app. When building a Crosswalk app you should set this value to the value recommended by the Crosswalk project.

The recommended "Target Android API" levels for Crosswalk on Android apps are:

  • 18 for Crosswalk 1 thru Crosswalk 4
  • 19 for Crosswalk 5 thru Crosswalk 10
  • 21 for Crosswalk 11 thru Crosswalk 18

As of release 3088 of the Intel XDK, the recommended value for your android-targetSdkVersion is 21. In previous versions of the Intel XDK the recommended value was 19. If you have it set to a higher number (such as 23), we recommend that you change your setting to 21.
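The recommended mapping above can be sketched as a small helper (the function name is my own, for illustration only):

```python
def recommended_target_api(crosswalk_version):
    # Recommended android-targetSdkVersion for each Crosswalk release range
    # listed above; raises for versions outside those ranges.
    if 1 <= crosswalk_version <= 4:
        return 18
    if 5 <= crosswalk_version <= 10:
        return 19
    if 11 <= crosswalk_version <= 18:
        return 21
    raise ValueError("no recommendation listed for Crosswalk %s" % crosswalk_version)
```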

Can I build my app with a version of Crosswalk that is not listed in the Intel XDK Build Settings UI?

As of release 3088 of the Intel XDK, it is possible to build your Crosswalk for Android app using versions of the Crosswalk library that are not listed in the Project tab's Build Settings section. You can override the value that is selected in the Build Settings UI by adding a line to the intelxdk.config.additions.xml file.

NOTE: The process described below is for experts only! By using this process you are effectively disabling the Crosswalk version that is selected in the Build Settings UI and you are overriding the version of Crosswalk that will be used when you build a custom debug module with the Debug tab.

When building a Crosswalk for Android application with CLI 5.x and higher, the Cordova Crosswalk Webview Plugin is used to add the Crosswalk webview library to the build package (the APK). That plugin effectively "includes" the specified Crosswalk library when the app is built. The version of the Crosswalk library selected in the Build Settings UI is controlled by a line in the Android build config file, similar to the following:

<intelxdk:crosswalk version="16"/>

The line above is added automatically to the intelxdk.config.android.xml file by the Intel XDK. If you attempt to change lines in the Android build config file they will be overwritten by the Intel XDK each time you use the Build tab (perform a build) or the Test tab. In order to modify (or override) this line in the Android config file you need to add a line to the intelxdk.config.additions.xml file.

The precise line you include in the intelxdk.config.additions.xml file depends on the version of the Crosswalk library you want to include. For example:

<!-- Set the Crosswalk embedded library to something other than those listed in the UI. -->
<!-- In practice use only one; multiple examples are shown for illustration. -->
<preference name="xwalkVersion" value="17+"/>
<preference name="xwalkVersion" value="14.43.343.24" />
<preference name="xwalkVersion" value="org.xwalk:xwalk_core_library_beta:18+"/>

The first example line in the code snippet above asks the Intel XDK to build with the "last" or "latest" version of the Crosswalk 17 release library (the '+' character means "last available" for the specified version). The second example requests an explicit version of Crosswalk 14 when building the app (e.g., version 14.43.343.24). The third example shows how to request the "latest" version of Crosswalk 18 from the Crosswalk beta Maven repository.

NOTE: only one such "xwalkVersion" preference tag should be used. If you include more than one "xwalkVersion" only the last one specified in the intelxdk.config.additions.xml file will be used.

The specific versions of Crosswalk that you can use can be determined by reviewing the Crosswalk Maven repositories: one for released Crosswalk libraries and one for beta versions of the Crosswalk library.

Not all Crosswalk libraries are guaranteed to work with your built app, especially the beta versions of the Crosswalk library. There may be library dependencies on the specific version of the Cordova Crosswalk Webview Plugin or the Cordova-Android framework. If a library does not work, select a different version.

Detailed instructions on the preference tag being used here are available in the Crosswalk Webview Plugin README.md documentation.

If you are curious when a specific version of Chromium will be supported by Crosswalk, please see the Crosswalk Release Dates wiki published by the Crosswalk Project.

My Construct2 Crosswalk app flashes a white box or white band after the splash screen.

The white box or white bands you see between the end of the splash screen and the beginning of your app appear to be due to webview initialization. The effect also appears in non-Crosswalk apps on Android, but does not show up as white. The white band that does appear can cause an initial "100% image" to bounce up and down momentarily. This issue is not caused by the splash screen plugin or the Intel XDK; it appears to be interference caused by the Cordova webview initialization.

The following solution appears to work, although there may be some situations where it does not help. As this problem becomes better understood, more information will be provided in this FAQ.

Add the following lines to your intelxdk.config.additions.xml file:

<platform name="android">
    <!-- set Crosswalk default background color -->
    <!-- see http://developer.android.com/reference/android/graphics/Color.html -->
    <preference name="BackgroundColor" value="0x00000000" />
</platform>

The value 0x00000000 configures the webview background color to be "transparent black," according to the Cordova documentation and the Crosswalk webview plugin code. You should be able to set that color to anything you want. However, this color appears to work the best.
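As an aside, the BackgroundColor value is a 32-bit ARGB integer, which is why 0x00000000 reads as "transparent black." A quick sketch of how the channels unpack (the helper name is my own):

```python
def argb_channels(value):
    # Split a 0xAARRGGBB color value into (alpha, red, green, blue).
    return ((value >> 24) & 0xFF,  # alpha: 0x00 = fully transparent
            (value >> 16) & 0xFF,  # red
            (value >> 8) & 0xFF,   # green
            value & 0xFF)          # blue
```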

You may also want to add the following to your intelxdk.config.additions.xml file:

<platform name="android">
    <!-- following requires the splash screen plugin -->
    <!-- see https://github.com/apache/cordova-plugin-splashscreen for details -->
    <preference name="SplashScreen" value="screen" />
    <preference name="AutoHideSplashScreen" value="false" />
    <!-- <preference name="SplashScreenDelay" value="30000" /> -->
    <preference name="FadeSplashScreen" value="false"/>
    <!-- <preference name="FadeSplashScreenDuration" value="3000"/> -->
    <preference name="ShowSplashScreenSpinner" value="false"/>
    <preference name="SplashMaintainAspectRatio" value="false" />
    <preference name="SplashShowOnlyFirstTime" value="false" />
</platform>

Testing of this fix was done with Crosswalk 17 on Android 4.4, 5.0, and 6.0 devices.


Intel® Energy Profiler 2016 Update 3

Previous Intel® RealSense™ Install Software


Earlier versions of the Depth Camera Managers (DCM) and the Intel® RealSense™ Runtime Distributable.

DCM - F200

Note: Most versions include a camera firmware update.

Version   | Build         | SDK Compatibility       | Size   | Release Notes | F200 DCM
1.2       | 1.2.14.24922  | R2, R3, R4, R5, 2016 R1 | 51 MB  | Release Notes | Download Now
1.3       | 1.3.20.55679  | R2, R3, R4, R5, 2016 R1 | 56 MB  | Release Notes | Download Now
1.4 HF 2  | 1.4.27.32425  | R2, R3, R4, R5, 2016 R1 | 94 MB  | Release Notes | Download Now
1.4 HF3   | 1.4.27.41944  | R2, R3, R4, R5, 2016 R1 | 92 MB  | Release Notes | Download Now

 

DCM - SR300

Version | Build         | SDK Compatibility | Size   | Release Notes | SR300 DCM
3.0     | 3.0.24.59748  | R5, 2016 R1       | 103 MB | Release Notes | Download Now

 

DCM - R200

Version       | Build        | SDK Compatibility   | Size   | Release Notes | R200 DCM
Beta 2        | 2.0.3.2548   | R3, R4, R5, 2016 R1 | 70 MB  | Release Notes | Download Now
Gold          | 2.0.3.39488  | R3, R4, R5, 2016 R1 | 74 MB  | Release Notes | Download Now
Gold HF2      | 2.0.3.53109  | R3, R4, R5, 2016 R1 | 73 MB  | Release Notes | Download Now
Gold 2.1 HF1  | 2.1.24.6664  | R3, R4, R5, 2016 R1 | 110 MB | Release Notes | Download Now

 

Runtime Distributable

Note: All app installers must include the runtime that matches the version of the SDK build. 

Download sizes listed are before the voice component is added.

Version | Build         | SDK Compatibility | Size   | Release Notes | Runtime
R2      | 4.0.0.112526  | R2                | 410 MB | Release Notes | Download Now
R3      | 5.0.3.187777  | R3                | 455 MB | Release Notes | Download Now
R4      | 6.0.21.6598   | R4                | 558 MB | Release Notes | Download Now
R5      | 7.0.23.8048   | R5                | 516 MB | Release Notes | Download Now

Transferring a License to Another Person


You can transfer a named-user license to/from another person only if both of you share the same domain.

Transferring a license to another person [Push]

License transfer to another person can be done for both named-user and floating licenses.

  1. Log in to the Intel® Registration Center (IRC) by entering your login ID and password. You will see a list of all your products.
  2. Click the product you wish to transfer to go to the subscription history page. (Note: you may have more than one SN for that product, so make sure to choose the one you would like to transfer.) Click the Manage link.
  3. On the Manage Serial Number page, expand the Serial Number User Management section and click the Transfer button under the Action column.

  4. In the New User Email text box, enter the email address of the person you wish to transfer the license to and click Transfer. Be advised that this person has to share the same domain as you; however, it cannot be a public domain such as Gmail or Yahoo.

Transferring a license from another person [Pull]

You may need to pull a license from a person who left your company and as such is unable to transfer the license to you. License transfer from another person can be done only for named-user licenses.

  1. Log in to the Intel® Registration Center (IRC) by entering your login ID and password. On the right-hand side, enter the SN you wish to pull and click Register.

  2. If the SN belongs to the same domain as yours, you will need to confirm the transfer on the Take Ownership page.

  3. Once the license is transferred, you will be taken to the product’s download page, where you can download and install it.
  4. If you enter an SN that is not from the same domain as yours, you will get an error message.

If you wish to transfer the license to a user from a different domain, you will need to contact Intel support. If you have an Intel Premier Support (IPS) account, you can submit a request against your product. If you don’t have an IPS account, submit your request to the Intel Software Development Products Download, Registration, and Licensing forum. Please do not disclose your serial number or other personal information in your thread. Simply start a new discussion thread and state that you need to transfer a license. The Intel support engineer will ask you to send a private message with the necessary private information to complete the transfer. Private messages are only visible to Intel support engineers and the involved external party.

Have Questions?

Check out the Licensing FAQ
Or ask* in our Intel® Software Development Products Download, Registration & Licensing forum

* If you have a question be sure to start a new forum thread.

Using ACUWizard for Self-discovery of Configuration Paths


What is the ACUWizard Tool

ACUWizard is a tool used to enable and configure an Intel® Active Management Technology (Intel® AMT) capable device. The tool is included in the Intel® Setup and Configuration Software (Intel® SCS) download. While the tool comes with documentation, it may not be clear to IT professionals when specific options should be used or what benefits and drawbacks are associated with those options.

There are three main reasons to use the ACUWizard:

  • You need to configure an Intel AMT device but do not have a management console that supports any type of configuration.
  • Your console does not support remote configuration into Admin Control Mode, meaning you will need to use the USB configuration option.
  • You need to perform self-discovery of the configuration process.

The next sections describe the following:

  • OS-based configuration versus USB key-based configuration
  • Steps for using ACU Wizard to configure an Intel AMT Client via the OS-based method
  • Steps for using ACU Wizard to configure an Intel AMT Client via the USB key-based method

Configuration Methods Using the ACUWizard: OS-based method versus USB key-based method

Configuration can be performed from within the OS or via a USB key. OS-based configuration requires Microsoft Windows* 7 or higher and the LMS service, and provisions the system into Client Control Mode (CCM). There are two OS-based approaches:

  • Single system configuration. This method is easy and can range from a simple configuration to more advanced configurations. It is easy to replicate, but time consuming if you need to configure many Intel AMT Clients.
  • Multiple system configuration. This method is scriptable via the command line and is a popular option in environments containing many Intel AMT Clients.

The USB key-based configuration method is designed to use a USB key to push the configuration profile into the Intel® MEBX during a reboot. It is potentially much quicker than an OS-based configuration and has the added capability of configuring the device into Admin Control Mode (ACM). The USB configuration is not supported on Intel AMT 10+ LAN-less devices.

The USB configuration requires a setup.bin file. There are two tools for creating setup.bin: acuwizard.exe and acuconfig.exe. ACUConfig is a command-line tool and is somewhat cumbersome, so I won't go into detail about it in this article.

  • Single-use system configuration key. A key is generated specifically for a client and can be used only once. This type of key is necessary only if the OS has a static IP address, though DHCP-enabled systems can be supported as well.
  • Multi-use system configuration key. A single configuration file is created to configure multiple devices. However, the systems will all have the same password, and the key assumes each device is DHCP-enabled. If a client with a static OS IP address is configured in this manner, the system will in effect have two IP addresses.

A quick note on passwords: There are three basic passwords used with configurable Intel AMT devices:

  • MEBx password. This is your physical access password into the Intel® Management Engine BIOS Extension (Intel® MEBX). By default the USB configuration will set this to be the same as the Intel AMT password. The password rule for this is max 32 characters and complex. The default password is admin.
  • Intel AMT password. This is the remote management password and is set using all versions of the configuration discussed in this blog. The password rule for this is max 32 characters and complex.
  • RFB5900. This is not required; however, if the plan is to use a standard VNC viewer to make a local connection with Intel AMT KVM, the RFB password must be set. The password rule is exactly eight characters and complex.

Steps for using the ACU Wizard to configure an Intel® Active Management Technology client via the OS-based method

Single-System Configuration

Perform an OS-based configuration by launching ACUWizard as Admin. Once it’s launched follow these steps:

  1. Create the profile by opening the ACUWizard, and then selecting Configure/Unconfigure this System.
    Figure 1. Configuration Methods
     
  2. Select Configure via Windows.
  3. Select Next.
  4. In the Intel® AMT Configuration Utility – select Configure via Windows and do the following:
    • In Current Password, type a password. This is the password for the Intel® MEBX; if the password has not been changed, the default is admin.
    • Fill in New Password and Confirm Password.
      Figure 2. Example of Configure via Windows
       
    • Select Override Default Settings, and then click Network Settings.
      • If OS is set as DHCP enabled, verify the settings. Typical settings are:
        • Use the Following as FQDN – Select Host Name.
        • Select the Shared FQDN option.
        • Select Get IP from DHCP Server.
        • Update the DNS directly or via DHCP option 81.
        • Select OK.
      • If the OS IP is static, select the Change the IP section radio button and then select Use the same IP as the host.
      • Select Next.
        Figure 3. Example of Network Settings
  5. The software saves the profile for potential future use. Enter and confirm the Encryption Password.
  6. Select Configure.
  7. The Configuring your System dialog box launches. Wait until it closes, which can take a few minutes.
  8. The screen should now show Configuration Complete; select Finish.

Multiple System Configuration

Configuring Intel AMT devices using this method requires the use of two tools: ACUWizard.exe and ACUConfig.exe. The first step is to create a profile with the ACUWizard and then push the profile to the client with the ACUConfig tool. The following is an example of a basic profile; advanced profiles are beyond the scope of this blog. See Figures 1-3 for examples of what options are available in the ACUWizard’s GUI.

Note: This is a scriptable solution.

  1. Create the profile by opening the ACUWizard, and then selecting Create Settings to configure Multiple Systems (See Figure 1.)
  2. In the AMT Configuration Utility: Profile Designer window, select the green plus sign New.
    Figure 4. Example of Green Plus sign
     
  3. In the Configuration Profile Wizard, select Next.
  4. In the Configuration Profile Wizard Optional Settings window, select Next.
  5. In the Configuration Profile Wizard System Settings window:
    • Enter the RFB password if it is being used.
    • Enter the password in the Use the following password for all systems data field.
    • Select the Set button for Edit and FQDN.
    • There will be no changes, but note the changes required if a device has a static OS IP address.
    • Select Cancel.
    • Select Next.
      Figure 5. Example of Available Feature Settings
  6. In the Configuration Profile Wizard - "Finished" window:
    • Enter the Profile Name you want to use.
    • Encrypt the xml file by adding and confirming the password.
    • Select Finish.
      Figure 6. Profile Naming and Encryption Example
  7. In the Intel AMT Configuration Utility: Profile Designer window:
    • Take note of the Profile Path shown on your screen. It should be something like <userName>\documents\SCS_Profile.
    • Close the ACU Wizard.

At this point, steps 1 through 7 above are a one-time process per each custom profile needed. The following steps are to be repeated on each client.

  1. Copy the previously created profile and paste it in the configurator folder of the Intel SCS download.
  2. Copy the configurator folder to a location accessible to the Intel AMT Client (Local, Network share, USB thumb drive, and so on).
  3. Open a command prompt as admin, and run the following string: acuconfig.exe configamt <profile.xml>
  4. The tool should exit with code 0 for a successful configuration.

Steps for using ACUWizard to configure an Intel AMT Client via the USB Configuration

Creating a USB Key for configuration is a three-step process: Create a configuration profile, format a USB Key (Fat32), and save the profile to the USB key as setup.bin.

The profile can be created in two ways: as a single use key or a multiple use key.

Single-Use Key

This method creates a single use key that can't be reused without creating a new setup.bin file. You can keep the Intel AMT IP address the same as the OS IP address if it is statically configured. This key should only be created on the device that the finished USB key is going to configure. Figure 4 provides an example of what options are available for the Single-Use Key method.

To create the USB file setup.bin:

  1. Create the profile by opening the ACUWizard, and then selecting Configure/Unconfigure this System. (See Figure 1.)
  2. In the Intel AMT Configuration Utility - Configuration Options window:
    • Select Configure via USB Key.
    • Select Next.
  3. In the Intel AMT Configuration Utility - Configure via USB Key window:
    • Fill in Current Password. This is the password for the Intel® MEBX; if the password has not been changed, the default is "admin".
    • Fill in New Password and Confirm Password.
  4. Select Display advanced settings
    • If the OS IP address is DHCP enabled, verify that the checkbox for DHCP Enabled is checked.
    • If OS IP address is static, uncheck the DHCP Enabled checkbox and provide the Network address information.
  5. Select Next.
    Figure 7. Example of USB Key Configuration GUI
     
  6. In the Intel AMT Configuration Utility – Create Configuration USB Key window:
    • Specify the appropriate USB Drive in the selection window.
    • Select OK.
    • In the Formatting USB Drive window:
      • Select Yes to format the drive. In the Configuration USB Key Created Successfully dialog box, click OK.
  7. The USB key is now successfully configured.

Multi-Use Key

This method creates a single multi-use key that can be reused without creating a new setup.bin file. This method allows for quick configuration over multiple devices. However, the configuration file is made specifically for DHCP-enabled or Static IP-assigned operating systems. Using the wrong key causes a mismatch between the OS (static) and Intel AMT (DHCP-enabled) IP addresses. This is not necessarily wrong, but it requires tracking multiple IPs for the same physical device, causing more management requirements. Figure 9, below provides an example of what the GUI looks like for performing the Multi-Use Key method.

To create the USB file - setup.bin:

  1. Open the ACU Wizard and then select Create Settings to configure Multiple Systems. (See Figure 1.)
  2. In Intel AMT Configuration Utility: Profile Designer window:
    • Select the Tools button in the upper-right corner.
      Figure 8. Example of tools button
       
    • Select Prepare a USB for Manual Configuration.
  3. In the Settings for Manual Configuration of Multiple Systems window:
    • Select Mobile Systems or Desktop Systems.
      Note: Choosing the wrong device setting will trigger an error about applying power policy. The configuration will be successful; however, the firmware defaults to “Intel® AMT Always On (s0-s5)” and DHCP-enabled.
    • Select Intel AMT Version level 6+ or 7+.
    • Enter passwords:
      • Old MEBx Password: If the password has not been changed, the default password will be admin.
      • New Password and confirm: The password must be complex and up to 32 characters.
    • Specify the system Power State – select Always On (s0-s5)
    • User Consent Required - Leave unchecked
      Note: With Intel AMT 11, a change was made that defaults User Consent to be KVM only. You can modify this post-configuration via the WS-Management command or through an existing tool such as Mesh Commander.
    • Specify the appropriate USB drive in the selection window.
    • Select OK.
      Figure 9. Example of USB Key Configurable Options
  4. In the Formatting USB Drive window:
    • Select Yes to format the drive. In the Configuration USB Key Created Successfully dialog box, select OK to finish the configuration.
  5. The USB key is now successfully configured.

How to use the Configuration USB Key

Now that the key has been created, we need to use it to configure the Intel AMT device. Insert the USB key into the Intel AMT device and reboot the system. During reboot, the device will detect the setup.bin file and a message should display asking whether you want to configure the device. Select "Y" for yes; a few seconds later, press Enter at the success screen.

A few things to note regarding the USB key: don’t use drives larger than 32 GB; format the drive as FAT32; USB configuration is occasionally disabled in the BIOS and must be enabled; and if a USB key fails to work, try a different model or brand.

Additional Resources

Summary

There are many options and reasons for using the ACUWizard tool, and which you choose will depend on your specific environmental requirements. The ACUWizard tool is designed to exercise the full range of features regardless of which method is used. There is no single “correct” way to do configuration; all options are valid, and determining the method that will work in your environment is the essential element.

About the Author

Joe Oster has been active at Intel around Intel® vPro™ Technology and Intel AMT since 2006. He is passionate about technology and is an advocate for the Managed Service Provider and Small/Medium business channels. When not working, he enjoys being a Dad and spends time working on his family farm or flying Drones and RC Aircraft.

Intel® Performance Snapshot Feedback


Help us improve Intel® Performance Snapshot

The Feedback form allows you to send your written feedback to Intel to help us improve the Intel® Performance Snapshot Product. Although the Feedback form doesn’t intentionally collect information that identifies you as an individual, it is possible that such information might be captured in the feedback you choose to provide. If this is the case, Intel does not use this information to identify or contact you or other individuals.

Before sending your feedback, you also have the option to provide Intel with your email address. Intel will only use your email address to contact you in case there are questions about your feedback.

Information Collected, Processed, or Transmitted:

The Feedback form only collects the information you type in the Feedback form. This information is sent when you choose to send feedback. The Feedback feature does not collect or send any other information. Your feedback and (optional) email address are not associated with any data collected about the installation, setup and use of Intel® Performance Snapshot.

To learn more about Intel’s privacy practices, please visit http://www.intel.com/privacy.

Smart Glasses to Help the Blind, With Pivothead LiveModPro and Intel Edison


You want to do cool computer vision tricks with the Intel Edison? Yes, but why not work on a project to help the blind and put your coding skills to good use!

My project was to perform basic but robust computer vision tricks to help a blind person, such as:

  • detect barcodes with zbar, get the description from a local source or from internet databases, and read the text aloud with espeak
  • detect the color at the center of the frame, find the HTML and Pantone names, and read the name
  • find colors that would look good with that color (to select matching clothes) and read them
  • find faces in front of you, and say where they are relative to you and how far away. The text is read in stereo to give spatial information on where the face is located
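As an illustration of the color-naming trick, here is a minimal sketch that picks the nearest named color for an RGB sample; it uses a tiny hand-picked palette of my own in place of a real HTML/Pantone database:

```python
import math

# Tiny stand-in palette; a real app would use full HTML/Pantone name tables.
PALETTE = {
    "black":  (0, 0, 0),
    "white":  (255, 255, 255),
    "red":    (255, 0, 0),
    "green":  (0, 128, 0),
    "blue":   (0, 0, 255),
    "yellow": (255, 255, 0),
}

def nearest_color_name(rgb):
    # Name the color by smallest Euclidean distance in RGB space.
    return min(PALETTE, key=lambda name: math.dist(PALETTE[name], rgb))
```

The returned name can then be passed to a text-to-speech engine such as espeak.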

The first step is to pick a camera. Pivothead SMART glasses were an obvious choice: the image quality is awesome, the hardware codec too, and they're cheap and super light. It turns out they have an Intel Edison extension card called "LiveModPro" for doing computer vision in the glasses, on battery power. Perfect for my project.

Check out the video of the demo on YouTube.

Steps:

  • flash the Edison with build v3
  • set up the board with Internet access
  • install the repositories from repo.opkg.net
  • upgrade node, but not all the packages (the kernel would not work well if you do)
  • install packages: opkg install fswebcam nano espeak ffmpeg-x264-presets gps-utils htop git lighttpd ofono opencv opencv-dev opencv-staticdev opencv-apps opencv-samples opencv-samples-dev python-opencv python-pip python-numpy zbar mjpg-streamer gstreamer1.0-plugins-good-interleave gstreamer1.0-plugins-good-audiofx
  • install node packages: npm install -g fs sleep tinycolor2 array-unique striptags color-namer color-scheme onecolor util request shelljs-nodecli linux-input-device canvas okrabyte ocra.js
  • unpack the source from http://dl.free.fr/l31NwuXWp
  • set up the web server on port 81 with /home/root/www/ as the root folder by editing /etc/lighttpd.conf
  • test the webcam: fswebcam -d /dev/video0 /home/root/www/shot.png, then open http://EDISON_IP:81/shot.png in your browser to check the image
  • optional: set up a Bluetooth headset or use headphones connected to the glasses' jack

Then:

  • go to the pivothead-intel folder and launch one of the demos with a command like: node demo_barcode.js
  • listen to the spoken messages, or open the debug interface in your web browser (frame grabs, detected faces, audio messages, ...)

Enjoy the demo video and the code comments!

 

Cordova Whitelisting with Intel® XDK for AJAX and Launching External Apps


Cordova CLI 5.1.1 and Higher

Starting with Apache* Cordova* CLI 5.1, the whitelisting security model that restricts and permits access to other domains from the app has changed. It is recommended that, before you move your app to production, you provide a whitelist of the domains that you want your app to have access to.

Android

Starting with Cordova Android 4.0, your Android app's security policy is managed through a Whitelist Plugin and standard W3C Content Security Policy (CSP) directives. The Android Cordova whitelist plugin understands three distinct whitelist tags:

  1. <access> tag for Network Requests
  2. <allow-intent> tag for Intent Requests
  3. <allow-navigation> tag for Navigation
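As a hedged sketch (all domains below are placeholders), the three tags look like this in config.xml:

```xml
<!-- Network Requests: AJAX (XHR), images, and other content fetches -->
<access origin="https://api.example.com" />

<!-- Intent Requests: URLs the app may ask the system to open -->
<allow-intent href="tel:*" />
<allow-intent href="https://*/*" />

<!-- Navigation: pages the webview itself may be navigated to -->
<allow-navigation href="https://example.com/*" />
```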

CSP directives are set by including a meta-tag in the <head> section of your index.html file. An Introduction to Content Security Policy is a good place to go to understand how to configure and apply these whitelist rules to your app. The CSP Playground is also a very useful site for learning about CSP and validating your CSP rules.

iOS

Unlike Android, your Cordova iOS app's whitelist security policy is managed directly by the cordova-ios framework. Cordova iOS versions prior to 4.0 used only the W3C Widget Access specification for domain whitelisting (i.e., the <access> tag). Starting with Cordova iOS 4.0, your Cordova iOS app's whitelist uses the <access> tag, as before, and adds support for two additional tags: <allow-intent> and <allow-navigation> as described in the Whitelist Plugin.

Starting with iOS 9, a scheme called App Transport Security (ATS) is used to implement whitelist rules. Cordova automatically converts your <access> and <allow-navigation> tags to their equivalent ATS directives. When used with iOS apps, the <access> and <allow-navigation> tags support two new attributes that provide extra security for a domain whose security attributes you control. They have their equivalents in ATS:

  1. minimum-tls-version
  2. requires-forward-secrecy

See the ATS Technote for more details.
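As a hedged illustration (the domain and TLS version are placeholders), the two attributes attach directly to the whitelist tags:

```xml
<access origin="https://example.com"
        minimum-tls-version="TLSv1.2"
        requires-forward-secrecy="true" />
```

Cordova translates these into the corresponding NSExceptionMinimumTLSVersion and NSExceptionRequiresForwardSecrecy ATS keys.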

Windows

On Windows platforms, Cordova continues to use the W3C Widget Access specification to enforce domain whitelisting, which is built into the Cordova Windows framework.

See the following section for information regarding CSP directives and the Windows platforms.

Content Security Policy (CSP)

CSP is managed by the webview runtime (the built-in web runtime on which your Cordova app executes). Network requests include such actions as retrieving images from a remote server, performing AJAX requests (XHR), etc. CSP controls are specified in a single meta tag in your HTML files. Most Cordova apps are single-page apps, meaning they have only a single index.html file. If your app contains multiple HTML files, it is recommended that you use a CSP <meta> tag on all of your pages.

Android version 4.4 (KitKat) and above supports the use of CSP (the Android 4.4 native webview is based on Chromium 30). If you are using the Android Crosswalk webview, CSP is supported on Android version 4.0 (Ice Cream Sandwich) and later (the Crosswalk webviews are also based on Chromium).

Apple iOS 7.1 and later supports the use of CSP directives (Apple iOS devices run on the Safari webview).

Windows Phone 8.x devices provide partial support via the X-Content-Security-Policy directive (Windows Phone 8.x devices run on the IE10 and IE11 mobile webviews). Windows 10 devices include full support for standard CSP directives (Windows Phone 10 and Windows 10 tablets run on the Edge webview).

It is recommended that you use CSP whenever possible!

To get started with CSP, you can include the following overly permissive directive in the <head> section of your index.html file:

<meta http-equiv="Content-Security-Policy" content="default-src 'self' 'unsafe-eval' data: blob: filesystem: ws: gap: cdvfile: https://ssl.gstatic.com *; style-src * 'unsafe-inline'; script-src * 'unsafe-inline' 'unsafe-eval'; img-src * data: 'unsafe-inline'; connect-src * 'unsafe-inline'; child-src *; ">

There is no single CSP directive that can be recommended for all applications. The correct CSP directive is the one that provides the access you need while simultaneously ensuring the protection necessary to keep your app from being compromised and exposing customer or user data.
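For instance, here is a hedged sketch of a much tighter policy for an app that loads only local assets and talks to a single API (the hostname is a placeholder):

```html
<meta http-equiv="Content-Security-Policy" content="default-src 'self'; connect-src 'self' https://api.example.com; img-src 'self' data:; style-src 'self' 'unsafe-inline'">
```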

This StackOverflow post is very helpful to read as an introduction to how Content Security Policy rules work.

Intel XDK 3088 and Higher

Starting with Intel XDK version 3088, the UI provided to specify whitelist entries has changed to accommodate changes in Cordova whitelist rules. Please read the rest of this document to understand how to specify whitelist entries in the Intel XDK.

Network Request Whitelist (<access>):

Network Request controls which network requests, such as content fetching or AJAX (XHR), are allowed to be made from within the app. For those webviews that support CSP, it is recommended that you use CSP. This whitelist entry is intended for older webviews that do not support CSP.

These whitelist specifications are defined in a Cordova CLI config.xml file using the <access origin> tag. Within the Intel XDK UI you specify your URLs in the Build Settings section of the Projects tab. For example, to specify http://mywebsite.com as a whitelisted URL:

Networkwhitelist5.4.1

By default, only requests to file:// URLs are allowed, but Cordova applications by default include access to all websites. It is recommended that you provide your whitelist before publishing your app.

Intent Whitelist (<allow-intent>):

The intent whitelist controls which URLs the app is allowed to ask the system (i.e., the webview) to open. By default, no external URLs are allowed. This applies to inline hyperlinks and calls to the window.open() function (note: if you are using the inAppBrowser it may change the behavior of window.open(), especially regarding whitelist rules). Your app can open "hyperlinks" like a browser (for http:// and https:// URLs) and can "open" other apps via hyperlinks, such as the phone, SMS, email, maps, etc.

To allow your app to launch external apps through a URL or via window.open(), specify your rules in the Build Settings section of the Projects tab. 

Navigation Whitelist (<allow-navigation>):

The navigation whitelist rules control which URLs the application webview can be navigated to. Only top level navigations are allowed, with the exception of Android, where it also applies to iframes for non-http(s) schemes. By default, you can only navigate to file:// URLs.
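A hedged config.xml sketch (the domain is a placeholder); the trailing wildcard whitelists every page on that site for navigation:

```xml
<allow-navigation href="https://example.com/*" />
```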

Additional Whitelist Settings for iOS ATS:

The UI whitelist settings for iOS are similar to those described above, with the addition of an ATS setting. When you click the "Edit ATS settings" link you can specify ATS settings for the Network Request and Navigation whitelist rules on iOS 9 devices. ATS settings do not apply to iOS 8 and earlier devices.

Most users should not have to change the ATS settings and can use the default values. For more details about ATS you can read this tutsplus.com article or search the web for additional articles.

The ATS settings dialog looks like this:

Windows Platform Whitelist Rules:

Windows platforms use the W3C Widget Access for whitelisting (that is, the <access> tag). Windows 10 also supports the <allow-navigation> tag. The rules for those tags are consistent with those described above. The Windows platforms also support CSP whitelist rules, which were described in the CSP section above.

Intel XDK versions prior to 3088:

Navigation Whitelist :

Navigation Whitelist controls which URLs the WebView can be navigated to. (Only top-level navigations are allowed, except on Android, where it also applies to iframes for non-http(s) schemes.) By default, you can only navigate to file:// URLs. To allow other URLs, the <allow-navigation> tag is used in the config.xml file. With the Intel® XDK you need not specify this in config.xml; the Intel XDK automatically generates config.xml from the Build Settings.

In the Intel® XDK you specify the URL that you would like the WebView to be navigated to under Build Settings > Android > Cordova CLI 5.1.1 > Whitelist > Cordova Whitelist > Navigation. For example: http://google.com

CLI5.1.1AndroidNavigation.png

Intent Whitelist:

Intent Whitelist controls which URLs the app is allowed to ask the system to open. By default, no external URLs are allowed. This applies only to hyperlinks and calls to window.open(). The app can open a browser (for http:// and https:// URLs) or other apps like phone, SMS, email, maps, etc. To allow the app to launch external apps through a URL or launch the inAppBrowser through window.open(), the <allow-intent> tag is used in config.xml; but again, you need not specify this in config.xml, as the Intel® XDK takes care of it through the Build Settings.

In the Intel® XDK, specify the URL you want to whitelist for external applications under Build Settings > Android > Cordova CLI 5.1.1 > Whitelist > Cordova Whitelist > Intent. For example: http://example.com, tel:*, or sms:*

CLI5.1.1AndroidIntent.png

Network Request Whitelist:

Network Request Whitelist controls which network requests, such as content fetching or AJAX (XHR), are allowed to be made from within the app. For the webviews that support CSP, it is recommended that you use CSP; this whitelist is for the older WebViews that do not support CSP. This whitelist is defined in config.xml using the <access origin> tag, but once again, in the Intel® XDK you provide the URL under Build Settings > Android > Cordova CLI 5.1.1 > Whitelist > Cordova Whitelist > Network Request. For example: http://mywebsite.com

By default, only requests to file:// URLs are allowed, but Cordova applications by default include access to all websites. It is recommended that you provide your whitelist before publishing your app.

CLI5.1.1AndroidNetwork.png

Content Security Policy:

Content Security Policy controls which network requests, such as images and AJAX requests (XHR), are allowed to be made directly by the WebView. It is specified through meta tags in your HTML files. It is recommended that you use a CSP <meta> tag on all of your pages. Android KitKat and later supports CSP natively, and the Crosswalk webview supports CSP on all Android versions.

For example, include this in your index.html file:

<meta http-equiv="Content-Security-Policy" content="default-src 'self' data: gap: cdvfile: https://ssl.gstatic.com; style-src 'self' 'unsafe-inline'; media-src *">

iOS W3CWidgetAcess CLI 5.1.1

Microsoft Windows* platforms also use the W3C Widget Access standard, and the build settings for whitelisting are as follows.

iOS W3CWidgetAcess CLI 5.1.1

Cordova CLI 4.1.2

Cordova CLI 4.1.2 is no longer supported by the Intel XDK. Please update your project to use CLI 5.1.1 or later.

 
