
API without Secrets: Introduction to Vulkan* Part 3


Download [PDF 885 KB]

Go back to: API without Secrets: Introduction to Vulkan* Part 2: Swap chain


Tutorial 3: First Triangle – Graphics Pipeline and Drawing

In this tutorial we will finally draw something on the screen. One single triangle should be just fine for our first Vulkan-generated “image.”

The graphics pipeline and drawing in general require lots of preparations in Vulkan (in the form of filling many structures with even more different fields). There are potentially many places where we can make mistakes, and in Vulkan, even simple mistakes may lead to the application not working as expected, displaying just a blank screen, and leaving us wondering what went wrong. In such situations validation layers can help us a lot. But I didn’t want to dive into too many different aspects and the specifics of the Vulkan API. So I prepared the code to be as small and as simple as possible.

This led me to create an application that works properly and displays a simple triangle the way I expected, but one that also uses mechanics which are not recommended, not flexible, and probably not too efficient (though correct). I don’t want to teach solutions that aren’t recommended, but this approach simplifies the tutorial considerably and allows us to focus only on the minimal required set of API usage. I will point out the “disputable” functionality as soon as we get to it. In the next tutorial, I will show the recommended way of drawing triangles.

To draw our first simple triangle, we need to create a render pass, a framebuffer, and a graphics pipeline. Command buffers are of course also needed, but we already know something about them. We will create simple GLSL shaders and compile them into Khronos’s SPIR*-V language—the only (at this time) form of shaders that Vulkan (officially) understands.

If nothing displays on your computer’s screen, try to simplify the code as much as possible or even go back to the second tutorial. Check whether the command buffer that just clears the image behaves as expected, and whether the color the image was cleared to is properly displayed on the screen. If so, modify the code and add the parts from this tutorial. Check every return value to make sure it is VK_SUCCESS. If these ideas don’t help, wait for the tutorial about validation layers.
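Such checks appear all over the code, so a small helper can make them less tedious. Below is a minimal sketch of what one could look like; the CheckResult() function is my own invention for illustration and is not part of the tutorial’s source code:

bool CheckResult( VkResult result, const char *message ) {
  // Print a diagnostic message along with the numerical error code when a call fails
  if( result != VK_SUCCESS ) {
    printf( "%s (VkResult: %d)\n", message, static_cast<int>(result) );
    return false;
  }
  return true;
}

Such a helper could then wrap any Vulkan call that returns a VkResult, for example the vkCreateRenderPass() call described later in this tutorial.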

About the Source Code Example

For this and succeeding tutorials, I’ve changed the sample project. The Vulkan preparation phases described in the previous tutorials were placed in a “VulkanCommon” class found in separate files (header and source). The class for a given tutorial, which is responsible for presenting the topics of that tutorial, inherits from the “VulkanCommon” class and has access to some (required) Vulkan variables like the device or swap chain. This way I can reuse the Vulkan creation code and prepare smaller classes focusing only on the presented topics. The code from the earlier chapters works properly, so it should also be easier to find potential mistakes.

I’ve also added a separate set of files for some utility functions. Here we will be reading SPIR-V shaders from binary files, so I’ve added a function that loads the contents of a binary file. It can be found in the Tools.cpp and Tools.h files.

Creating a Render Pass

To draw anything on the screen, we need a graphics pipeline. But creating it now will require pointers to other structures, which will probably also need pointers to yet other structures. So we’ll start with a render pass.

What is a render pass? A good mental picture is the “logical” render pass found in many well-known rendering techniques like deferred shading. This technique consists of many subpasses. The first subpass draws the geometry with shaders that fill the G-Buffer: they store diffuse color in one texture, normal vectors in another, shininess in another, and depth (position) in yet another. Next, for each light source, drawing is performed that reads some of this data (normal vectors, shininess, depth/position), calculates lighting, and stores it in another texture. A final pass aggregates the lighting data with the diffuse color. This is a (very rough) explanation of deferred shading, but it describes the essence of a render pass: a set of data required to perform some drawing operations, storing data in textures and reading data from other textures.

In Vulkan, a render pass represents (or describes) a set of framebuffer attachments (images) required for drawing operations and a collection of subpasses that drawing operations will be ordered into. It is a construct that collects all color, depth, and stencil attachments and the operations modifying them, so that the driver does not have to deduce this information by itself, which may give substantial optimization opportunities on some GPUs. A subpass consists of drawing operations that use (more or less) the same attachments. Each of these drawing operations may read from some input attachments and render data into some other (color, depth, stencil) attachments. A render pass also describes the dependencies between these attachments: in one subpass we perform rendering into a texture, but in another this texture will be used as a source of data (that is, it will be sampled from). All this data helps the graphics hardware optimize drawing operations.

To create a render pass in Vulkan, we call the vkCreateRenderPass() function, which requires a pointer to a structure describing all the attachments involved in rendering and all the subpasses forming the render pass. As usual, the more attachments and subpasses we use, the more array elements containing properly filled structures we need. In our simple example, we will be drawing only into a single texture (color attachment) with just a single subpass.

Render Pass Attachment Description

VkAttachmentDescription attachment_descriptions[] = {

  {

    0,                                          // VkAttachmentDescriptionFlags   flags

    GetSwapChain().Format,                      // VkFormat                       format

    VK_SAMPLE_COUNT_1_BIT,                      // VkSampleCountFlagBits          samples

    VK_ATTACHMENT_LOAD_OP_CLEAR,                // VkAttachmentLoadOp             loadOp

    VK_ATTACHMENT_STORE_OP_STORE,               // VkAttachmentStoreOp            storeOp

    VK_ATTACHMENT_LOAD_OP_DONT_CARE,            // VkAttachmentLoadOp             stencilLoadOp

    VK_ATTACHMENT_STORE_OP_DONT_CARE,           // VkAttachmentStoreOp            stencilStoreOp

    VK_IMAGE_LAYOUT_PRESENT_SRC_KHR,            // VkImageLayout                  initialLayout;

    VK_IMAGE_LAYOUT_PRESENT_SRC_KHR             // VkImageLayout                  finalLayout

  }

};

1.Tutorial03.cpp, function CreateRenderPass()

To create a render pass, first we prepare an array with elements describing each attachment, regardless of the type of attachment and how it will be used inside a render pass. Each array element is of type VkAttachmentDescription, which contains the following fields:

  • flags – Describes additional properties of an attachment. Currently, only an aliasing flag is available, which informs the driver that the attachment shares the same physical memory with another attachment; it is not the case here so we set this parameter to zero.
  • format – Format of an image used for the attachment; here we are rendering directly into a swap chain so we need to take its format.
  • samples – Number of samples of the image; we are not using any multisampling here so we just use one sample.
  • loadOp – Specifies what to do with the image’s contents at the beginning of a render pass, whether we want them to be cleared, preserved, or we don’t care about them (as we will overwrite them all). Here we want to clear the image to the specified value. This parameter also refers to the depth part of depth/stencil images.
  • storeOp – Informs the driver what to do with the image’s contents after the render pass (after a subpass in which the image was used for the last time). Here we want the contents of the image to be preserved after the render pass as we intend to display them on screen. This parameter also refers to the depth part of depth/stencil images.
  • stencilLoadOp – The same as loadOp but for the stencil part of depth/stencil images; for color attachments it is ignored.
  • stencilStoreOp – The same as storeOp but for the stencil part of depth/stencil images; for color attachments this parameter is ignored.
  • initialLayout – The layout the given attachment will have when the render pass starts (the layout the image is provided in by the application).
  • finalLayout – The layout the driver will automatically transition the given image into at the end of a render pass.

Some additional information is required for load and store operations and initial and final layouts.

Load op refers to the attachment’s contents at the beginning of a render pass. This operation describes what the graphics hardware should do with the attachment: clear it, preserve its existing contents (leave them untouched), or treat the contents as irrelevant because the application intends to overwrite them. This gives the hardware an opportunity to optimize memory operations. For example, if we intend to overwrite all of the contents, the hardware won’t bother with them and, if it is faster, may allocate totally new memory for the attachment.

Store op, as the name suggests, is used at the end of a render pass and informs the hardware whether we want to use the contents of the attachment after the render pass or whether we don’t care about them and they may be discarded. In some scenarios (when contents are discarded) this gives the hardware the ability to create the image in temporary, fast memory, as the image will “live” only during the render pass, and the implementation may save some memory bandwidth by avoiding writing back data that is not needed anymore.

When an attachment has a depth format (and potentially also a stencil component) load and store ops refer only to the depth component. If a stencil is present, stencil values are treated the way stencil load and store ops describe. For color attachments, stencil ops are not relevant.

Layout, as I described in the swap chain tutorial, is an internal memory arrangement of an image. Image data may be organized in such a way that neighboring “image pixels” are also neighbors in memory, which can increase cache hits (faster memory reading) when the image is used as a source of data (that is, during texture sampling). But caching is not necessary when the image is used as a target for drawing operations, and the memory for that image may be organized in a totally different way. An image may have a linear layout (which gives the CPU the ability to read or populate the image’s memory contents) or an optimal layout (which is optimized for performance but is also hardware/vendor dependent). So some hardware may have special memory organization for some types of operations; other hardware may be operations-agnostic. Some memory layouts may be better suited for some intended image “usages”; put the other way around, some usages may require specific memory layouts. There is also a general layout that is compatible with all types of operations. But from a performance point of view, it is always best to set the layout appropriate for the intended image usage, and it is the application’s responsibility to inform the driver about transitions.

Image layouts may be changed using image memory barriers. We did this in the swap chain tutorial when we first changed the layout from the presentation source (image was used by the presentation engine) to transfer destination (we wanted to clear the image with a given color). But layouts, apart from image memory barriers, may also be changed automatically by the hardware inside a render pass. If we specify a different initial layout, subpass layouts (described later), and final layout, the hardware does the transition automatically at the appropriate time.
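As a reminder of what an explicit transition looks like, here is a hedged sketch of an image memory barrier similar to the one used in the swap chain tutorial; the command_buffer and swap_chain_image variables are stand-ins:

VkImageMemoryBarrier barrier_from_present_to_draw = {
  VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,     // VkStructureType                sType
  nullptr,                                    // const void                    *pNext
  VK_ACCESS_MEMORY_READ_BIT,                  // VkAccessFlags                  srcAccessMask
  VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,       // VkAccessFlags                  dstAccessMask
  VK_IMAGE_LAYOUT_PRESENT_SRC_KHR,            // VkImageLayout                  oldLayout
  VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,   // VkImageLayout                  newLayout
  VK_QUEUE_FAMILY_IGNORED,                    // uint32_t                       srcQueueFamilyIndex
  VK_QUEUE_FAMILY_IGNORED,                    // uint32_t                       dstQueueFamilyIndex
  swap_chain_image,                           // VkImage                        image
  { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 }   // VkImageSubresourceRange        subresourceRange
};

vkCmdPipelineBarrier( command_buffer, VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT, VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT, 0, 0, nullptr, 0, nullptr, 1, &barrier_from_present_to_draw );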

Initial layout informs the hardware about the layout the application “provides” (or “leaves”) the given attachment with. This is the layout the image starts with at the beginning of a render pass (in our example we acquire the image from the presentation engine so the image has a “presentation source” layout set). Each subpass of a render pass may use a different layout, and the transition will be done automatically by the hardware between subpasses. The final layout is the layout the given attachment will be transitioned into (automatically) at the end of a render pass (after a render pass is finished).

This information must be prepared for each attachment that will be used in a render pass. When graphics hardware receives this information a priori, it may optimize operations and memory during the render pass to achieve the best possible performance.

Subpass Description

VkAttachmentReference color_attachment_references[] = {

  {

    0,                                          // uint32_t                       attachment

    VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL    // VkImageLayout                  layout

  }

};



VkSubpassDescription subpass_descriptions[] = {

  {

    0,                                          // VkSubpassDescriptionFlags      flags

    VK_PIPELINE_BIND_POINT_GRAPHICS,            // VkPipelineBindPoint            pipelineBindPoint

    0,                                          // uint32_t                       inputAttachmentCount

    nullptr,                                    // const VkAttachmentReference   *pInputAttachments

    1,                                          // uint32_t                       colorAttachmentCount

    color_attachment_references,                // const VkAttachmentReference   *pColorAttachments

    nullptr,                                    // const VkAttachmentReference   *pResolveAttachments

    nullptr,                                    // const VkAttachmentReference   *pDepthStencilAttachment

    0,                                          // uint32_t                       preserveAttachmentCount

    nullptr                                     // const uint32_t*                pPreserveAttachments

  }

};

2.Tutorial03.cpp, function CreateRenderPass()

Next we specify the description of each subpass our render pass will include. This is done using VkSubpassDescription structure, which contains the following fields:

  • flags – Parameter reserved for future use.
  • pipelineBindPoint – Type of pipeline in which this subpass will be used (graphics or compute). Our example, of course, uses a graphics pipeline.
  • inputAttachmentCount – Number of elements in the pInputAttachments array.
  • pInputAttachments – Array with elements describing which attachments are used as an input and can be read from inside shaders. We are not using any input attachments here, so we set this parameter to null.
  • colorAttachmentCount – Number of elements in pColorAttachments and pResolveAttachments arrays.
  • pColorAttachments – Array describing (pointing to) attachments which will be used as color render targets (that is, images that will be rendered into).
  • pResolveAttachments – Array closely connected with color attachments. Each element from this array corresponds to an element from the color attachments array; any such color attachment will be resolved to the given resolve attachment (if the whole pointer is not null and the resolve attachment at the same index is not set to VK_ATTACHMENT_UNUSED). This is optional and can be set to null.
  • pDepthStencilAttachment – Description of an attachment that will be used for depth (and/or stencil) data. We don’t use depth information here so we can set it to null.
  • preserveAttachmentCount – Number of elements in pPreserveAttachments array.
  • pPreserveAttachments – Array describing attachments that should be preserved. When we have multiple subpasses not all of them will use all attachments. If a subpass doesn’t use some of the attachments but we need their contents in the later subpasses, we must specify these attachments here.

The pInputAttachments, pColorAttachments, pResolveAttachments, and pDepthStencilAttachment parameters are all of type VkAttachmentReference (pPreserveAttachments is just an array of indices). This structure contains only these two fields:

  • attachment – Index into an attachment_descriptions array of VkRenderPassCreateInfo.
  • layout – Requested (required) layout the attachment will use during a given subpass. The hardware will perform an automatic transition into a provided layout just before a given subpass.

This structure contains references (indices) into the attachment_descriptions array of VkRenderPassCreateInfo. When we create a render pass we must provide a description of all attachments used during a render pass. We’ve prepared this description earlier in “Render pass attachment description” when we created the attachment_descriptions array. Right now it contains only one element, but in more advanced scenarios there will be multiple attachments. So this “general” collection of all render pass attachments is used as a reference point. In the subpass description, when we fill pColorAttachments or pDepthStencilAttachment members, we provide indices into this very “general” collection, like this: take the first attachment from all render pass attachments and use it as a color attachment. The second attachment from that array will be used for depth data.

There is a separation between a whole render pass and its subpasses because each subpass may use multiple attachments in a different way, that is, in one subpass we are rendering into one color attachment but in the next subpass we are reading from this attachment. In this way, we can prepare a list of all attachments used in the whole render pass, and at the same time we can specify how each attachment will be used in each subpass. And as each subpass may use a given attachment in its own way, we must also specify each image’s layout for each subpass.

So before we can specify a description of all subpasses (an array with elements of type VkSubpassDescription) we must create references for each attachment used in each subpass. And this is what the color_attachment_references variable was created for. When I write a tutorial for rendering into a texture, this usage will be more apparent.
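To make this a little more concrete, here is a hedged sketch (not part of this tutorial’s code) of how the references might look if our render pass also contained a depth attachment at index 1 in the attachment_descriptions array:

VkAttachmentReference color_reference = {
  0,                                                // uint32_t                       attachment
  VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL          // VkImageLayout                  layout
};

VkAttachmentReference depth_reference = {
  1,                                                // uint32_t                       attachment
  VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL  // VkImageLayout                  layout
};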

Render Pass Creation

We now have all the data we need to create a render pass.

VkRenderPassCreateInfo render_pass_create_info = {

  VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO,    // VkStructureType                sType

  nullptr,                                      // const void                    *pNext

  0,                                            // VkRenderPassCreateFlags        flags

  1,                                            // uint32_t                       attachmentCount

  attachment_descriptions,                      // const VkAttachmentDescription *pAttachments

  1,                                            // uint32_t                       subpassCount

  subpass_descriptions,                         // const VkSubpassDescription    *pSubpasses

  0,                                            // uint32_t                       dependencyCount

  nullptr                                       // const VkSubpassDependency     *pDependencies

};



if( vkCreateRenderPass( GetDevice(), &render_pass_create_info, nullptr, &Vulkan.RenderPass ) != VK_SUCCESS ) {

  printf( "Could not create render pass!\n" );

  return false;

}



return true;

3.Tutorial03.cpp, function CreateRenderPass()

We start by filling the VkRenderPassCreateInfo structure, which contains the following fields:

  • sType – Type of structure (VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO here).
  • pNext – Parameter not currently used.
  • flags – Parameter reserved for future use.
  • attachmentCount – Number of all different attachments (elements in the pAttachments array) used during the whole render pass (here just one).
  • pAttachments – Array specifying all attachments used in a render pass.
  • subpassCount – Number of subpasses a render pass consists of (and number of elements in pSubpasses array – just one in our simple example).
  • pSubpasses – Array with descriptions of all subpasses.
  • dependencyCount – Number of elements in pDependencies array (zero here).
  • pDependencies – Array describing dependencies between pairs of subpasses. We have only one subpass, so there are no dependencies (we set this parameter to null).

Dependencies describe which parts of the graphics pipeline use a given memory resource, and in what way. Each subpass may use resources differently, and the layout of a resource alone may not fully define how the subpass uses it. Some subpasses may render into images or store data through shader images. Others may not use images at all, or may read from them at different pipeline stages (that is, vertex or fragment).

This information helps the driver optimize automatic layout transitions and, more generally, barriers between subpasses. If we write into images only in a vertex shader, there is no point in waiting until the fragment shader executes (in terms of the images used, of course). After all the vertex operations are done, the images may immediately change their layouts and memory access type, and even some parts of the graphics hardware may start executing the next operations (those referencing or reading the given images) without waiting for the rest of the commands from the given subpass to finish. For now, just remember that dependencies are important from a performance point of view.
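To give a feel for what such a dependency looks like, below is a hedged sketch of a VkSubpassDependency that could describe a “render in subpass 0, sample in subpass 1” relation; it is not used in this tutorial:

VkSubpassDependency subpass_dependency = {
  0,                                              // uint32_t                       srcSubpass
  1,                                              // uint32_t                       dstSubpass
  VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,  // VkPipelineStageFlags           srcStageMask
  VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,          // VkPipelineStageFlags           dstStageMask
  VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,           // VkAccessFlags                  srcAccessMask
  VK_ACCESS_SHADER_READ_BIT,                      // VkAccessFlags                  dstAccessMask
  VK_DEPENDENCY_BY_REGION_BIT                     // VkDependencyFlags              dependencyFlags
};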

So now that we have prepared all the information required to create a render pass, we can safely call the vkCreateRenderPass() function.

Creating a Framebuffer

We have created a render pass. It describes all attachments and all subpasses used during the render pass. But this description is quite abstract. We have specified formats of all attachments (just one image in this example) and described how attachments will be used by each subpass (also just one here). But we didn’t specify WHAT attachments we will be using or, in other words, what images will be used as these attachments. This is done through a framebuffer.

A framebuffer describes specific images that the render pass operates on. In OpenGL*, a framebuffer is a set of textures (attachments) we are rendering into. In Vulkan, this term is much broader. It describes all the textures (attachments) used during the render pass, not only the images we are rendering into (color and depth/stencil attachments) but also images used as a source of data (input attachments).

This separation of render pass and framebuffer gives us some additional flexibility. We can use the given render pass with different framebuffers and a given framebuffer with different render passes, if they are compatible, meaning that they operate in a similar fashion on images of similar types and usages.

Before we can create a framebuffer, we must create image views for each image used as a framebuffer and render pass attachment. In Vulkan, not only in the case of framebuffers, but in general, we don’t operate on images themselves. Images are not accessed directly. For this purpose, image views are used. Image views represent images, they “wrap” images and provide additional (meta)data for them.

Creating Image Views

In this simple application, we want to render directly into swap chain images. We have created a swap chain with multiple images, so we must create an image view for each of them.

const std::vector<VkImage> &swap_chain_images = GetSwapChain().Images;

Vulkan.FramebufferObjects.resize( swap_chain_images.size() );



for( size_t i = 0; i < swap_chain_images.size(); ++i ) {

  VkImageViewCreateInfo image_view_create_info = {

    VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,   // VkStructureType                sType

    nullptr,                                    // const void                    *pNext

    0,                                          // VkImageViewCreateFlags         flags

    swap_chain_images[i],                       // VkImage                        image

    VK_IMAGE_VIEW_TYPE_2D,                      // VkImageViewType                viewType

    GetSwapChain().Format,                      // VkFormat                       format

    {                                           // VkComponentMapping             components

      VK_COMPONENT_SWIZZLE_IDENTITY,              // VkComponentSwizzle             r

      VK_COMPONENT_SWIZZLE_IDENTITY,              // VkComponentSwizzle             g

      VK_COMPONENT_SWIZZLE_IDENTITY,              // VkComponentSwizzle             b

      VK_COMPONENT_SWIZZLE_IDENTITY               // VkComponentSwizzle             a

    },

    {                                           // VkImageSubresourceRange        subresourceRange

      VK_IMAGE_ASPECT_COLOR_BIT,                  // VkImageAspectFlags             aspectMask

      0,                                          // uint32_t                       baseMipLevel

      1,                                          // uint32_t                       levelCount

      0,                                          // uint32_t                       baseArrayLayer

      1                                           // uint32_t                       layerCount

    }

  };



  if( vkCreateImageView( GetDevice(), &image_view_create_info, nullptr, &Vulkan.FramebufferObjects[i].ImageView ) != VK_SUCCESS ) {

    printf( "Could not create image view for framebuffer!\n" );

    return false;

  }

4.Tutorial03.cpp, function CreateFramebuffers()

To create an image view, we must first create a variable of type VkImageViewCreateInfo. It contains the following fields:

  • sType – Structure type, in this case it should be set to VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO.
  • pNext – Parameter typically set to null.
  • flags – Parameter reserved for future use.
  • image – Handle to an image for which view will be created.
  • viewType – Type of view we want to create. The view type must be compatible with the image it is created for (for example, we can create a 2D view for an image that has multiple array layers, or we can create a CUBE view for a 2D image with six layers).
  • format – Format of an image view; it must be compatible with the image’s format but doesn’t have to be identical (that is, it may be a different format as long as it has the same number of bits per pixel).
  • components – Mapping of an image components into a vector returned in the shader by texturing operations. This applies only to read operations (sampling), but since we are using an image as a color attachment (we are rendering into an image) we must set the so-called identity mapping (R component into R, G -> G, and so on) or just use “identity” value (VK_COMPONENT_SWIZZLE_IDENTITY).
  • subresourceRange – Describes the set of mipmap levels and array layers that will be accessible to a view. If our image is mipmapped, we may specify the specific mipmap level we want to render to (and in case of render targets we must specify exactly one mipmap level of one array layer).

As you can see here, we acquire handles to all swap chain images, and we are referencing them inside a loop. This way we fill the structure required for image view creation, which we pass to a vkCreateImageView() function. We do this for each image that was created along with a swap chain.

Specifying Framebuffer Parameters

Now we can create a framebuffer. To do this we call the vkCreateFramebuffer() function.

VkFramebufferCreateInfo framebuffer_create_info = {

    VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO,  // VkStructureType                sType

    nullptr,                                    // const void                    *pNext

    0,                                          // VkFramebufferCreateFlags       flags

    Vulkan.RenderPass,                          // VkRenderPass                   renderPass

    1,                                          // uint32_t                       attachmentCount

    &Vulkan.FramebufferObjects[i].ImageView,    // const VkImageView             *pAttachments

    300,                                        // uint32_t                       width

    300,                                        // uint32_t                       height

    1                                           // uint32_t                       layers

  };



  if( vkCreateFramebuffer( GetDevice(), &framebuffer_create_info, nullptr, &Vulkan.FramebufferObjects[i].Handle ) != VK_SUCCESS ) {

    printf( "Could not create a framebuffer!\n" );

    return false;

  }

}

return true;

5.Tutorial03.cpp, function CreateFramebuffers()

The vkCreateFramebuffer() function requires us to provide a pointer to a variable of type VkFramebufferCreateInfo, so we must first prepare it. It contains the following fields:

  • sType – Structure type set to VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO in this situation.
  • pNext – Parameter most of the time set to null.
  • flags – Parameter reserved for future use.
  • renderPass – Render pass this framebuffer will be compatible with.
  • attachmentCount – Number of attachments in a framebuffer (elements in pAttachments array).
  • pAttachments – Array of image views representing all attachments used in a framebuffer and render pass. Each element in this array (each image view) corresponds to each attachment in a render pass.
  • width – Width of a framebuffer.
  • height – Height of a framebuffer.
  • layers – Number of layers in a framebuffer (as in OpenGL’s layered rendering with geometry shaders, where a geometry shader can specify the layer into which fragments rasterized from a given polygon will be rendered).

The framebuffer specifies what images are used as attachments on which the render pass operates. We can say that it translates an image (image view) into a given attachment. The number of images specified for a framebuffer must be the same as the number of attachments in the render pass for which we are creating the framebuffer. Also, each element of the pAttachments array corresponds directly to an attachment in the render pass description structure. Render passes and framebuffers are closely connected, and that’s why we must specify a render pass during framebuffer creation. But we may use a framebuffer not only with the specified render pass but also with any render pass that is compatible with it. Compatible render passes, in general, must have the same number of attachments, and corresponding attachments must have the same format and number of samples. But image layouts (initial, final, and for each subpass) may differ and do not affect render pass compatibility.

After we have finished creating and filling the VkFramebufferCreateInfo structure, we call the vkCreateFramebuffer() function.

The above code executes in a loop: a framebuffer references image views, and here an image view is created for each swap chain image, so we create a framebuffer for each of them. We do this in order to simplify the code called in the rendering loop. In a normal, real-life scenario we (probably) wouldn’t create a framebuffer for each swap chain image. I assume a better solution would be to render into a single image (texture) and then use command buffers that copy the rendering results from that image into a given swap chain image. This way we would have only three simple command buffers connected with the swap chain. All other rendering commands would be independent of the swap chain, making them easier to maintain.
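A hedged sketch of what such a copy could look like is shown below; the render_target_image variable and the 300 × 300 size are hypothetical, and the layout transitions required before and after the copy are omitted:

VkImageCopy copy_region = {
  { VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1 },       // VkImageSubresourceLayers       srcSubresource
  { 0, 0, 0 },                                  // VkOffset3D                     srcOffset
  { VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1 },       // VkImageSubresourceLayers       dstSubresource
  { 0, 0, 0 },                                  // VkOffset3D                     dstOffset
  { 300, 300, 1 }                               // VkExtent3D                     extent
};

vkCmdCopyImage( command_buffer, render_target_image, VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL, swap_chain_images[i], VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, 1, &copy_region );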

Creating a Graphics Pipeline

Now we are ready to create a graphics pipeline. A pipeline is a collection of stages that process data one stage after another. In Vulkan there is currently a compute pipeline and a graphics pipeline. The compute pipeline allows us to perform some computational work, such as performing physics calculations for objects in games. The graphics pipeline is used for drawing operations.

In OpenGL there are multiple programmable stages (vertex, tessellation, fragment shaders, and so on) and some fixed function stages (rasterizer, depth test, blending, and so on). In Vulkan, the situation is similar. There are similar (if not identical) stages. But the whole pipeline’s state is gathered in one monolithic object. OpenGL allows us to change the state that influences rendering operations anytime we want; we can change parameters for each stage (mostly) independently. We can set up shader programs, depth tests, blending, and whatever state we want, and then we can render some objects. Next we can change just some small part of the state and render another object. In Vulkan, such operations can’t be done (we say that pipelines are “immutable”). We must prepare the whole state, set up the parameters for the pipeline stages, and group them in a pipeline object. At the beginning this was one of the most startling pieces of information for me. I’m not able to change the shader program anytime I want? Why?

The easiest and most convincing explanation is the performance implications of such state changes. Changing just one single state of the whole pipeline may cause graphics hardware to perform many background operations like state and error checking. Different hardware vendors may implement (and usually do implement) such functionality differently. This may cause applications to perform differently (meaning unpredictably, performance-wise) when executed on different graphics hardware. So the ability to change anything at any time is convenient for developers. But, unfortunately, it is not so convenient for the hardware.

That’s why in Vulkan the state of the whole pipeline is gathered in one single object. All the relevant state and error checking is performed when the pipeline object is created. When there are problems (like different parts of the pipeline being set up in an incompatible way), pipeline object creation fails, and we know that upfront. The driver doesn’t have to worry for us and do whatever it can to make a broken pipeline somehow usable; it can immediately tell us about the problem. During real usage, in performance-critical parts of the application, everything is already set up correctly and can be used as is.

The downside of this methodology is that we have to create multiple pipeline objects, one variation for each way we draw objects (some opaque, some semi-transparent, some with the depth test enabled, others without). Unfortunately, even different shaders require different pipeline objects: if we want to draw objects using different shaders, we have to create multiple pipeline objects, one for each combination of shader programs. Shaders are also connected with the whole pipeline state. They use different resources (like textures and buffers), render into different color attachments, and read from different attachments (possibly ones that were rendered into before). These connections must also be initialized, prepared, and set up correctly. We know what we want to do; the driver does not. So it is better and far more logical that we do it, not the driver. In general this approach makes sense.

To begin the pipeline creation process, let’s start with shaders.

Creating a Shader Module

Creating a graphics pipeline requires us to prepare lots of data in the form of structures or even arrays of structures. The first such data is a collection of all shader stages and shader programs that will be used during rendering with a given graphics pipeline bound.

In OpenGL, we write shaders in GLSL. They are compiled and then linked into shader programs directly in our application. We can use or stop using a shader program anytime we want in our application.

Vulkan, on the other hand, accepts only a binary representation of shaders, an intermediate language called SPIR-V. We can’t provide GLSL code like we did in OpenGL. But there is an official, separate compiler that can transform shaders written in GLSL into the binary SPIR-V language; this compilation has to be done offline, before we run our application. After we prepare the SPIR-V assembly we can create a shader module from it. Such modules are then composed into an array of VkPipelineShaderStageCreateInfo structures, which are used, among other parameters, to create a graphics pipeline.

Here’s the code that creates a shader module from a specified file that contains a binary SPIR-V.

const std::vector<char> code = Tools::GetBinaryFileContents( filename );

if( code.size() == 0 ) {

  return Tools::AutoDeleter<VkShaderModule, PFN_vkDestroyShaderModule>();

}



VkShaderModuleCreateInfo shader_module_create_info = {

  VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO,    // VkStructureType                sType

  nullptr,                                        // const void                    *pNext

  0,                                              // VkShaderModuleCreateFlags      flags

  code.size(),                                    // size_t                         codeSize

  reinterpret_cast<const uint32_t*>(&code[0])     // const uint32_t                *pCode

};



VkShaderModule shader_module;

if( vkCreateShaderModule( GetDevice(), &shader_module_create_info, nullptr, &shader_module ) != VK_SUCCESS ) {

  printf( "Could not create shader module from a %s file!\n", filename );

  return Tools::AutoDeleter<VkShaderModule, PFN_vkDestroyShaderModule>();

}



return Tools::AutoDeleter<VkShaderModule, PFN_vkDestroyShaderModule>( shader_module, vkDestroyShaderModule, GetDevice() );

6.Tutorial03.cpp, function CreateShaderModule()

First we prepare a VkShaderModuleCreateInfo structure that contains the following fields:

  • sType – Type of structure, in this example set to VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO.
  • pNext – Pointer not yet used.
  • flags – Parameter reserved for future use.
  • codeSize – Size in bytes of the code passed in pCode parameter.
  • pCode – Pointer to an array with source code (binary SPIR-V assembly).

To acquire the contents of the file, I have prepared a simple utility function GetBinaryFileContents() that reads the entire contents of a specified file. It returns the content in a vector of chars.
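The exact implementation can be found in Tools.cpp; below is a minimal, hedged sketch of how such a helper could be written, with simplified error handling and assuming <fstream>, <string>, and <vector> are included:

std::vector<char> GetBinaryFileContents( std::string const &filename ) {
  // Open the file in binary mode, positioned at the end so tellg() returns its size
  std::ifstream file( filename.c_str(), std::ios::binary | std::ios::ate );
  if( !file ) {
    return std::vector<char>();
  }

  std::streamsize size = file.tellg();
  file.seekg( 0, std::ios::beg );

  // Read the whole file into the vector in one call
  std::vector<char> contents( static_cast<size_t>( size ) );
  if( !file.read( contents.data(), size ) ) {
    return std::vector<char>();
  }
  return contents;
}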

After we prepare a structure, we can call the vkCreateShaderModule() function and check whether everything went fine.

The AutoDeleter<> class from Tools namespace is a helper class that wraps a given Vulkan object handle and takes a function that is called to delete that object. This class is similar to smart pointers, which delete the allocated memory when the object (the smart pointer) goes out of scope. AutoDeleter<> class takes the handle of a given object and deletes it with a provided function when the object of this class’s type goes out of scope.

template<class T, class F>

class AutoDeleter {

public:

  AutoDeleter() :

    Object( VK_NULL_HANDLE ),

    Deleter( nullptr ),

    Device( VK_NULL_HANDLE ) {

  }



  AutoDeleter( T object, F deleter, VkDevice device ) :

    Object( object ),

    Deleter( deleter ),

    Device( device ) {

  }



  AutoDeleter( AutoDeleter&& other ) {

    *this = std::move( other );

  }



  ~AutoDeleter() {

    if( (Object != VK_NULL_HANDLE) && (Deleter != nullptr) && (Device != VK_NULL_HANDLE) ) {

      Deleter( Device, Object, nullptr );

    }

  }



  AutoDeleter& operator=( AutoDeleter&& other ) {

    if( this != &other ) {

      Object = other.Object;

      Deleter = other.Deleter;

      Device = other.Device;

      other.Object = VK_NULL_HANDLE;

    }

    return *this;

  }



  T Get() {

    return Object;

  }



  bool operator !() const {

    return Object == VK_NULL_HANDLE;

  }



private:

  AutoDeleter( const AutoDeleter& );

  AutoDeleter& operator=( const AutoDeleter& );

  T         Object;

  F         Deleter;

  VkDevice  Device;

};

7.Tools.h, -

Why so much effort for one simple object? Shader modules are one of the objects required to create the graphics pipeline. But after the pipeline is created, we don’t need these shader modules anymore. Sometimes it is convenient to keep them as we may need to create additional, similar pipelines. But in this example they may be safely destroyed after we create a graphics pipeline. Shader modules are destroyed by calling the vkDestroyShaderModule() function. But in the example, we would need to call this function in many places: inside multiple “ifs” and at the end of the whole function. Because I don’t want to remember where I need to call this function and, at the same time, I don’t want any memory leaks to occur, I have prepared this simple class just for convenience. Now, I don’t have to remember to delete the created shader module because it will be deleted automatically.

Preparing a Description of the Shader Stages

Now that we know how to create and destroy shader modules, we can prepare the data describing the shader stages that will compose our graphics pipeline. As I mentioned, the data describing which shader stages should be active when a given graphics pipeline is bound takes the form of an array with elements of type VkPipelineShaderStageCreateInfo. Here is the code that creates shader modules and prepares such an array:

Tools::AutoDeleter<VkShaderModule, PFN_vkDestroyShaderModule> vertex_shader_module = CreateShaderModule( "Data03/vert.spv" );

Tools::AutoDeleter<VkShaderModule, PFN_vkDestroyShaderModule> fragment_shader_module = CreateShaderModule( "Data03/frag.spv" );



if( !vertex_shader_module || !fragment_shader_module ) {

  return false;

}



std::vector<VkPipelineShaderStageCreateInfo> shader_stage_create_infos = {

  // Vertex shader

  {

    VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO,        // VkStructureType                                sType

    nullptr,                                                    // const void                                    *pNext

    0,                                                          // VkPipelineShaderStageCreateFlags               flags

    VK_SHADER_STAGE_VERTEX_BIT,                                 // VkShaderStageFlagBits                          stage

    vertex_shader_module.Get(),                                 // VkShaderModule                                 module

    "main",                                                     // const char                                    *pName

    nullptr                                                     // const VkSpecializationInfo                    *pSpecializationInfo

  },

  // Fragment shader

  {

    VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO,        // VkStructureType                                sType

    nullptr,                                                    // const void                                    *pNext

    0,                                                          // VkPipelineShaderStageCreateFlags               flags

    VK_SHADER_STAGE_FRAGMENT_BIT,                               // VkShaderStageFlagBits                          stage

    fragment_shader_module.Get(),                               // VkShaderModule                                 module

    "main",                                                     // const char                                    *pName

    nullptr                                                     // const VkSpecializationInfo                    *pSpecializationInfo

  }

};

8.Tutorial03.cpp, function CreatePipeline()

At the beginning we are creating two shader modules for vertex and fragment stages. They are created with the function presented earlier. When any error occurs and we return from the CreatePipeline() function, any created module is deleted automatically by a wrapper class with a provided deleter function.

The code for the shader modules is read from files that contain the binary SPIR-V assembly. These files are generated with an application called “glslangValidator”. This is a tool distributed officially with the Vulkan SDK and is designed to validate GLSL shaders. But “glslangValidator” also has the capability to compile or rather transform GLSL shaders into SPIR-V binary files. A full explanation of the command line for its usage can be found at the official SDK site. I’ve used the following commands to generate SPIR-V shaders for this tutorial:

glslangValidator.exe -V -H shader.vert > vert.spv.txt

glslangValidator.exe -V -H shader.frag > frag.spv.txt

“glslangValidator” takes a specified file and generates a SPIR-V file from it. The type of shader stage is automatically detected from the input file’s extension (“.vert” for vertex shaders, “.geom” for geometry shaders, and so on). The name of the generated file can be specified, but by default it takes the form “<stage>.spv”. So in our example “vert.spv” and “frag.spv” files will be generated.
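If a different output name is needed, the compiler’s “-o” option can be used, for example like this (assuming the glslangValidator version distributed with your SDK supports this option):

glslangValidator.exe -V shader.vert -o vert.spv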

SPIR-V files have a binary format so it may be hard to read and analyze them—but not impossible. When the “-H” option is used, “glslangValidator” outputs SPIR-V in a form that can be more easily read. This form is printed on standard output and that’s why I’m using the “> *.spv.txt” redirection operator.

Here are the contents of a “shader.vert” file from which SPIR-V assembly was generated for the vertex stage:

#version 400



void main() {

    vec2 pos[3] = vec2[3]( vec2(-0.7, 0.7), vec2(0.7, 0.7), vec2(0.0, -0.7) );

    gl_Position = vec4( pos[gl_VertexIndex], 0.0, 1.0 );

}

9.shader.vert, -

As you can see I have hardcoded the positions of all vertices used to render the triangle. They are indexed using the Vulkan-specific “gl_VertexIndex” built-in variable. In the simplest scenario, when using non-indexed drawing commands (which is the case here), this value starts from the value of the “firstVertex” parameter of a drawing command (zero in the provided example).

This is the disputable part I wrote about earlier—this approach is acceptable and valid but not quite convenient to maintain and also allows us to skip some of the “structure filling” needed to create the graphics pipeline. I’ve chosen it in order to shorten and simplify this tutorial as much as possible. In the next tutorial, I will present a more typical way of drawing any number of vertices, similar to using vertex arrays and indices in OpenGL.

Below is the source code of a fragment shader from the “shader.frag” file that was used to generate the SPIR-V assembly for the fragment stage:

#version 400



layout(location = 0) out vec4 out_Color;



void main() {

  out_Color = vec4( 0.0, 0.4, 1.0, 1.0 );

}

10.shader.frag, -

In Vulkan’s shaders (when transforming from GLSL to SPIR-V) layout qualifiers are required. Here we specify to what output (color) attachment we want to store the color values generated by the fragment shader. Because we are using only one attachment, we must specify the first available location (zero).

Now that you know how to prepare shaders for applications using Vulkan, we can move on to the next step. After we have created two shader modules, we check whether these operations succeeded. If they did we can start preparing a description of all shader stages that will constitute our graphics pipeline.

For each enabled shader stage we need to prepare an instance of the VkPipelineShaderStageCreateInfo structure. An array of these structures, together with the number of its elements, is provided in the graphics pipeline create info structure (passed to the function that creates the graphics pipeline). The VkPipelineShaderStageCreateInfo structure has the following fields:

  • sType – Type of structure that we are preparing, which in this case must be equal to VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO.
  • pNext – Pointer reserved for extensions.
  • flags – Parameter reserved for future use.
  • stage – Type of shader stage we are describing (like vertex, tessellation control, and so on).
  • module – Handle to a shader module that contains the shader for a given stage.
  • pName – Name of the entry point of the provided shader.
  • pSpecializationInfo – Pointer to a VkSpecializationInfo structure, which we will leave for now and set to null.

When we are creating a graphics pipeline we don’t create too many (Vulkan) objects. Most of the data is presented in the form of just such structures.

Preparing Description of a Vertex Input

Now we must provide a description of the input data used for drawing. This is similar to OpenGL’s vertex data: attributes, number of components, buffers from which to take data, data’s stride, or step rate. In Vulkan this data is of course prepared in a different way, but in general the meaning is the same. Fortunately, because the vertex data is hardcoded into the vertex shader in this tutorial, we can almost entirely skip this step and fill the VkPipelineVertexInputStateCreateInfo structure mostly with nulls and zeros:

VkPipelineVertexInputStateCreateInfo vertex_input_state_create_info = {

  VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO,    // VkStructureType                                sType

  nullptr,                                                      // const void                                    *pNext

  0,                                                            // VkPipelineVertexInputStateCreateFlags          flags;

  0,                                                            // uint32_t                                       vertexBindingDescriptionCount

  nullptr,                                                      // const VkVertexInputBindingDescription         *pVertexBindingDescriptions

  0,                                                            // uint32_t                                       vertexAttributeDescriptionCount

  nullptr                                                       // const VkVertexInputAttributeDescription       *pVertexAttributeDescriptions

};

11.Tutorial03.cpp, function CreatePipeline()

But for clarity here is a description of the members of the VkPipelineVertexInputStateCreateInfo structure:

  • sType – Type of structure, VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO here.
  • pNext – Pointer to an extension-specific structure.
  • flags – Parameter reserved for future use.
  • vertexBindingDescriptionCount – Number of elements in the pVertexBindingDescriptions array.
  • pVertexBindingDescriptions – Array with elements describing input vertex data (stride and stepping rate).
  • vertexAttributeDescriptionCount – Number of elements in the pVertexAttributeDescriptions array.
  • pVertexAttributeDescriptions – Array with elements describing vertex attributes (location, format, offset).

Preparing the Description of an Input Assembly

The next step requires us to describe how vertices should be assembled into primitives. As with OpenGL, we must specify what topology we want to use: points, lines, triangles, triangle fan, and so on.

VkPipelineInputAssemblyStateCreateInfo input_assembly_state_create_info = {

  VK_STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO,  // VkStructureType                                sType

  nullptr,                                                      // const void                                    *pNext

  0,                                                            // VkPipelineInputAssemblyStateCreateFlags        flags

  VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST,                          // VkPrimitiveTopology                            topology

  VK_FALSE                                                      // VkBool32                                       primitiveRestartEnable

};

12.Tutorial03.cpp, function CreatePipeline()

We do that through the VkPipelineInputAssemblyStateCreateInfo structure, which contains the following members:

  • sType – Structure type set here to VK_STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO.
  • pNext – Pointer not yet used.
  • flags – Parameter reserved for future use.
  • topology – Parameter describing how vertices will be organized to form a primitive.
  • primitiveRestartEnable – Parameter that tells whether a special index value (when indexed drawing is performed) restarts assembly of a given primitive.

Preparing the Viewport’s Description

We have finished dealing with input data. Now we must specify the form of the output data, that is, all the parts of the graphics pipeline that are connected with fragments, like rasterization, window (viewport), depth tests, and so on. The first set of data we must prepare here is the state of the viewport, which specifies the part of the image (or texture, or window) we want to draw into.

VkViewport viewport = {

  0.0f,                                                         // float                                          x

  0.0f,                                                         // float                                          y

  300.0f,                                                       // float                                          width

  300.0f,                                                       // float                                          height

  0.0f,                                                         // float                                          minDepth

  1.0f                                                          // float                                          maxDepth

};



VkRect2D scissor = {

  {                                                             // VkOffset2D                                     offset

    0,                                                            // int32_t                                        x

    0                                                             // int32_t                                        y

  },

  {                                                             // VkExtent2D                                     extent

    300,                                                          // int32_t                                        width

    300                                                           // int32_t                                        height

  }

};



VkPipelineViewportStateCreateInfo viewport_state_create_info = {

  VK_STRUCTURE_TYPE_PIPELINE_VIEWPORT_STATE_CREATE_INFO,        // VkStructureType                                sType

  nullptr,                                                      // const void                                    *pNext

  0,                                                            // VkPipelineViewportStateCreateFlags             flags

  1,                                                            // uint32_t                                       viewportCount

  &viewport,                                                    // const VkViewport                              *pViewports

  1,                                                            // uint32_t                                       scissorCount

  &scissor                                                      // const VkRect2D                                *pScissors

};

13.Tutorial03.cpp, function CreatePipeline()

In this example, the usage is simple: we just set the viewport coordinates to some predefined values. I don’t check the size of the swap chain image we are rendering into. But remember that in real-life production applications this has to be done because the specification states that dimensions of the viewport cannot exceed the dimensions of the attachments that we are rendering into.

To specify the viewport’s parameters, we fill the VkViewport structure that contains these fields:

  • x – Left side of the viewport.
  • y – Upper side of the viewport.
  • width – Width of the viewport.
  • height – Height of the viewport.
  • minDepth – Minimal depth value used for depth calculations.
  • maxDepth – Maximal depth value used for depth calculations.

When specifying viewport coordinates, remember that the origin is different than in OpenGL. Here we specify the upper-left corner of the viewport (not the lower left).

Also worth noting is that the minDepth and maxDepth values must be between 0.0 and 1.0 (inclusive) but maxDepth can be lower than minDepth. This will cause the depth to be calculated in “reverse.”
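For example, a “reversed depth” viewport could be specified like this hedged variation of the code above (the rest of the pipeline, such as the depth comparison function, would have to be set up consistently):

VkViewport reversed_depth_viewport = {
  0.0f,                                                         // float                                          x
  0.0f,                                                         // float                                          y
  300.0f,                                                       // float                                          width
  300.0f,                                                       // float                                          height
  1.0f,                                                         // float                                          minDepth
  0.0f                                                          // float                                          maxDepth
};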

Next we must specify the parameters for the scissor test. The scissor test, similarly to OpenGL, restricts generation of fragments only to the specified rectangular area. But in Vulkan, the scissor test is always enabled and can’t be turned off. We can just provide the values identical to the ones provided for viewport. Try changing these values and see how it influences the generated image.

The scissor test doesn’t have a dedicated structure. To provide data for it we fill the VkRect2D structure which contains two similar structure members. First is VkOffset2D with the following members:

  • x – Left side of the rectangular area used for scissor test
  • y – Upper side of the scissor area

The second member is of type VkExtent2D, which contains the following fields:

  • width – Width of the scissor rectangular area
  • height – Height of the scissor area

In general, the meaning of the data we provide for the scissor test through the VkRect2D structure is similar to the data prepared for viewport.

After we have finished preparing data for viewport and the scissor test, we can finally fill the structure that is used during pipeline creation. The structure is called VkPipelineViewportStateCreateInfo, and it contains the following fields:

  • sType – Type of the structure, VK_STRUCTURE_TYPE_PIPELINE_VIEWPORT_STATE_CREATE_INFO here.
  • pNext – Pointer reserved for extensions.
  • flags – Parameter reserved for future use.
  • viewportCount – Number of elements in the pViewports array.
  • pViewports – Array with elements describing parameters of viewports used when the given pipeline is bound.
  • scissorCount – Number of elements in the pScissors array.
  • pScissors – Array with elements describing parameters of the scissor test for each viewport.

Remember that the viewportCount and scissorCount parameters must be equal. We are also allowed to specify more viewports, but then the multiViewport feature must also be enabled.
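Features like this are enabled during logical device creation. Here is a minimal sketch, assuming a physical_device handle and a device create info structure exist elsewhere; it is not part of the tutorial's code:

// Sketch: enabling the multiViewport feature during logical device creation
VkPhysicalDeviceFeatures supported_features;
vkGetPhysicalDeviceFeatures( physical_device, &supported_features );

VkPhysicalDeviceFeatures enabled_features = {};
if( supported_features.multiViewport == VK_TRUE ) {
  enabled_features.multiViewport = VK_TRUE;
}
// device_create_info.pEnabledFeatures = &enabled_features;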

Preparing the Rasterization State’s Description

The next part of the graphics pipeline creation applies to the rasterization state. We must specify how polygons are going to be rasterized (changed into fragments), which means whether we want fragments to be generated for whole polygons or just their edges (polygon mode) or whether we want to see the front or back side or maybe both sides of the polygon (face culling). We can also provide depth bias parameters or indicate whether we want to enable depth clamp. This whole state is encapsulated into VkPipelineRasterizationStateCreateInfo. It contains the following members:

  • sType – Structure type, VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO in this example.
  • pNext – Pointer reserved for extensions.
  • flags – Parameter reserved for future use.
  • depthClampEnable – Parameter describing whether we want to clamp depth values of the rasterized primitive to the frustum (when true) or if we want normal clipping to occur (false).
  • rasterizerDiscardEnable – Deactivates fragment generation (the primitive is discarded before the rasterization stage, so no fragment shader runs).
  • polygonMode – Controls how the fragments are generated for a given primitive (triangle mode): whether they are generated for the whole triangle, only its edges, or just its vertices.
  • cullMode – Chooses the triangle’s face used for culling (if enabled).
  • frontFace – Chooses which side of a triangle should be considered the front (depending on the winding order).
  • depthBiasEnable – Enables or disables biasing of fragments’ depth values.
  • depthBiasConstantFactor – Constant factor added to each fragment’s depth value when biasing is enabled.
  • depthBiasClamp – Maximum (or minimum) value of bias that can be applied to fragment’s depth.
  • depthBiasSlopeFactor – Factor applied for fragment’s slope during depth calculations when biasing is enabled.
  • lineWidth – Width of rasterized lines.

Here is the source code responsible for setting rasterization state in our example:

VkPipelineRasterizationStateCreateInfo rasterization_state_create_info = {

  VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO,   // VkStructureType                                sType

  nullptr,                                                      // const void                                    *pNext

  0,                                                            // VkPipelineRasterizationStateCreateFlags        flags

  VK_FALSE,                                                     // VkBool32                                       depthClampEnable

  VK_FALSE,                                                     // VkBool32                                       rasterizerDiscardEnable

  VK_POLYGON_MODE_FILL,                                         // VkPolygonMode                                  polygonMode

  VK_CULL_MODE_BACK_BIT,                                        // VkCullModeFlags                                cullMode

  VK_FRONT_FACE_COUNTER_CLOCKWISE,                              // VkFrontFace                                    frontFace

  VK_FALSE,                                                     // VkBool32                                       depthBiasEnable

  0.0f,                                                         // float                                          depthBiasConstantFactor

  0.0f,                                                         // float                                          depthBiasClamp

  0.0f,                                                         // float                                          depthBiasSlopeFactor

  1.0f                                                          // float                                          lineWidth

};

14.Tutorial03.cpp, function CreatePipeline()

In this tutorial we disable as many parameters as possible to simplify the process, the code itself, and the rendering operations. The parameters that matter here set up the (typical) fill mode for polygon rasterization, back-face culling, and counterclockwise front faces, similar to OpenGL’s defaults. Depth biasing and clamping are also disabled (to enable depth clamping, we first need to enable a dedicated feature during logical device creation; the same applies to polygon modes other than “fill”).
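For example, assuming the fillModeNonSolid feature was enabled during logical device creation, a hypothetical wireframe variant of the state above could be derived like this:

// Sketch: a wireframe variant (requires the fillModeNonSolid device feature)
VkPipelineRasterizationStateCreateInfo wireframe_rasterization_state = rasterization_state_create_info;
wireframe_rasterization_state.polygonMode = VK_POLYGON_MODE_LINE;
wireframe_rasterization_state.cullMode    = VK_CULL_MODE_NONE;    // show both sides of each triangle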

Setting the Multisampling State’s Description

In Vulkan, when we are creating a graphics pipeline, we must also specify the state relevant to multisampling. This is done using the VkPipelineMultisampleStateCreateInfo structure. Here are its members:

  • sType – Type of structure, VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO here.
  • pNext – Pointer reserved for extensions.
  • flags – Parameter reserved for future use.
  • rasterizationSamples – Number of per pixel samples used in rasterization.
  • sampleShadingEnable – Parameter specifying that shading should occur per sample (when enabled) instead of per fragment (when disabled).
  • minSampleShading – Specifies the minimum fraction of samples that must be shaded independently when sample shading is enabled.
  • pSampleMask – Pointer to an array of static coverage sample masks; this can be null.
  • alphaToCoverageEnable – Controls whether the fragment’s alpha value should be used for coverage calculations.
  • alphaToOneEnable – Controls whether the fragment’s alpha value should be replaced with one.

In this example, I wanted to minimize possible problems, so I’ve set parameters to values that effectively disable multisampling: just one sample per pixel, with the other parameters turned off. Remember that if we want to enable sample shading or alpha-to-one, we also need to enable the two respective device features. Here is the source code that prepares the VkPipelineMultisampleStateCreateInfo structure:

VkPipelineMultisampleStateCreateInfo multisample_state_create_info = {

  VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO,     // VkStructureType                                sType

  nullptr,                                                      // const void                                    *pNext

  0,                                                            // VkPipelineMultisampleStateCreateFlags          flags

  VK_SAMPLE_COUNT_1_BIT,                                        // VkSampleCountFlagBits                          rasterizationSamples

  VK_FALSE,                                                     // VkBool32                                       sampleShadingEnable

  1.0f,                                                         // float                                          minSampleShading

  nullptr,                                                      // const VkSampleMask                            *pSampleMask

  VK_FALSE,                                                     // VkBool32                                       alphaToCoverageEnable

  VK_FALSE                                                      // VkBool32                                       alphaToOneEnable

};

15.Tutorial03.cpp, function CreatePipeline()

Setting the Blending State’s Description

Another thing we need to prepare when creating a graphics pipeline is a blending state (which also includes logical operations).

VkPipelineColorBlendAttachmentState color_blend_attachment_state = {

  VK_FALSE,                                                     // VkBool32                                       blendEnable

  VK_BLEND_FACTOR_ONE,                                          // VkBlendFactor                                  srcColorBlendFactor

  VK_BLEND_FACTOR_ZERO,                                         // VkBlendFactor                                  dstColorBlendFactor

  VK_BLEND_OP_ADD,                                              // VkBlendOp                                      colorBlendOp

  VK_BLEND_FACTOR_ONE,                                          // VkBlendFactor                                  srcAlphaBlendFactor

  VK_BLEND_FACTOR_ZERO,                                         // VkBlendFactor                                  dstAlphaBlendFactor

  VK_BLEND_OP_ADD,                                              // VkBlendOp                                      alphaBlendOp

  VK_COLOR_COMPONENT_R_BIT | VK_COLOR_COMPONENT_G_BIT |         // VkColorComponentFlags                          colorWriteMask

  VK_COLOR_COMPONENT_B_BIT | VK_COLOR_COMPONENT_A_BIT

};



VkPipelineColorBlendStateCreateInfo color_blend_state_create_info = {

  VK_STRUCTURE_TYPE_PIPELINE_COLOR_BLEND_STATE_CREATE_INFO,     // VkStructureType                                sType

  nullptr,                                                      // const void                                    *pNext

  0,                                                            // VkPipelineColorBlendStateCreateFlags           flags

  VK_FALSE,                                                     // VkBool32                                       logicOpEnable

  VK_LOGIC_OP_COPY,                                             // VkLogicOp                                      logicOp

  1,                                                            // uint32_t                                       attachmentCount

  &color_blend_attachment_state,                                // const VkPipelineColorBlendAttachmentState     *pAttachments

  { 0.0f, 0.0f, 0.0f, 0.0f }                                    // float                                          blendConstants[4]

};

16.Tutorial03.cpp, function CreatePipeline()

Final color operations are set up through the VkPipelineColorBlendStateCreateInfo structure. It contains the following fields:

  • sType – Type of the structure, set to VK_STRUCTURE_TYPE_PIPELINE_COLOR_BLEND_STATE_CREATE_INFO in this example.
  • pNext – Pointer reserved for future, extension-specific use.
  • flags – Parameter also reserved for future use.
  • logicOpEnable – Indicates whether we want to enable logical operations on pixels.
  • logicOp – Type of the logical operation we want to perform (such as copy, clear, and so on).
  • attachmentCount – Number of elements in the pAttachments array.
  • pAttachments – Array containing state parameters for each color attachment used in a subpass for which the given graphics pipeline is bound.
  • blendConstants – Four-element array with the constant color used in blending (when one of the “constant” blend factors is used).

More information is needed for the attachmentCount and pAttachments parameters. When we perform drawing operations, the most important objects we set up are the graphics pipeline, the render pass, and the framebuffer. The graphics card needs to know how to draw (the graphics pipeline, which describes the rendering state, shaders, tests, and so on) and where to draw (the render pass gives the general setup; the framebuffer specifies exactly which images are used). As I have already mentioned, the render pass specifies how operations are ordered, what the dependencies are, when we render into a given attachment, and when we read from the same attachment. These stages take the form of subpasses, and for each drawing operation we can (but don’t have to) use a different pipeline.

But when we are drawing, we must remember that we are drawing into a set of attachments. This set is defined in the render pass, which describes all color, input, and depth attachments (the framebuffer just specifies which images are used for each of them). For the blending state, we specify per attachment whether we want to enable blending at all. This is done through the pAttachments array: each of its elements must correspond to one color attachment defined in the render pass, so attachmentCount (the number of elements in the pAttachments array) must equal the number of color attachments defined in the render pass.

There is one more restriction: by default, all elements of the pAttachments array must be identical, specified in the same way, because blending (and color masking) is done the same way for all attachments. So why is it an array? Why can’t we just specify one value? Because there is a feature that allows us to perform independent, distinct blending for each active color attachment. When we enable the independent blending feature (independentBlend) during device creation, we can provide different values for each color attachment.
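As a hedged sketch, assuming the independentBlend feature was enabled at device creation and the render pass defined two color attachments, the array could look like this (reusing the color_blend_attachment_state variable from the listing above):

// Sketch: distinct blend states per attachment (requires the independentBlend feature)
VkPipelineColorBlendAttachmentState blend_states[2] = {
  color_blend_attachment_state,                 // attachment 0: blending disabled, as in our example
  color_blend_attachment_state                  // attachment 1: start from the same state...
};
blend_states[1].blendEnable         = VK_TRUE;  // ...but enable classic alpha blending here
blend_states[1].srcColorBlendFactor = VK_BLEND_FACTOR_SRC_ALPHA;
blend_states[1].dstColorBlendFactor = VK_BLEND_FACTOR_ONE_MINUS_SRC_ALPHA;
// attachmentCount would then be 2 and pAttachments would point to blend_states.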

Each pAttachments array’s element is of type VkPipelineColorBlendAttachmentState. It is a structure with the following members:

  • blendEnable – Indicates whether we want to enable blending at all.
  • srcColorBlendFactor – Blending factor for color of the source (incoming) fragment.
  • dstColorBlendFactor – Blending factor for the destination color (stored already in the framebuffer at the same location as the incoming fragment).
  • colorBlendOp – Type of operation to perform (multiplication, addition, and so on).
  • srcAlphaBlendFactor – Blending factor for the alpha value of the source (incoming) fragment.
  • dstAlphaBlendFactor – Blending factor for the destination alpha value (already stored in the framebuffer).
  • alphaBlendOp – Type of operation to perform for alpha blending.
  • colorWriteMask – Bitmask selecting which of the R, G, B, and A components are enabled for writing.

In this example, we disable blending, which makes most of the other parameters irrelevant. The exception is colorWriteMask: here we select all components for writing, but you can freely check what happens when this parameter is changed to other combinations of R, G, B, and A.
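For instance, a hypothetical tweak that writes only the red and alpha channels would look like this:

// Sketch: write only the R and A components; G and B keep their cleared values
color_blend_attachment_state.colorWriteMask = VK_COLOR_COMPONENT_R_BIT | VK_COLOR_COMPONENT_A_BIT;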

Creating a Pipeline Layout

The final thing we must do before pipeline creation is create a proper pipeline layout. A pipeline layout describes all the resources that can be accessed by the pipeline. In this example we must specify how many textures can be used by shaders and which shader stages will have access to them. There are of course other resources involved: apart from shader stages, we must also describe the types of resources (textures, buffers), their total numbers, and their layout. This layout can be compared to OpenGL’s active textures and shader uniforms. In OpenGL we bind textures to the desired texture image units, and for shader uniforms we don’t provide texture handles but the IDs of the texture image units to which the actual textures are bound (we provide the number of the unit with which the given texture was associated).

With Vulkan, the situation is similar. We create some form of a memory layout: for example, first there are two buffers, next three textures, and then an image. This memory “structure” is called a set, and a collection of these sets is provided to the pipeline. In shaders, we access specified resources using specific memory “locations” from within these sets (layouts). This is done through a layout (set = X, binding = Y) specifier, which can be translated to: take the resource from the Y memory location of the X set.

A pipeline layout can be thought of as an interface between shader stages and shader resources: it takes these groups of resources, describes how they are gathered, and provides them to the pipeline.
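For context, a single binding of such a set is described with a VkDescriptorSetLayoutBinding variable. This hedged sketch (not used in this tutorial) shows one combined image sampler visible to the fragment stage, matching a hypothetical layout (set = 0, binding = 0) specifier in a shader:

// Sketch: one texture (combined image sampler) at binding 0, fragment stage only
VkDescriptorSetLayoutBinding sampler_binding = {
  0,                                              // uint32_t               binding
  VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER,      // VkDescriptorType       descriptorType
  1,                                              // uint32_t               descriptorCount
  VK_SHADER_STAGE_FRAGMENT_BIT,                   // VkShaderStageFlags     stageFlags
  nullptr                                         // const VkSampler       *pImmutableSamplers
};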

This process is complex and I plan to devote a tutorial to it. Here we are not using any additional resources so I present an example for creating an “empty” pipeline layout:

VkPipelineLayoutCreateInfo layout_create_info = {

  VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO,  // VkStructureType                sType

  nullptr,                                        // const void                    *pNext

  0,                                              // VkPipelineLayoutCreateFlags    flags

  0,                                              // uint32_t                       setLayoutCount

  nullptr,                                        // const VkDescriptorSetLayout   *pSetLayouts

  0,                                              // uint32_t                       pushConstantRangeCount

  nullptr                                         // const VkPushConstantRange     *pPushConstantRanges

};



VkPipelineLayout pipeline_layout;

if( vkCreatePipelineLayout( GetDevice(), &layout_create_info, nullptr, &pipeline_layout ) != VK_SUCCESS ) {

  printf( "Could not create pipeline layout!\n" );

  return Tools::AutoDeleter<VkPipelineLayout, PFN_vkDestroyPipelineLayout>();

}

return Tools::AutoDeleter<VkPipelineLayout, PFN_vkDestroyPipelineLayout>( pipeline_layout, vkDestroyPipelineLayout, GetDevice() );

17.Tutorial03.cpp, function CreatePipelineLayout()

To create a pipeline layout we must first prepare a variable of type VkPipelineLayoutCreateInfo. It contains the following fields:

  • sType – Type of structure, VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO in this example.
  • pNext – Parameter reserved for extensions.
  • flags – Parameter reserved for future use.
  • setLayoutCount – Number of descriptor sets included in this layout.
  • pSetLayouts – Pointer to an array of descriptor set layouts included in this pipeline layout.
  • pushConstantRangeCount – Number of push constant ranges (I will describe it in a later tutorial).
  • pPushConstantRanges – Array describing all push constant ranges used inside shaders (in a given pipeline).

In this example we create an “empty” layout, so almost all the fields are set to null or zero.

We are not using push constants here, but they deserve some explanation. Push constants in Vulkan allow us to modify the data of constant variables used in shaders. There is a special, small amount of memory reserved for push constants. We update their values through Vulkan commands, not through memory updates, and it is expected that updates of push constants’ values are faster than normal memory writes.
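As a hedged sketch (again, we don’t use this in the tutorial), a push constant range exposing a 4 x 4 matrix to the vertex stage could be declared and updated like this; "matrix" is a hypothetical 16-float array:

// Sketch: a push constant range for 16 floats, visible to the vertex stage
VkPushConstantRange push_constant_range = {
  VK_SHADER_STAGE_VERTEX_BIT,                     // VkShaderStageFlags     stageFlags
  0,                                              // uint32_t               offset
  16 * sizeof( float )                            // uint32_t               size
};
// It would be referenced by pPushConstantRanges (with pushConstantRangeCount = 1)
// and updated during command buffer recording:
// vkCmdPushConstants( command_buffer, pipeline_layout, VK_SHADER_STAGE_VERTEX_BIT, 0, 16 * sizeof( float ), matrix );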

As shown in the above example, I’m also wrapping the pipeline layout in an “AutoDeleter” object. Pipeline layouts are required during pipeline creation, descriptor set binding (enabling/activating this interface between shaders and shader resources), and setting push constants. None of these operations, except for pipeline creation, takes place in this tutorial. So here, after we create the pipeline, we don’t need the layout anymore. To avoid memory leaks, I have used this helper class to destroy the layout as soon as we leave the function in which the graphics pipeline is created.

Creating a Graphics Pipeline

Now we have all the resources required to properly create a graphics pipeline. Here is the code that does that:

Tools::AutoDeleter<VkPipelineLayout, PFN_vkDestroyPipelineLayout> pipeline_layout = CreatePipelineLayout();

if( !pipeline_layout ) {

  return false;

}



VkGraphicsPipelineCreateInfo pipeline_create_info = {

  VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO,              // VkStructureType                                sType

  nullptr,                                                      // const void                                    *pNext

  0,                                                            // VkPipelineCreateFlags                          flags

  static_cast<uint32_t>(shader_stage_create_infos.size()),      // uint32_t                                       stageCount

  &shader_stage_create_infos[0],                                // const VkPipelineShaderStageCreateInfo         *pStages

  &vertex_input_state_create_info,                              // const VkPipelineVertexInputStateCreateInfo    *pVertexInputState

  &input_assembly_state_create_info,                            // const VkPipelineInputAssemblyStateCreateInfo  *pInputAssemblyState

  nullptr,                                                      // const VkPipelineTessellationStateCreateInfo   *pTessellationState

  &viewport_state_create_info,                                  // const VkPipelineViewportStateCreateInfo       *pViewportState

  &rasterization_state_create_info,                             // const VkPipelineRasterizationStateCreateInfo  *pRasterizationState

  &multisample_state_create_info,                               // const VkPipelineMultisampleStateCreateInfo    *pMultisampleState

  nullptr,                                                      // const VkPipelineDepthStencilStateCreateInfo   *pDepthStencilState

  &color_blend_state_create_info,                               // const VkPipelineColorBlendStateCreateInfo     *pColorBlendState

  nullptr,                                                      // const VkPipelineDynamicStateCreateInfo        *pDynamicState

  pipeline_layout.Get(),                                        // VkPipelineLayout                               layout

  Vulkan.RenderPass,                                            // VkRenderPass                                   renderPass

  0,                                                            // uint32_t                                       subpass

  VK_NULL_HANDLE,                                               // VkPipeline                                     basePipelineHandle

  -1                                                            // int32_t                                        basePipelineIndex

};



if( vkCreateGraphicsPipelines( GetDevice(), VK_NULL_HANDLE, 1, &pipeline_create_info, nullptr, &Vulkan.GraphicsPipeline ) != VK_SUCCESS ) {

  printf( "Could not create graphics pipeline!\n" );

  return false;

}

return true;

18.Tutorial03.cpp, function CreatePipeline()

First we create a pipeline layout wrapped in an object of type “AutoDeleter”. Next we fill the structure of type VkGraphicsPipelineCreateInfo. It contains many fields. Here is a brief description of them:

  • sType – Type of structure, VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO here.
  • pNext – Parameter reserved for future, extension-related use.
  • flags – This time this parameter is not reserved for future use but controls how the pipeline is created: whether we are creating a derivative pipeline (inheriting from another pipeline) or whether we allow other pipelines to derive from this one. We can also disable optimizations, which should shorten the time needed to create the pipeline.
  • stageCount – Number of stages described in the pStages parameter; must be greater than zero.
  • pStages – Array with descriptions of active shader stages (the ones created using shader modules); each stage must be unique (we can’t specify a given stage more than once), and a vertex stage must be present.
  • pVertexInputState – Pointer to a variable containing the description of the vertex input’s state.
  • pInputAssemblyState – Pointer to a variable with input assembly description.
  • pTessellationState – Pointer to a description of the tessellation stages; can be null if tessellation is disabled.
  • pViewportState – Pointer to a variable specifying viewport parameters; can be null if rasterization is disabled.
  • pRasterizationState – Pointer to a variable specifying rasterization behavior.
  • pMultisampleState – Pointer to a variable defining multisampling; can be null if rasterization is disabled.
  • pDepthStencilState – Pointer to a description of depth/stencil parameters; this can be null in two situations: when rasterization is disabled or we’re not using depth/stencil attachments in a render pass.
  • pColorBlendState – Pointer to a variable with color blending/write masks state; can be null also in two situations: when rasterization is disabled or when we’re not using any color attachments inside the render pass.
  • pDynamicState – Pointer to a variable specifying which parts of the graphics pipeline can be set dynamically; can be null if the whole state is considered static (defined only through this create info structure).
  • layout – Handle to a pipeline layout object that describes resources accessed inside shaders.
  • renderPass – Handle to a render pass object; pipeline can be used with any render pass compatible with the provided one.
  • subpass – Number (index) of a subpass in which the pipeline will be used.
  • basePipelineHandle – Handle to a pipeline this one should derive from.
  • basePipelineIndex – Index of a pipeline this one should derive from.

When we are creating a new pipeline, we can inherit some of the parameters from another one. This means that both pipelines should have much in common; a good example is shader code. We don’t specify which fields are the same; the general hint that the pipeline inherits from another one may substantially accelerate pipeline creation. But why are there two fields that indicate a “parent” pipeline? We can use only one of them at a time. When we use a handle, the “parent” pipeline is already created and we derive from the one whose handle we provide. But the pipeline creation function allows us to create many pipelines at once. Using the second parameter, the “parent” pipeline index, we can create both “parent” and “child” pipelines in the same call: we specify an array of graphics pipeline create info structures, and this array is provided to the pipeline creation function. So “basePipelineIndex” is the index of the “parent” pipeline’s create info in this very array. We just have to remember that the “parent” pipeline must appear earlier in the array (must have a smaller index) and must be created with the “allow derivatives” flag set.
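Here is a hedged sketch of the second approach; parent_create_info and child_create_info are hypothetical, fully filled VkGraphicsPipelineCreateInfo variables:

// Sketch: creating a "parent" and a derived "child" pipeline in one call
VkGraphicsPipelineCreateInfo create_infos[2] = { parent_create_info, child_create_info };
create_infos[0].flags |= VK_PIPELINE_CREATE_ALLOW_DERIVATIVES_BIT;   // parent allows derivatives
create_infos[1].flags |= VK_PIPELINE_CREATE_DERIVATIVE_BIT;          // child is a derivative
create_infos[1].basePipelineHandle = VK_NULL_HANDLE;                 // not deriving from an existing handle
create_infos[1].basePipelineIndex  = 0;                              // derive from element 0 of this array

VkPipeline pipelines[2];
if( vkCreateGraphicsPipelines( GetDevice(), VK_NULL_HANDLE, 2, create_infos, nullptr, pipelines ) != VK_SUCCESS ) {
  printf( "Could not create graphics pipelines!\n" );
}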

In this example we are creating a pipeline whose state is entirely static (null for the “pDynamicState” parameter). But what is a dynamic state? To allow for some flexibility and to lower the number of created pipeline objects, the dynamic state was introduced. Through the “pDynamicState” parameter we can define which parts of the graphics pipeline can be set dynamically, through additional Vulkan commands, and which parts remain static, set once during pipeline creation. The dynamic state includes parameters such as viewports, line widths, blend constants, or some stencil parameters. If we specify that a given state is dynamic, the parameters in the pipeline create info structure that are related to that state are ignored, and we must set the given state using the proper Vulkan commands during rendering, because the initial values of such state may be undefined.
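For illustration, a minimal sketch declaring the viewport and scissor as dynamic could look like this (not used in this tutorial):

// Sketch: viewport and scissor become dynamic; the pViewports/pScissors members of
// the viewport state are then ignored and must be set while recording commands
VkDynamicState dynamic_states[] = {
  VK_DYNAMIC_STATE_VIEWPORT,
  VK_DYNAMIC_STATE_SCISSOR
};

VkPipelineDynamicStateCreateInfo dynamic_state_create_info = {
  VK_STRUCTURE_TYPE_PIPELINE_DYNAMIC_STATE_CREATE_INFO,         // VkStructureType                                sType
  nullptr,                                                      // const void                                    *pNext
  0,                                                            // VkPipelineDynamicStateCreateFlags              flags
  2,                                                            // uint32_t                                       dynamicStateCount
  dynamic_states                                                // const VkDynamicState                          *pDynamicStates
};
// During recording: vkCmdSetViewport( cmd_buffer, 0, 1, &viewport ); vkCmdSetScissor( cmd_buffer, 0, 1, &scissor );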

So after these quite overwhelming preparations, we can create a graphics pipeline. This is done by calling the vkCreateGraphicsPipelines() function which, among other parameters, takes an array of pipeline create info structures. When everything goes well, VK_SUCCESS is returned by this function and a handle of a graphics pipeline is stored in the variable whose address we’ve provided. Now we are ready to start drawing.

Preparing Drawing Commands

I introduced the concept of command buffers in the previous tutorial. Here I will briefly explain what they are and how to use them.

Command buffers are containers for GPU commands. If we want to execute some job on a device, we do it through command buffers. This means that we must prepare a set of commands that process data (that is, draw something on the screen) and record these commands in command buffers. Then we can submit whole buffers to device’s queues. This submit operation tells the device: here is a bunch of things I want you to do for me and do them now.

To record commands, we must first allocate command buffers. These are allocated from command pools, which can be thought of as memory chunks. If a command buffer needs to be larger (because we record many complicated commands in it), it can grow and use additional memory from the pool it was allocated from. So first we must create a command pool.

Creating a Command Pool

Command pool creation is simple and looks like this:

VkCommandPoolCreateInfo cmd_pool_create_info = {

  VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO,     // VkStructureType                sType

  nullptr,                                        // const void                    *pNext

  0,                                              // VkCommandPoolCreateFlags       flags

  queue_family_index                              // uint32_t                       queueFamilyIndex

};



if( vkCreateCommandPool( GetDevice(), &cmd_pool_create_info, nullptr, pool ) != VK_SUCCESS ) {

  return false;

}

return true;

19.Tutorial03.cpp, function CreateCommandPool()

First we prepare a variable of type VkCommandPoolCreateInfo. It contains the following fields:

  • sType – Standard type of structure, set to VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO here.
  • pNext – Pointer reserved for extensions.
  • flags – Indicates usage scenarios for the command pool and the command buffers allocated from it; for example, we can tell the driver that command buffers allocated from this pool will live for a short time; for no specific usage we can set it to zero.
  • queueFamilyIndex – Index of a queue family for which we are creating a command pool.

Remember that command buffers allocated from a given pool can only be submitted to a queue from a queue family specified during pool creation.

To create a command pool, we just call the vkCreateCommandPool() function.
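As a hedged variant, a pool for short-lived, individually resettable command buffers could be requested through the flags field:

// Sketch: hint that buffers from this pool are short-lived and may be reset one by one
VkCommandPoolCreateInfo transient_cmd_pool_create_info = cmd_pool_create_info;
transient_cmd_pool_create_info.flags = VK_COMMAND_POOL_CREATE_TRANSIENT_BIT |
                                       VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT;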

Allocating Command Buffers

Now that we have the command pool ready, we can allocate command buffers from it.

VkCommandBufferAllocateInfo command_buffer_allocate_info = {

  VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO, // VkStructureType                sType

  nullptr,                                        // const void                    *pNext

  pool,                                           // VkCommandPool                  commandPool

  VK_COMMAND_BUFFER_LEVEL_PRIMARY,                // VkCommandBufferLevel           level

  count                                           // uint32_t                       commandBufferCount

};



if( vkAllocateCommandBuffers( GetDevice(), &command_buffer_allocate_info, command_buffers ) != VK_SUCCESS ) {

  return false;

}

return true;

20.Tutorial03.cpp, function AllocateCommandBuffers()

To allocate command buffers, we prepare a variable of type VkCommandBufferAllocateInfo, which contains these members:

  • sType – Type of the structure; VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO for this purpose.
  • pNext – Pointer reserved for extensions.
  • commandPool – Pool from which we want our command buffers to take their memory.
  • level – Command buffer level; there are two levels: primary and secondary; right now we are only interested in primary command buffers.
  • commandBufferCount – Number of command buffers we want to allocate.

To allocate command buffers, call the vkAllocateCommandBuffers() function and check whether it succeeded. We can allocate many buffers at once with one function call.

I’ve prepared a simple buffer allocating function to show you how some Vulkan functions can be wrapped for easier use. Here is a usage of two such wrapper functions that create command pools and allocate command buffers from them.

if( !CreateCommandPool( GetGraphicsQueue().FamilyIndex, &Vulkan.GraphicsCommandPool ) ) {

  printf( "Could not create command pool!\n" );

  return false;

}



uint32_t image_count = static_cast<uint32_t>(GetSwapChain().Images.size());

Vulkan.GraphicsCommandBuffers.resize( image_count, VK_NULL_HANDLE );



if( !AllocateCommandBuffers( Vulkan.GraphicsCommandPool, image_count, &Vulkan.GraphicsCommandBuffers[0] ) ) {

  printf( "Could not allocate command buffers!\n" );

  return false;

}

return true;

21.Tutorial03.cpp, function CreateCommandBuffers()

As you can see, we are creating a command pool for a graphics queue family index. All image state transitions and drawing operations will be performed on a graphics queue. Presentation is done on another queue (if the presentation queue is different from the graphics queue) but we don’t need a command buffer for this operation.

We are also allocating command buffers for each swap chain image. Here we take the number of images and provide it to this simple “wrapper” function for command buffer allocation.

Recording Command Buffers

Now that we have command buffers allocated from the command pool, we can finally record the operations that will draw something on the screen. First we must prepare a set of data needed for the recording operation. Some of this data is identical for all command buffers, but some references a specific swap chain image. Here is the code that is independent of the swap chain images:

VkCommandBufferBeginInfo graphics_command_buffer_begin_info = {

  VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO,    // VkStructureType                        sType

  nullptr,                                        // const void                            *pNext

  VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT,   // VkCommandBufferUsageFlags              flags

  nullptr                                         // const VkCommandBufferInheritanceInfo  *pInheritanceInfo

};



VkImageSubresourceRange image_subresource_range = {

  VK_IMAGE_ASPECT_COLOR_BIT,                      // VkImageAspectFlags             aspectMask

  0,                                              // uint32_t                       baseMipLevel

  1,                                              // uint32_t                       levelCount

  0,                                              // uint32_t                       baseArrayLayer

  1                                               // uint32_t                       layerCount

};



VkClearValue clear_value = {

  { 1.0f, 0.8f, 0.4f, 0.0f },                     // VkClearColorValue              color

};



const std::vector<VkImage>& swap_chain_images = GetSwapChain().Images;

22.Tutorial03.cpp, function RecordCommandBuffers()

Recording a command buffer is similar to building OpenGL’s display lists, where we start recording a list by calling the glNewList() function, prepare a set of drawing commands, and then close the list or stop recording it (glEndList()). So the first thing we need to do is prepare a variable of type VkCommandBufferBeginInfo. It is used when we start recording a command buffer, and it tells the driver about the type, contents, and desired usage of the command buffer. Variables of this type contain the following members:

  • sType – Standard structure type, here set to VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO.
  • pNext – Pointer reserved for extensions.
  • flags – Parameters describing the desired usage (that is, whether we want to submit this command buffer only once and then destroy/reset it, or whether it may be submitted again before the processing of its previous submission has finished).
  • pInheritanceInfo – Parameter used only when we want to record a secondary command buffer.

Next we describe the areas or parts of our images that we will set up image memory barriers for. Here we set up barriers to specify that queues from different families will reference a given image. This is done through a variable of type VkImageSubresourceRange with the following members:

  • aspectMask – Describes a “type” of image, whether it is for color, depth, or stencil data.
  • baseMipLevel – Number of the first mipmap level our operations will be performed on.
  • levelCount – Number of mipmap levels (including base level) we will be operating on.
  • baseArrayLayer – Number of the first array layer of an image that will take part in the operations.
  • layerCount – Number of layers (including base layer) that will be modified.

Next we set up a clear value for our images. Before drawing, we need to clear the images. In previous tutorials, we performed this operation explicitly ourselves. Here the images are cleared as part of a render pass attachment load operation. We set the load operation to “clear,” so now we must specify the color to which an image should be cleared. This is done using a variable of type VkClearValue in which we provide R, G, B, A values.

The variables we have created thus far are independent of the image itself, which is why we have specified them before the loop. Now we can start recording the command buffers:

for( size_t i = 0; i < Vulkan.GraphicsCommandBuffers.size(); ++i ) {

  vkBeginCommandBuffer( Vulkan.GraphicsCommandBuffers[i], &graphics_command_buffer_begin_info );



  if( GetPresentQueue().Handle != GetGraphicsQueue().Handle ) {

    VkImageMemoryBarrier barrier_from_present_to_draw = {

      VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,     // VkStructureType                sType

      nullptr,                                    // const void                    *pNext

      VK_ACCESS_MEMORY_READ_BIT,                  // VkAccessFlags                  srcAccessMask

      VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,       // VkAccessFlags                  dstAccessMask

      VK_IMAGE_LAYOUT_PRESENT_SRC_KHR,            // VkImageLayout                  oldLayout

      VK_IMAGE_LAYOUT_PRESENT_SRC_KHR,            // VkImageLayout                  newLayout

      GetPresentQueue().FamilyIndex,              // uint32_t                       srcQueueFamilyIndex

      GetGraphicsQueue().FamilyIndex,             // uint32_t                       dstQueueFamilyIndex

      swap_chain_images[i],                       // VkImage                        image

      image_subresource_range                     // VkImageSubresourceRange        subresourceRange

    };

    vkCmdPipelineBarrier( Vulkan.GraphicsCommandBuffers[i], VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT, VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT, 0, 0, nullptr, 0, nullptr, 1, &barrier_from_present_to_draw );

  }



  VkRenderPassBeginInfo render_pass_begin_info = {

    VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO,     // VkStructureType                sType

    nullptr,                                      // const void                    *pNext

    Vulkan.RenderPass,                            // VkRenderPass                   renderPass

    Vulkan.FramebufferObjects[i].Handle,          // VkFramebuffer                  framebuffer

    {                                             // VkRect2D                       renderArea

      {                                           // VkOffset2D                     offset

        0,                                          // int32_t                        x

        0                                           // int32_t                        y

      },

      {                                           // VkExtent2D                     extent

        300,                                        // uint32_t                       width

        300                                         // uint32_t                       height

      }

    },

    1,                                            // uint32_t                       clearValueCount

    &clear_value                                  // const VkClearValue            *pClearValues

  };



  vkCmdBeginRenderPass( Vulkan.GraphicsCommandBuffers[i], &render_pass_begin_info, VK_SUBPASS_CONTENTS_INLINE );



  vkCmdBindPipeline( Vulkan.GraphicsCommandBuffers[i], VK_PIPELINE_BIND_POINT_GRAPHICS, Vulkan.GraphicsPipeline );



  vkCmdDraw( Vulkan.GraphicsCommandBuffers[i], 3, 1, 0, 0 );



  vkCmdEndRenderPass( Vulkan.GraphicsCommandBuffers[i] );



  if( GetGraphicsQueue().Handle != GetPresentQueue().Handle ) {

    VkImageMemoryBarrier barrier_from_draw_to_present = {

      VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,       // VkStructureType              sType

      nullptr,                                      // const void                  *pNext

      VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,         // VkAccessFlags                srcAccessMask

      VK_ACCESS_MEMORY_READ_BIT,                    // VkAccessFlags                dstAccessMask

      VK_IMAGE_LAYOUT_PRESENT_SRC_KHR,              // VkImageLayout                oldLayout

      VK_IMAGE_LAYOUT_PRESENT_SRC_KHR,              // VkImageLayout                newLayout

      GetGraphicsQueue().FamilyIndex,               // uint32_t                     srcQueueFamilyIndex

      GetPresentQueue( ).FamilyIndex,               // uint32_t                     dstQueueFamilyIndex

      swap_chain_images[i],                         // VkImage                      image

      image_subresource_range                       // VkImageSubresourceRange      subresourceRange

    };

    vkCmdPipelineBarrier( Vulkan.GraphicsCommandBuffers[i], VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT, VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT, 0, 0, nullptr, 0, nullptr, 1, &barrier_from_draw_to_present );

  }

  if( vkEndCommandBuffer( Vulkan.GraphicsCommandBuffers[i] ) != VK_SUCCESS ) {

    printf( "Could not record command buffer!\n" );

    return false;

  }

}

return true;

23.Tutorial03.cpp, function RecordCommandBuffers()

Recording a command buffer is started by calling the vkBeginCommandBuffer() function. At the beginning we set up a barrier that tells the driver that queues from one family referenced the given image previously, but from now on queues from a different family will reference it (we need to do this because during swap chain creation we specified exclusive sharing mode). The barrier is set only when the graphics queue is different from the present queue. This is done by calling the vkCmdPipelineBarrier() function. We must specify where in the pipeline the barrier should be placed (VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT) and how the barrier should be set up. Barrier parameters are prepared through the VkImageMemoryBarrier structure:

  • sType – Type of the structure, here set to VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER.
  • pNext – Pointer reserved for extensions.
  • srcAccessMask – Type of memory operations that took place in regard to a given image before the barrier.
  • dstAccessMask – Type of memory operations connected with a given image that will take place after the barrier.
  • oldLayout – Current image memory layout.
  • newLayout – Memory layout the image should have after the barrier.
  • srcQueueFamilyIndex – Index of the queue family whose queues were referencing the image before the barrier.
  • dstQueueFamilyIndex – Index of the queue family whose queues will be referencing the image after the barrier.
  • image – Handle to the image itself.
  • subresourceRange – Parts of an image for which we want the transition to occur.

In this example we don’t change the layout of an image, for two reasons: (1) The barrier may not be set at all (if the graphics and present queues are the same), and (2) the layout transition will be performed automatically as a render pass operation (at the beginning of the first—and only—subpass).

Next we start a render pass. We call the vkCmdBeginRenderPass() function for which we must provide a pointer to a variable of VkRenderPassBeginInfo type. It contains the following members:

  • sType – Standard type of structure. In this case we must set it to a value of VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO.
  • pNext – Pointer reserved for future use.
  • renderPass – Handle of a render pass we want to start.
  • framebuffer – Handle of a framebuffer, which specifies images used as attachments for this render pass.
  • renderArea – Area of all images that will be affected by the operations that take place in this render pass. It specifies the upper-left corner (through the x and y parameters of the offset member) and the width and height (through the extent member) of the render area.
  • clearValueCount – Number of elements in pClearValues array.
  • pClearValues – Array with clear values for each attachment.

When we specify a render area for the render pass, we must make sure that the rendering operations won’t modify pixels outside this area. This is just a hint that allows the driver to optimize its behavior. If we don’t confine operations to the provided area by using a proper scissor test, pixels outside this area may become undefined (we can’t rely on their contents). We also can’t specify a render area greater than the framebuffer’s dimensions (it must not fall outside the framebuffer).

The pClearValues array must contain an element for each render pass attachment. Each element specifies the color to which the given attachment is cleared when its loadOp is set to “clear.” For attachments whose loadOp is not “clear,” the provided values are ignored, but we still can’t provide an array with fewer elements.

We have begun a command buffer, set a barrier (if necessary), and started a render pass. When we start a render pass we are also starting its first subpass. We can switch to the next subpass by calling the vkCmdNextSubpass() function. During these operations, layout transitions and clear operations may occur. Clears are done in a subpass in which the image is first used (referenced). Layout transitions occur each time a subpass layout is different than the layout in a previous subpass or (in the case of a first subpass or when the image is first referenced) different than the initial layout (layout before the render pass). So in our example when we start a render pass, the swap chain image’s layout is changed automatically from “presentation source” to a “color attachment optimal” layout.
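For a hypothetical render pass with two subpasses, the recording skeleton would look like this (we don’t need it here, since our render pass has a single subpass):

// Sketch: switching between subpasses of a multi-subpass render pass
vkCmdBeginRenderPass( command_buffer, &render_pass_begin_info, VK_SUBPASS_CONTENTS_INLINE );
// ... commands for subpass 0 ...
vkCmdNextSubpass( command_buffer, VK_SUBPASS_CONTENTS_INLINE );
// ... commands for subpass 1 ...
vkCmdEndRenderPass( command_buffer );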

Now we bind a graphics pipeline. This is done by calling the vkCmdBindPipeline() function. This “activates” all shader programs (similar to the glUseProgram() function) and sets desired tests, blending operations, and so on.

After the pipeline is bound, we can finally draw something by calling the vkCmdDraw() function. In this function we specify the number of vertices we want to draw (three), the number of instances that should be drawn (just one), and the indices of the first vertex and first instance (both zero).
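For comparison, a hypothetical call that draws the same triangle five times using instancing would be:

// Sketch: 3 vertices, 5 instances, starting from vertex 0 and instance 0
vkCmdDraw( Vulkan.GraphicsCommandBuffers[i], 3, 5, 0, 0 );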

Next the vkCmdEndRenderPass() function is called which, as the name suggests, ends the given render pass. Here all final layout transitions occur if the final layout specified for a render pass is different from the layout used in the last subpass the given image was referenced in.

After that, the barrier may be set in which we tell the driver that the graphics queue finished using a given image and from now on the present queue will be using it. This is done, once again, only when the graphics and present queues are different. And after the barrier, we stop recording a command buffer for a given image. All these operations are repeated for each swap chain image.

Drawing

The drawing function is the same as the Draw() function presented in Tutorial 2. We acquire the image’s index, submit a proper command buffer, and present the image. We are using semaphores the same way they were used previously: one semaphore is used for acquiring an image, and it tells the graphics queue to wait when the image is not yet available for use. The second semaphore is used to indicate whether drawing on the graphics queue is finished; the present queue waits on this semaphore before it can present the image. Here is the source code of the Draw() function:

VkSemaphore image_available_semaphore = GetImageAvailableSemaphore();

VkSemaphore rendering_finished_semaphore = GetRenderingFinishedSemaphore();

VkSwapchainKHR swap_chain = GetSwapChain().Handle;

uint32_t image_index;



VkResult result = vkAcquireNextImageKHR( GetDevice(), swap_chain, UINT64_MAX, image_available_semaphore, VK_NULL_HANDLE, &image_index );

switch( result ) {

  case VK_SUCCESS:

  case VK_SUBOPTIMAL_KHR:

    break;

  case VK_ERROR_OUT_OF_DATE_KHR:

    return OnWindowSizeChanged();

  default:

    printf( "Problem occurred during swap chain image acquisition!\n" );

    return false;

}



VkPipelineStageFlags wait_dst_stage_mask = VK_PIPELINE_STAGE_TRANSFER_BIT;

VkSubmitInfo submit_info = {

  VK_STRUCTURE_TYPE_SUBMIT_INFO,                // VkStructureType              sType

  nullptr,                                      // const void                  *pNext

  1,                                            // uint32_t                     waitSemaphoreCount

  &image_available_semaphore,                   // const VkSemaphore           *pWaitSemaphores

  &wait_dst_stage_mask,                         // const VkPipelineStageFlags  *pWaitDstStageMask

  1,                                            // uint32_t                     commandBufferCount

  &Vulkan.GraphicsCommandBuffers[image_index],  // const VkCommandBuffer       *pCommandBuffers

  1,                                            // uint32_t                     signalSemaphoreCount

  &rendering_finished_semaphore                 // const VkSemaphore           *pSignalSemaphores

};



if( vkQueueSubmit( GetGraphicsQueue().Handle, 1, &submit_info, VK_NULL_HANDLE ) != VK_SUCCESS ) {

  return false;

}



VkPresentInfoKHR present_info = {

  VK_STRUCTURE_TYPE_PRESENT_INFO_KHR,           // VkStructureType              sType

  nullptr,                                      // const void                  *pNext

  1,                                            // uint32_t                     waitSemaphoreCount

  &rendering_finished_semaphore,                // const VkSemaphore           *pWaitSemaphores

  1,                                            // uint32_t                     swapchainCount

  &swap_chain,                                  // const VkSwapchainKHR        *pSwapchains

  &image_index,                                 // const uint32_t              *pImageIndices

  nullptr                                       // VkResult                    *pResults

};

result = vkQueuePresentKHR( GetPresentQueue().Handle, &present_info );



switch( result ) {

  case VK_SUCCESS:

    break;

  case VK_ERROR_OUT_OF_DATE_KHR:

  case VK_SUBOPTIMAL_KHR:

    return OnWindowSizeChanged();

  default:

    printf( "Problem occurred during image presentation!\n" );

    return false;

}



return true;

24.Tutorial03.cpp, function Draw()

Tutorial 3 Execution

In this tutorial we performed “real” drawing operations. A simple triangle may not sound too convincing, but it is a good starting point for a first Vulkan-created image. Here is what the triangle should look like:

[Figure: a blue triangle rendered on an orange/gold background; the window area outside the 300 x 300 framebuffer remains black]

If you’re wondering why there are black parts in the image, here is an explanation: to simplify the code, we created a framebuffer with a fixed size (width and height of 300 pixels). But the window’s size (and the size of the swap chain images) may be greater than these 300 x 300 pixels. The parts of the image that lie outside the framebuffer’s dimensions are uncleared and unmodified by our application. They may even contain some “artifacts,” because the memory from which the driver allocates the swap chain images may have been previously used for other purposes and could contain some data. The correct behavior is to create a framebuffer with the same size as the swap chain images and to recreate it when the window’s size changes. But as long as the blue triangle is rendered on an orange/gold background, the code works correctly.
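Here is a hedged sketch of that correct approach, assuming the swap chain extent is available through a GetSwapChain().Extent member (this accessor is hypothetical; the tutorial’s code does not include it):

// Sketch: derive viewport and scissor from the swap chain extent instead of fixed 300 x 300 values
VkExtent2D swap_chain_extent = GetSwapChain().Extent;            // hypothetical accessor

VkViewport viewport = {
  0.0f, 0.0f,                                                   // x, y
  static_cast<float>(swap_chain_extent.width),                  // width
  static_cast<float>(swap_chain_extent.height),                 // height
  0.0f, 1.0f                                                    // minDepth, maxDepth
};
VkRect2D scissor = { { 0, 0 }, swap_chain_extent };
// The framebuffer and render area would use the same dimensions and be recreated on window resize.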

Cleaning Up

One last thing to learn before this tutorial ends is how to release resources created during this lesson. I won’t repeat the code needed to release resources created in the previous chapter. Just look at the VulkanCommon.cpp file. Here is the code needed to destroy resources specific to this chapter:

if( GetDevice() != VK_NULL_HANDLE ) {

  vkDeviceWaitIdle( GetDevice() );



  if( (Vulkan.GraphicsCommandBuffers.size() > 0) && (Vulkan.GraphicsCommandBuffers[0] != VK_NULL_HANDLE) ) {

    vkFreeCommandBuffers( GetDevice(), Vulkan.GraphicsCommandPool, static_cast<uint32_t>(Vulkan.GraphicsCommandBuffers.size()), &Vulkan.GraphicsCommandBuffers[0] );

    Vulkan.GraphicsCommandBuffers.clear();

  }



  if( Vulkan.GraphicsCommandPool != VK_NULL_HANDLE ) {

    vkDestroyCommandPool( GetDevice(), Vulkan.GraphicsCommandPool, nullptr );

    Vulkan.GraphicsCommandPool = VK_NULL_HANDLE;

  }



  if( Vulkan.GraphicsPipeline != VK_NULL_HANDLE ) {

    vkDestroyPipeline( GetDevice(), Vulkan.GraphicsPipeline, nullptr );

    Vulkan.GraphicsPipeline = VK_NULL_HANDLE;

  }



  if( Vulkan.RenderPass != VK_NULL_HANDLE ) {

    vkDestroyRenderPass( GetDevice(), Vulkan.RenderPass, nullptr );

    Vulkan.RenderPass = VK_NULL_HANDLE;

  }



  for( size_t i = 0; i < Vulkan.FramebufferObjects.size(); ++i ) {

    if( Vulkan.FramebufferObjects[i].Handle != VK_NULL_HANDLE ) {

      vkDestroyFramebuffer( GetDevice(), Vulkan.FramebufferObjects[i].Handle, nullptr );

      Vulkan.FramebufferObjects[i].Handle = VK_NULL_HANDLE;

    }



    if( Vulkan.FramebufferObjects[i].ImageView != VK_NULL_HANDLE ) {

      vkDestroyImageView( GetDevice(), Vulkan.FramebufferObjects[i].ImageView, nullptr );

      Vulkan.FramebufferObjects[i].ImageView = VK_NULL_HANDLE;

    }

  }

  Vulkan.FramebufferObjects.clear();

}

25.Tutorial03.cpp, function ChildClear()

As usual, we first check whether there is any device; if we don’t have a device, we don’t have any resources. Next we wait until the device is free, and then we delete all the created resources. We start by deleting the command buffers, calling the vkFreeCommandBuffers() function. Next we destroy the command pool through the vkDestroyCommandPool() function, and after that the graphics pipeline is destroyed with a vkDestroyPipeline() function call. Next we call the vkDestroyRenderPass() function, which destroys the render pass. Finally, all framebuffers and image views associated with each swap chain image are deleted.

Each object’s destruction is preceded by a check of whether the given resource was properly created. If it wasn’t, we skip destroying that resource.

Conclusion

In this tutorial, we created a render pass with one subpass. Next we created image views and framebuffers for each swap chain image. One of the most difficult parts was to create a graphics pipeline, because it required us to prepare lots of data. We had to create shader modules and describe all the shader stages that should be active when a given graphics pipeline is bound. We had to prepare information about input vertices, their layout, and assembling them into polygons. Viewport, rasterization, multisampling, and color blending information was also necessary. Then we created a simple pipeline layout and after that we could create the pipeline itself. Next we created a command pool and allocated command buffers for each swap chain image. Operations recorded in each command buffer involved setting up an image memory barrier, beginning a render pass, binding a graphics pipeline, and drawing. Next we ended a render pass and set up another image memory barrier. The drawing itself was performed the same way as in the previous tutorial (2).

In the next tutorial, we will learn about vertex attributes, images, and buffers.

Go to: API without Secrets: Introduction to Vulkan* Part 4: Vertex Attributes (To Be Continued)


Data in the Desert


Ben-Gurion University of the Negev opens a Big Data Lab running Cloudera’s Distribution Including Apache
Hadoop* (CDH*) on the Intel® Xeon® processor.

Intel worked with Ben-Gurion University of the Negev to set up a Big Data Analytics Lab, enabling information systems engineering students to better understand and develop complex machine learning algorithms. The Lab is one of the first in the world running Cloudera’s Distribution Including Apache Hadoop* 5.2 (CDH* 5.2) together with Apache Spark* 1.1, on servers powered by the Intel® Xeon® processor E5-2630 v2 product family. By removing I/O bottlenecks with distributed RAM, Apache Spark offers much better utilization of the Intel® processors. This project showcases the performance gains enabled through these three elements working together.

Download complete Case Study (PDF): xeon-e5-ben-gurion-university-case-study.pdf

Intel and Cloudera Perform Real-Time Queries to Predict and Prevent Vehicle Failures and Malfunctions


An automotive manufacturing group seeks a Big Data solution that will improve their customers’ user experience. The Company invites three Big Data vendors—one being the Intel-Cloudera partnership—to demonstrate in a controlled environment how their solutions would perform using the Company’s existing hardware, software, and datasets.
After comparing the performance, support, and additional features of all three vendors, the Company chooses Cloudera.

Download complete Solution Brief (PDF): Intel and Cloudera Perform Real-Time Queries to Predict and Prevent Vehicle Failures and Malfunction.pdf

Intel and Cloudera* Move a Company’s SaaS Operation to Hadoop* Without Disrupting its C/C++ Development


A European company that creates custom software tools and SaaS for agricultural applications wants to update its environment to a Hadoop*-based Big Data platform, while maintaining its C/C++ legacy. The Company’s current applications deal with yield prediction, livestock breeding best practices, and support for irrigation, fertilization, and crop protection decisions. Their product roadmap calls for adding satellite image processing capabilities to these apps, all of which have been developed—and will continue to be developed—in C/C++.
After a few unsatisfactory attempts to migrate to a Hadoop*-based system on their own, the Company asks Intel to help them integrate their existing C/C++-based operations into Cloudera Enterprise.

Download complete Solution Brief (PDF): Intel and Cloudera Move a Company’s SaaS Operation to Hadoop_ Without Disrupting its C_C++ Development.pdf

Intel Designs a Big Data Network Based on Cloudera Enterprise* for a Regional Satellite Dish TV Provider


A regional satellite dish television provider asked Intel to help them redesign their IT infrastructure to accommodate a big data environment. Intel designed their system from the ground up. Before deploying a Hadoop* cluster, you should make certain your environment—including the operating system, network, firewall, and hardware resources—is tuned appropriately to achieve the best performance and avoid common pitfalls encountered during a cluster setup.
In this solution brief, we provide information on common tuning and setup activities required before setting up a Hadoop* cluster.

Download complete Solution Brief (PDF): Intel Designs a Big Data Network Based on Cloudera Enterprise for a Regional Satellite Dish TV Provide.pdf

Intel and Cloudera* Generate Positive ROI for an ISV and its Healthcare Provider Customers

Leading Indonesian Telecom Implements Discovery and Analytics Tools


Telkomsel selected Cloudera to support its evolution from a voice- and SMS-based business to one that offers higher value broadband services to customers. Recognized as the biggest mobile operator in Indonesia with over 140 million subscribers, Telkomsel has seen increasing data volumes in its legacy data warehouse, particularly as a result of industry convergence that is driving consumers to perform a wider range of tasks using their mobile devices. This data deluge promises valuable customer and network insights if it can effectively be captured and managed.

Download complete Solution Brief (PDF): Telkomsel-UCRA.pdf

Intel® Parallel Computing Center at the Ohio Supercomputer Center


Principal Investigators:

Karen Tomko, Principal Investigator: Dr. Karen Tomko is director of Research and manager of the Scientific Applications group at OSC. She has collaborated with computational scientists for 20 years. Her research interests include communication runtimes, application parallelization/tuning, and programming models for many-core processors. Prior to joining OSC, Tomko spent 11 years as a faculty member in computer science and engineering at the University of Cincinnati and Wright State University. Her experience with scientific applications ranges from crash simulation to fluid dynamics to quantum many body physics. Tomko earned her B.S., M.S. and Ph.D. in Computer Science and Engineering from the University of Michigan.

Robert H. Dodds, Jr., Co-Principal Investigator: Dr. Robert H. Dodds, Jr. is Professor Emeritus at the University of Illinois/Urbana and part-time Research Professor at the University of Tennessee. He leads development/maintenance/documentation efforts for WARP3D. He is the former head of the top-ranked Civil & Environmental Engineering program at Illinois and a member of the U.S. National Academy of Engineering. He maintains active R&D, consults extensively with government agencies, laboratories and industry, and contributes to engineering professional societies. Dodds holds a B.S. in Civil Engineering from the University of Memphis, and earned his M.S. and Ph.D. from the University of Illinois at Urbana-Champaign in Civil Engineering.

Kevin Manalo, Co-Principal Investigator: Dr. Kevin Manalo, senior engineer at the Ohio Supercomputer Center, supports OSC clients with building, installing, supporting, and optimizing HPC software on OSC systems, with a focus on code modernization. Prior to that, Manalo was a senior systems engineer for Computer Sciences Corporation, supporting Alabama Supercomputer Center HPC users. Manalo has a B.S., M.S., and Ph.D. in Nuclear Engineering. His B.S. and M.S. were earned at the University of Florida and his Ph.D. is from the Georgia Institute of Technology.

Description:

This project focuses on the modernization of a solid modeling code with applications in the energy sector: WARP3D. WARP3D is an open source code for 3D nonlinear analysis of solids with a focus on degradation/endurance of metals under demanding thermo-mechanical loading conditions, such as in turbine engine components, high-temperature pressure vessels - nuclear/conventional, petrochemical processing facilities and pipelines. It is used in industry, government laboratories and academia. The project team is a collaboration between the Ohio Supercomputer Center and code developer Robert Dodds. The team will work jointly on all aspects of the modernization project. Outcomes: (1) significantly improve scaling on distributed HPC systems of this open source code with unique capabilities for energy sector R&D, (2) make new releases of code/documentation available to the user-community.

Related websites:

http://www.warp3d.net
http://www.osc.edu


Mobile Viewpoint Delivers HEVC HDR Live Broadcasting with Intel Media Technologies


With Mobile Viewpoint’s WMT AGILE HDR Technology and Intel® Media Server Studio, Get Fast, Efficient HEVC Video Encoding

Mobile Viewpoint announces that its new bonding transmitter delivers HEVC (H.265) HDR video running on the latest 6th generation Intel® processors, using Intel® Media Server Studio Professional Edition to optimize HEVC compression and quality. For broadcast-quality video, Intel’s graphics-accelerated codec enabled Mobile Viewpoint to develop a hardware platform that combines low-power hardware-accelerated encoding and transmission. The new HEVC-enabled software will be used in Mobile Viewpoint's Wireless Multiplex Terminal (WMT) AGILE high-dynamic range (HDR) back-of-the-camera solutions and in its 19-inch FLEX IO encoding and O2 decoding products.

Figure 1 - Live reporting of the Ronde of Norg using Mobile ViewPoint’s WMT AGILE HDR Technology.

Mobile Viewpoint’s dedicated IP encoding and decoding appliances use Intel® Quick Sync Video to deliver HDR H.265 encoded video using its award-winning bonding technology, which was launched earlier this year. Working with Intel’s Media Server Studio tech experts allowed Mobile Viewpoint to test and develop new encoding appliances at an early stage, so that it could innovate higher resolution and quality video encoding running on Intel processors to help meet the high quality demands of the broadcast industry.

Power Savings + Quality Improvements

After optimizing with Media Server Studio, Mobile Viewpoint saw power savings of more than 50% and quality improvements of 25% to 40%. For mobile solutions especially, this is a huge benefit because it saves on mobile data and dramatically extends battery life.

Mobile Viewpoint Managing Director Michel Bais says, “The cooperation with Intel opens up a new platform for low power and high quality video encoding appliances using the Intel Media Server Studio Professional Edition with the latest Linux* distributions. This in combination with support and the availability of a complete range of processors makes Intel our key encoding partner.”

“Intel® processors with hardware HEVC encoding and sophisticated media processing software are helping to power Mobile Viewpoint’s live, high-resolution, reliable broadcasting solution so the world can stay better informed of fast-changing news and events on-the-go,” states Jeff McVeigh, Intel Software and Services Group vice president and Visual Computing Products general manager.

Broadcasting Live from Every Corner of the World

Mobile Viewpoint’s IP technology enables broadcasters to go live from almost every corner of the world within seconds - using less bandwidth. The deployment of IP transmission solutions by major broadcasters has already shown that this technology is now widely accepted as the major tool for cost-effective and fast news gathering.

Ian Brash, technical manager of Sky Sports News, says, “Extensive testing of all the major brands over 30 months showed that the Mobile Viewpoint range best matched our news gathering requirements. Their willingness and ability to write custom software enhances our user experience and matches our unique conditions. This is technology whose time has come.”

More places where Mobile ViewPoint’s WMT AGILE HDR Technology is used to capture some of the world's renowned sporting events, like the Tour de France.


See Media Technologies in Action at NAB Show - Nov. 18-21 in Las Vegas

  • See Intel Media Server Studio and other advanced video analysis tools in action at the Intel Booth SU621 (south upper hall).
  • Meet Mobile Viewpoint at its booth: C2613 (central hall)

Find out more about Intel® Media Server Studio

  • Free Community Edition
  • 30-Day Professional Edition Trial


About Mobile Viewpoint

Mobile Viewpoint is a global company, focusing on the development and implementation of IP transmission solutions for both the broadcast and security industries. Its H.264 and now H.265 HEVC codec implementations, combined with patented technology, allow for HD video to be transmitted over bonded IP connections. Its customers include major broadcasters such as BBC, Al-Arabiya, Sky Sports News, NBC Sports and more.

Go To Market! GTM Plan? What It Is & Why You Need One


In every developer’s life, there comes a magical moment in time—when your product is finally released. While a lot goes into building the product, when it comes to release time, it’s not quite as simple as throwing your product out there and waiting to see what happens—you need a creative, well-considered go-to-market plan, one that will help your product find the right audience and attract their interest. Like many of the other business topics we’ve discussed here, this one is better decided upfront. In order to get the most bang for your buck, and to really get the word out, you'll want to get started as soon as possible.

Ask yourself this: How do I get the most out of the resources I have available? The key to a good go-to-market plan is creativity. Even if you have funding and a big staff, you’re going to want to make every last dollar and hour count.

Read on to learn more about creating your go-to-market plan, including the best ways to leverage your network, and how to be smart about connecting with potential customers.

Get to Know Your Customers

The first step to selling your product is figuring out who your customers are—and making sure you have a product that they want. This might seem obvious, but it’s always a good reminder that even the best go-to-market plan isn’t going to help sell something that isn’t what your customers asked for or doesn’t solve the right problems. Market validation is a great way to test your product idea, and you can read more about it here.

Decide What You’re Going to Say

In a previous post, we discussed the importance of creating a solid value prop—a single sentence that helps communicate what you do, for whom, and why. Once you have that nailed down, you can create your actual messaging.

The goal of your messaging is to inspire and attract. It needs to speak to the actual benefits of your product or your app, both the tangible and the emotional benefits while eliciting excitement.

It’s also important to think about where your messaging will be used, and tailor accordingly. You might have one version for attracting advertisers, and a different version for speaking directly to customers.

Find Out Where Your Customers Hang Out

Your go-to-market plan is not just about who your customers are, or what you say to customers—it’s also about where you say it. This is a big piece of the puzzle that is often forgotten. Every customer goes through a journey when it comes to purchasing something, and it begins before they’re even ready to buy. It starts when they first become aware of a new product or hear about a new concept.

For instance, to find new games, your customer might read gaming blogs or head over to Reddit. If something looks interesting, they might check out a trailer—all before heading to your website or searching for you on an app store. The more often you can put your product in front of your customer during this journey, the more likely they are to buy it.

Here are some questions to ask as you try to understand your customer’s purchasing journey:

  • What is your customer’s typical path to purchase? What channels do they frequent on the way? Consider websites, blogs, social, advertising, and app store listings where they might come across your genre, industry or product.
     
  • What motivates them to try or buy a new product? Expert reviews? Video trailers? Free trials?
     
  • Do they pride themselves on being early adopters? If so, consider making a beta version available so they can earn the bragging rights they want, and you can earn a new customer.
     

Make Smart Connections

Once you understand the customer's purchasing journey, you need to develop a plan to connect with your customers along the way, in the language that fits those spaces. Here are some ideas to get you started:

  • Leverage family and friends. Think creatively about who the people you know might know, and how those people can help get the word out. Can someone you know introduce you to a blogger or a YouTuber?
     
  • Social. Remember to think beyond your own favorite social networks. Which ones do your customers frequent? How do they use those social networks to learn about new products and apps?
     
  • Paid advertising. With digital, it’s possible to spend a small amount on a media buy for a highly targeted audience that can be closely measured. You still want to leverage free opportunities, but targeted, thoughtful paid media can be very impactful.
     
  • Shareable image or GIF. Is there a funny way to think about your product or the problem it solves? Any memorable images that come to mind? Creating images or GIFs ahead of time can be a great way to capture attention and encourage people to spread the word.
     
  • Mini-game. Can you extend the image or GIF idea into some kind of game? Something simple and fun that's connected to the product idea, and easy to pass along?
     
  • Trailer. Do your customers like watching trailers? If they frequent gaming blogs and app stores, they might like seeing your product illustrated in a video. Review your competitors to understand how you can differentiate, and what customers expect to see.
     
  • Viral sharing. To encourage sharing, gamify the act of sharing itself! Make it easy, and truly fun, and you’ll create evangelists. Prizes can be credits or points—Share my app and get 5 free credits!—or whatever makes sense based on your product, audience, and message.
     

Adjust as You Move Forward

As always, your go-to-market plan will need to be reviewed and optimized after launch. What worked? What didn’t? Did you find the right customers, and did they engage the way you expected them to? Everything digital is measurable, and the smarter and more thoughtful you can be about processing results and learning from them, the better your product will be. This will be an ongoing effort, so track, measure, optimize and repeat!

Can you think of an example of a really creative way that a company has created buzz for its product? Share it in the comments!

Intel® XDK FAQs - Debug & Test


What are the requirements for Testing on Wi-Fi?

  1. Both Intel XDK and App Preview mobile app must be logged in with the same user credentials.
  2. Both devices must be on the same subnet.

Note: Your computer's Security Settings may be preventing Intel XDK from connecting with devices on your network. Double check your settings for allowing programs through your firewall. At this time, testing on Wi-Fi does not work within virtual machines.

How do I configure app preview to work over Wi-Fi?

  1. Ensure that both Intel XDK and App Preview mobile app are logged in with the same user credentials and are on the same subnet
  2. Launch App Preview on the device
  3. Log into your Intel XDK account
  4. Select "Local Apps" to see a list of all the projects in Intel XDK Projects tab
  5. Select desired app from the list to run over Wi-Fi

Note: Ensure the app source files are referenced from the right source directory. If it isn't, on the Projects Tab, change the 'source' directory so it is the same as the 'project' directory and move everything in the source directory to the project directory. Remove the source directory and try to debug over local Wi-Fi.

How do I clear app preview cache and memory?

[Android*] Simply kill the app running on your device as an Active App on Android* by swiping it away after clicking the "Recent" button in the navigation bar. Alternatively, you can clear data and cache for the app from under Settings App > Apps > ALL > App Preview.

[iOS*] By double tapping the Home button then swiping the app away.

[Windows*] You can use the Windows* Cache Cleaner app to do so.

What are the Android* devices supported by App Preview?

We officially only support and test Android* 4.x and higher, although you can use Cordova for Android* to build for Android* 2.3 and above. For older Android* devices, you can use the build system to build apps and then install and run them on the device to test. To help in your testing, you can include the weinre script tag from the Test tab in your app before you build it. After your app starts up, you should see the Test tab console light up when the weinre script in your app contacts the device (push the "begin debugging on device" button to see the console). Remember to remove the weinre script tag before you build for the store.
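
For reference, a generic weinre script tag has the form shown below. This is only a sketch of the format — the Test tab generates the exact server URL and id for you, so copy the tag it provides rather than typing this one:

<script src="http://your-weinre-server:8080/target/target-script-min.js#anonymous"></script>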

What do I do if Intel XDK stops detecting my Android* device?

When Intel XDK is not running, kill all adb processes that are running on your workstation and then restart Intel XDK, as conflicts between different versions of adb frequently cause such issues. Ensure that applications such as Eclipse that run copies of adb are not running. You may scan your disk for copies of adb:

[Linux*/OS X*]:

$ sudo find / -name adb -type f 

[Windows*]:

> cd \
> dir /s adb.exe

For more information on Android* USB debug, visit the Intel XDK documentation on debugging and testing.

How do I debug an app that contains third party Cordova plugins?

See the Debug and Test Overview doc page for a more complete overview of your debug options.

When using the Test tab with Intel App Preview your app will not include any third-party plugins, only the "core" Cordova plugins.

The Emulate tab will load the JavaScript layer of your third-party plugins, but does not include a simulation of the native code part of those plugins, so it will present you with a generic "return" dialog box to allow you to execute code associated with third-party plugins.

When debugging Android devices with the Debug tab, the Intel XDK creates a custom debug module that is then loaded onto your USB-connected Android device, allowing you to debug your app AND its third-party Cordova plugins. When using the Debug tab with an iOS device only the "core" Cordova plugins are available in the debug module on your USB-connected iOS device.

If the solutions above do not work for you, then your best bet for debugging an app that contains a third-party plugin is to build it and debug the built app installed and running on your device. 

[Android*]

1) For Crosswalk* or Cordova for Android* build, create an intelxdk.config.additions.xml file that contains the following lines:

<!-- Change the debuggable preference to true to build a remote CDT debuggable app for -->
<!-- Crosswalk* apps on Android* 4.0+ devices and Cordova apps on Android* 4.4+ devices. -->
<preference name="debuggable" value="true" />
<!-- Change the debuggable preference to false before you build for the store. -->

and place it in the root directory of your project (in the same location as your other intelxdk.config.*.xml files). Note that this will only work with Crosswalk* on Android* 4.0 or newer devices or, if you use the standard Cordova for Android* build, on Android* 4.4 or greater devices.

2) Build the Android* app

3) Connect your device to your development system via USB and start app

4) Start Chrome on your development system and type "chrome://inspect" in the Chrome URL bar. You should see your app in the list of apps and tabs presented by Chrome; you can then push the "inspect" link to get a full remote CDT session to your built app. Be sure to close Intel XDK before you do this; sometimes there is interference between the version of adb used by Chrome and that used by Intel XDK, which can cause a crash. You might have to kill the adb process before you start Chrome (after you exit the Intel XDK).

[iOS*]

Refer to the instructions on the updated Debug tab docs to get on-device debugging. We do not have the ability to build a development version of your iOS* app yet, so you cannot use this technique to build iOS* apps. However, you can include the weinre script from the Test tab in your iOS* app when you build it and use the Test tab to remotely access your built iOS* app. This works best if you include a lot of console.log messages.

[Windows* 8]

You can use the Test tab, which gives you a weinre script tag. Include it in the app that you build, run the app, and connect to the weinre server to work with the console.

Alternatively, you can use App Center to setup and access the weinre console (go here and use the "bug" icon).

Another approach is to write console.log messages to a <textarea> screen on your app. See either of these apps for an example of how to do that:

Why does my device show as offline on Intel XDK Debug?

“Media” mode is the default USB connection mode but, for reasons that have not been identified, it frequently fails to work over USB on Windows* machines. Configure the USB connection mode on your device for "Camera" instead of "Media" mode.

What do I do if my remote debugger does not launch?

You can try the following to have your app run on the device via the Debug tab (a minimal sketch follows the list):

  • Place the intelxdk.js library before the </body> tag
  • Place your app specific JavaScript files after it
  • Place the call to initialize your app in the device ready event function
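
A minimal sketch of that ordering follows; the file names and the app.init() function are assumptions for illustration, not Intel XDK requirements:

<body>
    <!-- app markup goes here -->
    <script src="intelxdk.js"></script>    <!-- the intelxdk.js library, just before </body> -->
    <script src="js/app.js"></script>      <!-- your app-specific JavaScript after it -->
    <script>
        document.addEventListener("deviceready", function () {
            // initialize the app only after the native bridge is ready
            app.init(); // hypothetical init function defined in js/app.js
        }, false);
    </script>
</body>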

Why do I get an "error installing App Preview Crosswalk" message when trying to debug on device?

You may be running into a RAM or storage problem on your Android device; as in, not enough RAM available to load and install the special App Preview Crosswalk app (APX) that must be installed on your device. See this site (http://www.devicespecifications.com) for information regarding your device. If your device has only 512 MB of RAM, which is a marginal amount for use with the Intel XDK Debug tab, you may have difficulties getting APX to install.

You may have to do one or all of the following:

  • remove as many apps from RAM as possible before installing APX (rebooting the device is the simplest approach)
  • make sure there is sufficient storage space in your device (uninstall any unneeded apps on the device)
  • install APX by hand

The last step is the hardest, but only if you are uncomfortable with the command-line:

  1. while attempting to install APX (above) the XDK downloaded a copy of the APK that must be installed on your Android device
  2. find that APK that contains APX
  3. install that APK manually onto your Android device using adb

To find the APK, on a Mac:

$ cd ~/Library/Application\ Support/XDK
$ find . -name "*.apk"

To find the APK, on a Windows machine:

> cd %LocalAppData%\XDK
> dir /s *.apk

For each version of Crosswalk that you have attempted to use (via the Debug tab), you will find a copy of the APK file (but only if you have attempted to use the Debug tab and the XDK has successfully downloaded the corresponding version of APX). You should find something similar to:

./apx_download/12.0/AppAnalyzer.apk

following the searches, above. Notice the directory that specifies the Crosswalk version (12.0 in this example). The file named AppAnalyzer.apk is APX and is what you need to install onto your Android device.

Before you install onto your Android device, you can double-check to see if APX is already installed:

  • find "Apps" or "Applications" in your Android device's "settings" section
  • find "App Preview Crosswalk" in the list of apps on your device (there can be more than one)

If you found one or more App Preview Crosswalk apps on your device, you can see which versions they are by using adb at the command-line (this assumes, of course, that your device is connected via USB and you can communicate with it using adb):

  1. type adb devices at the command-line to confirm you can see your device
  2. type adb shell 'pm list packages -f' at the command-line
  3. search the output for the word app_analyzer

The specific version(s) of APX installed on your device end with a version ID. For example: com.intel.app_analyzer.v12 means you have APX for Crosswalk 12 installed on your device.

To install a copy of APX manually, cd to the directory containing the version of APX you want to install and then use the following adb command:

$ adb install AppAnalyzer.apk

If you need to remove the v12 copy of APX, due to crowding of available storage space, you can remove it using the following adb command:

$ adb uninstall com.intel.app_analyzer.v12

or

$ adb shell am start -a android.intent.action.DELETE -d package:com.intel.app_analyzer.v12

The second command uses the Android delete intent to remove the app; you'll have to confirm the uninstall request on the Android device's screen. See this SO issue for details. Obviously, if you want to uninstall a different version of APX, specify the package ID corresponding to that version of APX.

Why is Chrome remote debug not working with my Android or Crosswalk app?

For a detailed discussion regarding how to use Chrome on your desktop to debug an app running on a USB-connected device, please read this doc page Remote Chrome* DevTools* (CDT).

Check to be sure the following conditions have been met:

  • The version of Chrome on your desktop is greater than or equal to the version of the Chrome webview in which you are debugging your app.

    For example, Crosswalk 12 uses the Chrome 41 webview, so you must be running Chrome 41 or greater on your desktop to successfully attach a remote Chrome debug session to an app built with Crosswalk 12. The native Chrome webview in an Android 4.4.2 device is Chrome 30, so your desktop Chrome must be greater than or equal to Chrome version 30 to debug an app that is running on that native webview.
  • Your Android device is running Android 4.4 or higher, if you are trying to remote debug an app running in the device's native webview, and it is running Android 4.0 or higher if you are trying to remote debug an app running Crosswalk.

    When debugging against the native webview, remote debug with Chrome requires that the remote webview is also Chrome; this is not guaranteed to be the case if your Android device does not include a license for Google services. Some manufacturers do not have a license agreement with Google for distribution of the Google services on their devices and, therefore, may not include Chrome as their native webview, even if they are an Android 4.4 or greater device.
  • Your app has been built to allow for remote debug.

    Within the intelxdk.config.additions.xml file you must include this line: <preference name="debuggable" value="true" /> to build your app for remote debug. Without this option your app cannot be attached to for remote debug by Chrome on your desktop.

How do I detect if my code is running in the Emulate tab?

In the obsolete intel.xdk APIs there is a property, intel.xdk.isxdk, that you can test to detect whether your app is running within the Emulate tab or on a device. A simple alternative is to perform the following test:

if( window.tinyHippos )

If the test passes (the result is true) you are executing in the Emulate tab.
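
As a minimal sketch, the test can be wrapped in a helper (the function name is our own, not an Intel XDK API):

// window.tinyHippos is injected by the Ripple-based Emulate tab only
function isEmulateTab() {
    return typeof window.tinyHippos !== "undefined";
}

if (isEmulateTab()) {
    console.log("running inside the Emulate tab");
} else {
    console.log("running on a device or in App Preview");
}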

Never ending "Transferring your project files to the Testing Device" message from Debug tab; results in no Chrome DevTools debug console.

This is a known issue, but a resolution has not yet been determined. If you find yourself facing this issue, the following steps can help work around it.

On a Windows machine, exit the Intel XDK and open a "command prompt" window:

> cd %LocalAppData%\XDK
> rmdir cdt_depot /s /q

On a Mac or Linux machine, exit the Intel XDK and open a "terminal" window:

$ find ~ -name global-settings.xdk
$ cd <location-found-above>
$ rm -Rf cdt_depot

Restart the Intel XDK and try the Debug tab again. This procedure is deleting the cached copies of the Chrome DevTools that were retrieved from the corresponding App Preview debug module that was installed on your test device.

One known trigger for this problem is removing one device from the USB connection and attaching a new device for debug. A workaround that sometimes helps when switching between devices is to:

  • switch to the Develop tab
  • close the XDK
  • detach the old device from the USB
  • attach the new device to your USB
  • restart the XDK
  • switch to the Debug tab

Can you integrate the iOS Simulator as a testing platform for Intel XDK projects?

The iOS simulator only runs on Apple Macs... We're trying to make the Intel XDK accessible to developers on the most popular platforms: Windows, Mac and Linux. Additionally, the iOS simulator requires a specially built version of your app to run; you can't just load an IPA onto it for simulation.

What is the purpose of having only a partial emulation or simulation in the Emulate tab?

There is no purpose behind it; it is simply difficult to emulate or simulate every feature and quirk of every device.

Not everyone can afford hardware for testing, especially iOS devices; what can I do?

You can buy a used iPod and that works quite well for testing iOS apps. Of course, the screen is smaller and there is no compass or phone feature, but just about everything else works like an iPhone. If you need to do a lot of iOS testing it is worth the investment. A new iPod costs $200 in the US. Used ones should cost less than that. Make sure you get one that can run iOS 8.

Is testing on Crosswalk on a virtual Android device inside VirtualBox good enough?

When you run the Android emulator you are running on a fictitious device, but it is a better emulation than what you get with the iOS simulator and the Intel XDK Emulate tab. The Crosswalk webview further abstracts the system so you get a very good simulation of a real device. However, considering how inexpensive and easy Android devices are to obtain, we highly recommend you use a real device (with the Debug tab), it will be much faster and even more accurate than using the Android emulator.

Why isn't the Intel XDK emulation as good as running on a real device?

Because the Intel XDK Emulate tab is a Chromium browser, so what you get is the behavior inside that Chromium browser along with some conveniences that make it appear to be a hybrid device. It's poorly named as an emulator, but that was the name given to it by the original Ripple Emulator project. What it is most useful for is simulating most of the core Cordova APIs and your basic application logic. After that, it's best to use real devices with the Debug tab.

Why doesn't my custom splash screen show in the emulator or App Preview?

Ensure the splash screen plugin is selected. Custom splash screens only get displayed on a built app. The emulator and app preview will always use Intel XDK splash screens. Please refer to the 9-Patch Splash Screen sample for a better understanding of how splash screens work.

Is there a way to detect if my program has stopped due to using uninitialized variable or an undefined method call?

This is where the remote debug features of the Debug tab are extremely valuable. Using a remote CDT (or remote Safari with a Mac and iOS device) is the only real option for finding such issues. WEINRE and the Test tab do not work well in that situation, because when the script stops, WEINRE stops.

Why doesn't the Intel XDK go directly to Debug assuming that I have a device connected via USB?

We are working on streamlining the debug process. There are still obstacles that need to be overcome to ensure the process of connecting to a device over USB is painless.

Can a custom debug module that supports USB debug with third-party plugins be built for iOS devices, or only for Android devices?

The Debug tab, for remote debug over USB can be used with both Android and iOS devices. Android devices work best. However, at this time, debugging with the Debug tab and third-party plugins is only supported with Android devices (running in a Crosswalk webview). We are working on making the iOS option also support debug with third-party plugins, like what you currently get with Android.

Why does my Android debug session not start when I'm using the Debug tab?

Some Android devices include a feature that prevents some applications and services from auto-starting, as a means of conserving power and maximizing available RAM. On Asus devices, for example, there is an app called the "Auto-start Manager" that manages apps that include a service that needs to start when the Android device starts.

If this is the case on your test device, you need to enable the Intel App Preview application as an app that is allowed to auto-start. See the image below for an example of the Asus Auto-start Manager:

Another thing you can try is manually starting Intel App Preview on your test device before starting a debug session with the Debug tab.

How do I share my app for testing in App Preview?

The only way to retrieve a list of apps in App Preview is to login. If you do not wish to share your credentials, you can create an alternate account and push your app to the cloud using App Preview and share that account's credentials, instead.

I am trying to use Live Layout Editing but I get a message saying Chrome is not installed on my system.

The Live Layout Editing feature of the Intel XDK is built on top of the Brackets Live Preview feature. Most of the issues you may experience with Live Layout Editing can be addressed by reviewing this Live Preview Isn't Working FAQ from the Brackets Troubleshooting wiki. In particular, see the section regarding using Chrome with Live Preview.

Back to FAQs Main

Intel® XDK FAQs - App Designer


Which App Designer framework should I use? Which Intel XDK layout framework is best?

There is no "best" App Designer framework. Each framework has pros and cons. You should choose that framework which serves your application needs best. The list below provides a quick list of pros and cons for each of the frameworks that are available as part of App Designer.

  • Twitter Bootstrap 3 -- PRO: a very clean UI framework that relies primarily on CSS with very little JavaScript trickery. Thriving third-party ecosystem with many plugins and add-ons, including themes. Probably the best place to start, especially for UI beginners. CON: some advanced mobile UI mechanisms (like swipe delete) are not part of this framework.

  • Framework 7 -- PRO: provides pixel perfect layout with device-specific UI elements for Android and iOS platforms. CON: difficult to customize and modify.

  • App Framework 3 -- PRO: an optimized for mobile library that is very lean. App Framework includes the ability to automatically change the theme as a function of the target device. For example, on an Android device the styling of your UI looks like an Android app, on an iOS device the styling looks like an iOS app, etc. CON: not as widely known as some other UI frameworks.

  • Ionic -- PRO: a very sophisticated mobile framework with many features. If you are familiar with and comfortable with Angular this framework may be a good choice for you. CON: tightly coupled with Angular, many features can only be accessed by writing JavaScript Angular directives. If you are not familiar or comfortable with Angular this is not a good choice!

  • Topcoat -- This UI framework has been deprecated and will be retired from App Designer in a future release of the Intel XDK. You can always use this (or any mobile) framework with the Intel XDK, but you will have to do so manually, without the help of the Intel XDK App Designer UI layout tool. If you wish to continue using Topcoat please visit the Topcoat project page and the Topcoat GitHub repo for documentation.

  • Ratchet -- This UI framework has been deprecated and will be retired from App Designer in a future release of the Intel XDK. You can always use this (or any mobile) framework with the Intel XDK, but you will have to do so manually, without the help of the Intel XDK App Designer UI layout tool. If you wish to continue using Ratchet please visit the Ratchet project page and the Ratchet GitHub repo for documentation.

  • jQuery Mobile -- This UI framework has been deprecated and will be retired from App Designer in a future release of the Intel XDK. You can always use this (or any mobile) framework with the Intel XDK, but you will have to do so manually, without the help of the Intel XDK App Designer UI layout tool. If you wish to continue using jQuery Mobile please visit the jQuery Mobile API page and jQuery Mobile GitHub page for documentation.

What does the Google* Map widget’s "center type" attribute and its values "Auto calculate", "Address" and "Lat/Long" mean?

The "center type" parameter defines how the map view is centered in your div. It is used to initialize the map as follows:

  • Lat/Long: center the map on a specific latitude and longitude (that you provide on the properties page)
  • Address: center the map on a specific address (that you provide on the properties page)
  • Auto Calculate: center the map on a collection of markers

This is just for initialization of the map widget. Beyond that you must use the standard Google maps APIs to move and/or modify the map. See the "google_maps.js" code for initialization of the widget and some calls to the Google maps APIs. There is also a pointer to the Google maps API at the beginning of the JS file.

To get the current position, you have to use the Geo API, and then push that into the Maps API to display it. The Google Maps API will not give you any device data, it will only display information for you. Please refer to the Intel XDK "Hello, Cordova" sample app for some help with the Geo API. There are a lot of useful comments and console.log messages.
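
As an example, the hedged sketch below centers the widget's map on the device's current position; it assumes the widget code has already created a google.maps.Map instance named map:

// Get the device position with the Geo API, then push it into the Maps API.
navigator.geolocation.getCurrentPosition(function (pos) {
    var here = new google.maps.LatLng(pos.coords.latitude, pos.coords.longitude);
    map.setCenter(here);                                  // recenter the widget's map
    new google.maps.Marker({ position: here, map: map }); // mark the current position
}, function (err) {
    console.log("geolocation error: " + err.message);
});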

How do I size UI elements in my project?

Trying to implement "pixel perfect" user interfaces with HTML5 apps is not recommended, as there is a wide array of device resolutions and aspect ratios and it is impossible to ensure you are sized properly for every device. Instead, you should use "responsive web design" techniques to build your UI so that it adapts to different sizes automatically. You can also use the CSS media query directive to build CSS rules that are specific to different screen dimensions.

Note: The viewport is sized in CSS pixels (aka virtual pixels or device independent pixels) and so the physical pixel dimensions are not what you will normally be designing for.
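
If you need to react to size changes from JavaScript as well as CSS, the standard matchMedia API can drive the same media queries; in this sketch the class names and the 600-pixel breakpoint are assumptions to adapt:

// Apply a layout class based on a CSS media query and re-apply on changes.
var mq = window.matchMedia("(min-width: 600px)");
function applyLayout(m) {
    document.body.className = m.matches ? "wide-layout" : "narrow-layout";
}
applyLayout(mq);
mq.addListener(applyLayout); // fires on rotation and window resizing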

How do I create lists, buttons and other UI elements with the Intel XDK?

The Intel XDK provides you with a way to build HTML5 apps that are run in a webview on the target device. This is analogous to running in an embedded browser (refer to this blog for details). Thus, the programming techniques are the same as those you would use inside a browser, when writing a single-page client-side HTML5 app. You can use the Intel XDK App Designer tool to drag and drop UI elements.

Why is the user interface for Chrome on Android* unresponsive?

It could be that you are using an outdated version of the App Framework* files. You can find the recent versions here. You can safely replace any App Framework files that App Designer installed in your project with more recent copies as App Designer will not overwrite the new files.

How do I work with more recent versions of App Framework* since the latest Intel XDK release?

You can replace the App Framework* files that the Intel XDK automatically inserted with more recent versions that can be found here. App designer will not overwrite your replacement.

Is there a replacement to XPATH in App Framework* for selecting nodes from an XML document?

This FAQ applies only to App Framework 2. App Framework 3 no longer includes a replacement for the jQuery selector library, it expects that you are using standard jQuery.

App Framework is a UI library that implements a subset of the jQuery* selector library. If you wish to use jQuery for XPath manipulation, it is recommend that you use jQuery as your selector library and not App Framework. However, it is also possible to use jQuery with the UI components of App Framework. Please refer to this entry in the App Framework docs.

It would look similar to this:

<script src="lib/jq/jquery.js"></script><script src="lib/af/jq.appframework.js"></script><script src="lib/af/appframework.ui.js"></script>

Why does my App Framework* app that was previously working suddenly start having issues with Android* 4.4?

Ensure you have upgraded to the latest version of App Framework. If your app was built with the now retired Intel XDK "legacy" build system be sure to set the "Targeted Android Version" to 19 in the Android-Crosswalk build settings. The legacy build targeted Android 4.2.

How do I manually set a theme?

If you want to, for example, change the theme only on Android*, you can add the following lines of code (a consolidated sketch follows the list):

  1. $.ui.autoLaunch = false; //Stop the App Framework* auto launch right after you load App Framework*
  2. Detect the underlying platform using either navigator.userAgent or intel.xdk.device.platform or window.device.platform. If the platform detected is Android*, set $.ui.useOSThemes = false to disable custom themes and set <div id="afui" class="android light">
  3. Otherwise, set $.ui.useOSThemes=true;
  4. When device ready and document ready have been detected, add $.ui.launch();
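
Put together, a minimal sketch of those steps might look like the following; it assumes App Framework 2 globals and uses window.device.platform from the Cordova device plugin for the platform test:

$.ui.autoLaunch = false; // step 1: stop the App Framework* auto launch

document.addEventListener("deviceready", function () {
    if (window.device && window.device.platform === "Android") {
        $.ui.useOSThemes = false; // step 2: disable custom themes on Android*
        document.getElementById("afui").className = "android light";
    } else {
        $.ui.useOSThemes = true;  // step 3: native-like theme elsewhere
    }
    $.ui.launch();                // step 4: launch once the device is ready
}, false);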

How does page background color work in App Framework?

In App Framework the BODY is in the background and the page is in the foreground. If you set the background color on the body, you will see the page's background color. If you set the theme to default, App Framework uses a native-like theme based on the device at runtime; otherwise, it uses the App Framework theme. This is normally done using the following:

<script>
  $(document).ready(function() {
    $.ui.useOSThemes = false;
  });
</script>

Please see Customizing App Framework UI Skin for additional details.

What kind of templates can I use to create App Designer projects?

Currently, you can only create App Designer projects by selecting the blank 'HTML5+Cordova' template with app designer (select the app designer check box at the bottom of the template box) and the blank 'Standard HTML5' template with app designer. 

App Designer versions of the layout and user interface templates existed previously, but they were removed in the Intel XDK 3088 release.

My AJAX calls do not work on Android; I'm getting valid JSON data with an invalid return code.

The jQuery 1 library appears to be incompatible with the latest versions of the cordova-android framework. To fix this issue you can either upgrade your jQuery library to jQuery 2 or use a technique similar to that shown in the following test code fragment to check your AJAX return codes. See this forum thread for more details. 

The jQuery site only tests jQuery 2 against Cordova/PhoneGap apps (the Intel XDK builds Cordova apps). See the How to Use It section of the jQuery 2.0 release post at https://blog.jquery.com/2013/04/18/jquery-2-0-released/ for more information.

Note, in particular, the switch case that checks for zero and 200. This test fragment does not cover all possible AJAX return codes, but should help you if you wish to continue to use a jQuery 1 library as part of your Cordova application.

function jqueryAjaxTest() {

     /* button  #botRunAjax */
     $(document).on("click", "#botRunAjax", function (evt) {
         console.log("function started");
         var wpost = "e=132&c=abcdef&s=demoBASICA";
         $.ajax({
             type: "POST",
             crossDomain: true, //;paf; see http://stackoverflow.com/a/25109061/2914328
             url: "http://your.server.url/address",
             data: wpost,
             dataType: 'json',
             timeout: 10000
         })
         .always(function (retorno, textStatus, jqXHR) { //;paf; see http://stackoverflow.com/a/19498463/2914328
             console.log("jQuery version: " + $.fn.jquery) ;
             console.log("arg1:", retorno) ;
             console.log("arg2:", textStatus) ;
             console.log("arg3:", jqXHR) ;
             if( parseInt($.fn.jquery) === 1 ) {
                 switch (retorno.status) {
                    case 0:
                    case 200:
                        console.log("exit OK");
                        console.log(JSON.stringify(retorno.responseJSON));
                        break;
                    case 404:
                        console.log("exit by FAIL");
                        console.log(JSON.stringify(retorno.responseJSON));
                        break;
                    default:
                        console.log("default switch happened") ;
                        console.log(JSON.stringify(retorno.responseJSON));
                        break ;
                 }
             }
             // else-if keeps the jQuery 1 branch from falling through to "unknown"
             else if( (parseInt($.fn.jquery) === 2) && (textStatus === "success") ) {
                 switch (jqXHR.status) {
                    case 0:
                    case 200:
                        console.log("exit OK");
                        console.log(JSON.stringify(jqXHR.responseJSON));
                        break;
                    case 404:
                        console.log("exit by FAIL");
                        console.log(JSON.stringify(jqXHR.responseJSON));
                        break;
                    default:
                        console.log("default switch happened") ;
                        console.log(JSON.stringify(jqXHR.responseJSON));
                        break ;
                 }
             }
             else {
                console.log("unknown") ;
             }
         });
     });
 }

Back to FAQs Main

China’s Meridian Medical Networks Uses Trusted Analytics Platform to Build Big Data-Driven Hypertension Risk Model


Meridian Medical Network Corp. (Meridian) is an innovative healthcare software and solution developer that serves the HCC market by building a smart health system, including backend big data technologies and frontend mobile applications and monitoring systems.

Meridian’s big data analysis strategy for healthcare incorporates data from both the hospital and consumer. Hospital-generated data include health checkup data, clinical medical records and survey information, and serve as the general “baseline” for providing healthcare services for consumers with certain health conditions. Consumer-generated data include vital signs data streamed continuously from healthcare IoT devices, plus dynamic questionnaire answers from consumers gathered through applications provided by Meridian, which together serve as the personal “calibration” for determining healthcare services for a specific consumer.

Meridian and Intel chose the open source Trusted Analytics Platform (TAP) to build their analytic models and service applications because of its ability to handle large data sets and streamline the analytics workflow.

Download complete White Paper (PDF): Intel_Meridian_Whitepaper.pdf

Intel® XDK FAQs - Cordova


How do I set app orientation?

You set the orientation under the Build Settings section of the Projects tab.

To control the orientation of an iPad you may need to create a simple plugin that contains a single plugin.xml file like the following:

<config-file target="*-Info.plist" parent="UISupportedInterfaceOrientations~ipad" overwrite="true">
    <string></string>
</config-file>
<config-file target="*-Info.plist" parent="UISupportedInterfaceOrientations~ipad" overwrite="true">
    <array>
        <string>UIInterfaceOrientationPortrait</string>
    </array>
</config-file>

Then add the plugin as a local plugin using the plugin manager on the Projects tab.

HINT: to import the plugin.xml file you created above, you must select the folder that contains the plugin.xml file; you cannot select the plugin.xml file itself using the import dialog, because a typical plugin consists of many files, not a single plugin.xml. The plugin you created based on the instructions above only requires a single file; it is an atypical plugin.

Alternatively, you can use this plugin: https://github.com/yoik/cordova-yoik-screenorientation. Import it as a third-party Cordova* plugin using the plugin manager with the following information:

  • cordova-plugin-screen-orientation
  • specify a version (e.g. 1.4.0) or leave blank for the "latest" version

Or, you can reference it directly from its GitHub repo:

To use the screen orientation plugin referenced above you must add some JavaScript code to your app to manipulate the additional JavaScript API that is provided by this plugin. Simply adding the plugin will not automatically fix your orientation, you must add some code to your app that takes care of this. See the plugin's GitHub repo for details on how to use that API.
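
As a hedged sketch, assuming the cordova-plugin-screen-orientation API (screen.orientation.lock, which returns a promise), locking to portrait could look like this:

// Lock the orientation once the native bridge is ready.
document.addEventListener("deviceready", function () {
    screen.orientation.lock("portrait").then(function () {
        console.log("orientation locked to portrait");
    }).catch(function (err) {
        console.log("orientation lock failed: " + err);
    });
}, false);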

Is it possible to create a background service using Intel XDK?

Background services require the use of specialized Cordova* plugins that need to be created specifically for your needs. Intel XDK does not support development or debug of plugins, only the use of them as "black boxes" with your HTML5 app. Background services can be accomplished using Java on Android or Objective C on iOS. If a plugin that backgrounds the functions required already exists (for example, this plugin for background geo tracking), Intel XDK's build system will work with it.

How do I send an email from my App?

You can use the Cordova* email plugin or use web intent - PhoneGap* and Cordova* 3.X.

How do you create an offline application?

You can use the technique described here by creating an offline.appcache file and then setting it up to store the files that are needed to run the program offline. Note that offline applications need to be built using the Cordova* or Legacy Hybrid build options.
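
A minimal offline.appcache manifest might look like the following sketch; the file names are placeholders, and every file the app needs offline must be listed in the CACHE section:

CACHE MANIFEST
# v1 - change this comment to force clients to refresh the cache

CACHE:
index.html
css/app.css
js/app.js
images/logo.png

NETWORK:
*

The manifest is referenced from the manifest attribute of the html tag, for example <html manifest="offline.appcache">.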

How do I work with alarms and timed notifications?

Unfortunately, alarms and notifications are advanced subjects that require a background service. This cannot be implemented in HTML5 and can only be done in native code by using a plugin. Background services require the use of specialized Cordova* plugins that need to be created specifically for your needs. Intel XDK does not support the development or debug of plugins, only the use of them as "black boxes" with your HTML5 app. Background services can be accomplished using Java on Android or Objective C on iOS. If a plugin that backgrounds the functions required already exists (for example, this plugin for background geo tracking) the Intel XDK's build system will work with it.

How do I get a reliable device ID?

You can use the Phonegap/Cordova* Unique Device ID (UUID) plugin for Android*, iOS* and Windows* Phone 8.
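
For example, with the standard Cordova device plugin (cordova-plugin-device), the identifier is exposed as device.uuid once the deviceready event fires; a minimal sketch:

// Read the device identifier after the native bridge is ready.
document.addEventListener("deviceready", function () {
    console.log("device UUID: " + device.uuid);
}, false);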

How do I implement In-App purchasing in my app?

There is a Cordova* plugin for this. A tutorial on its implementation can be found here. There is also a sample in Intel XDK called 'In App Purchase' which can be downloaded here.

How do I install custom fonts on devices?

Fonts can be considered an asset included with your app, private to the app and not shared with other apps on the device, just like images and CSS files. It is possible to share some files between apps using, for example, the SD card space on an Android* device. If you include the font files as assets in your application, there is no download time to consider; they are part of your app and already exist on the device after installation.

How do I access the device's file storage?

You can use HTML5 local storage; this is a good article to get started with. Alternatively, there is a Cordova* file plugin for that.

Why isn't AppMobi* push notification services working?

This seems to be an issue on AppMobi's end and can only be addressed by them. PushMobi is only available in the "legacy" container. AppMobi* has not developed a Cordova* plugin, so it cannot be used in the Cordova* build containers. Thus, it is not available with the default build system. We recommend that you consider using the Cordova* push notification plugin instead.

How do I configure an app to run as a service when it is closed?

If you want a service to run in the background you'll have to write a service, either by creating a custom plugin or writing a separate service using standard Android* development tools. The Cordova* system does not facilitate writing services.

How do I dynamically play videos in my app?

  1. Download the Javascript and CSS files from https://github.com/videojs and include them in your project file.
  2. Add references to them into your index.html file.
  3. Add a panel 'main1' that will be playing the video. This panel will be launched when the user clicks on the video in the main panel.

     
    <div class="panel" id="main1" data-appbuilder-object="panel" style=""><video id="example_video_1" class="video-js vjs-default-skin" controls="controls" preload="auto" width="200" poster="camera.png" data-setup="{}"><source src="JAIL.mp4" type="video/mp4"><p class="vjs-no-js">To view this video please enable JavaScript*, and consider upgrading to a web browser that <a href=http://videojs.com/html5-video-support/ target="_blank">supports HTML5 video</a></p></video><a onclick="runVid3()" href="#" class="button" data-appbuilder-object="button">Back</a></div>
  4. When the user clicks on the video, the click event sets the 'src' attribute of the video element to what the user wants to watch.

     
    function runVid2() {
          document.getElementsByTagName("video")[0].setAttribute("src", "appdes.mp4");
          $.ui.loadContent("#main1", true, false, "pop");
    }
  5. The 'main1' panel opens waiting for the user to click the play button.

NOTE: The video does not play in the emulator and so you will have to test using a real device. The user also has to stop the video using the video controls. Clicking on the back button results in the video playing in the background.

How do I design my Cordova* built Android* app for tablets?

This page lists a set of guidelines to follow to make your app of tablet quality. If your app fulfills the criteria for tablet app quality, it can be featured in Google* Play's "Designed for tablets" section.

How do I resolve icon related issues with Cordova* CLI build system?

Ensure icon sizes are properly specified in the intelxdk.config.additions.xml file. For example, if you are targeting iOS* 6, you need to manually specify the icon sizes that iOS* 6 uses.

<icon platform="ios" src="images/ios/72x72.icon.png" width="72" height="72" /><icon platform="ios" src="images/ios/57x57.icon.png" width="57" height="57" />

These icon sizes are not included by default in the build system, so you have to specify them in the additions file.

For more information on adding build options using intelxdk.config.additions.xml, visit: /en-us/html5/articles/adding-special-build-options-to-your-xdk-cordova-app-with-the-intelxdk-config-additions-xml-file

Is there a plugin I can use in my App to share content on social media?

Yes, you can use the PhoneGap Social Sharing plugin for Android*, iOS* and Windows* Phone.

Iframe does not load in my app. Is there an alternative?

Yes, you can use the inAppBrowser plugin instead.

Why are intel.xdk.istablet and intel.xdk.isphone not working?

Those properties are quite old and are based on the legacy AppMobi* system. An alternative is to detect the viewport size instead. You can get the user's screen size using the screen.width and screen.height properties (refer to this article for more information) and control the actual view of the webview by using the viewport meta tag (this page has several examples). You can also look through this forum thread for a detailed discussion of the same.
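As a minimal sketch (the 600-pixel cutoff below is an assumption, a common heuristic you should tune for your target devices):

// Approximate the retired intel.xdk.istablet/isphone checks from the screen size.
var minDim = Math.min(screen.width, screen.height);
var isTablet = minDim >= 600;   // heuristic threshold, not a guarantee
var isPhone  = !isTablet;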

How do I enable security in my app?

We recommend using the App Security API. App Security API is a collection of JavaScript API for Hybrid HTML5 application developers. It enables developers, even those who are not security experts, to take advantage of the security properties and capabilities supported by the platform. The API collection is available to developers in the form of a Cordova plugin (JavaScript API and middleware), supported on the following operating systems: Windows, Android & iOS.
For more details please visit: https://software.intel.com/en-us/app-security-api.

For enabling it, please select the App Security plugin on the plugins list of the Project tab and build your app as a Cordova Hybrid app. After adding the plugin, you can start using it simply by calling its API. For more details about how to get started with the App Security API plugin, please see the relevant sample app articles at: https://software.intel.com/en-us/xdk/article/my-private-photos-sample and https://software.intel.com/en-us/xdk/article/my-private-notes-sample.

Why does my build fail with Admob plugins? Is there an alternative?

Intel XDK does not support the library project that has been newly introduced in the com.google.playservices@21.0.0 plugin. Admob plugins are dependent on "com.google.playservices", which adds Google* play services jar to project. The "com.google.playservices@19.0.0" is a simple jar file that works quite well but the "com.google.playservices@21.0.0" is using a new feature to include a whole library project. It works if built locally with Cordova CLI, but fails when using Intel XDK.

To remain compatible with the Intel XDK, change the admob plugin's dependency to "com.google.playservices@19.0.0".

Why does the intel.xdk.camera plugin fail? Is there an alternative?

There seem to be some general issues with the camera plugin on iOS*. An alternative is to use the Cordova camera plugin instead, and change the version to 0.3.3.

How do I resolve Geolocation issues with Cordova?

Give this app a try; it contains lots of useful comments and console log messages. However, use the Cordova 0.3.10 version of the geo plugin instead of the Intel XDK geo plugin. The Intel XDK buttons in the sample app will not work in a built app because the Intel XDK geo plugin is not included, but they will partially work in the Emulator and the Debug tab. If you test it on a real device, without the Intel XDK geo plugin selected, you should be able to see what is working and what is not on your device. Note that the Intel XDK geo plugin cannot be used in the same build as the Cordova geo plugin; do not use the Intel XDK geo plugin, as it will be discontinued.

Geo fine might not work because of the following reasons:

  1. Your device does not have a GPS chip
  2. It is taking a long time to get a GPS lock (if you are indoors)
  3. The GPS on your device has been disabled in the settings

Geo coarse is the safest bet to quickly get an initial reading. It gets a reading based on a variety of inputs; it is usually not as accurate as geo fine, but generally accurate enough to know what town you are located in and your approximate location in that town. Geo coarse will also prime the geo cache so there is something to read when you try to get a geo fine reading. Ensure your code can handle situations where you might not be getting any geo data, as there is no guarantee you'll be able to get a geo fine reading at all, or in a reasonable period of time. Success with geo fine is highly dependent on many parameters that are typically outside of your control.

Is there an equivalent Cordova* plugin for intel.xdk.player.playPodcast? If so, how can I use it?

Yes, there is and you can find the one that best fits the bill from the Cordova* plugin registry.

To make this work you will need to do the following:

  • Detect your platform (you can use uaparser.js or you can do it yourself by inspecting the user agent string)
  • Include the plugin only on the Android* platform and use <video> on iOS*.
  • Create conditional code to do what is appropriate for the platform detected

You can force a plugin to be part of an Android* build by adding it manually into the additions file. To see what the basic directives are to include a plugin manually:

  1. Include it using the "import plugin" dialog, perform a build and inspect the resulting intelxdk.config.android.xml file.
  2. Then remove it from your Project tab settings, copy the directive from that config file and paste it into the intelxdk.config.additions.xml file. Prefix that directive with <!-- +Android* -->.

More information is available here and this is what an additions file can look like:

<preference name="debuggable" value="true" /><preference name="StatusBarOverlaysWebView" value="false" /><preference name="StatusBarBackgroundColor" value="#000000" /><preference name="StatusBarStyle" value="lightcontent" /><!-- -iOS* --><intelxdk:plugin intelxdk:value="nl.nielsad.cordova.wifiscanner" /><!-- -Windows*8 --><intelxdk:plugin intelxdk:value="nl.nielsad.cordova.wifiscanner" /><!-- -Windows*8 --><intelxdk:plugin intelxdk:value="org.apache.cordova.statusbar" /><!-- -Windows*8 --><intelxdk:plugin intelxdk:value="https://github.com/EddyVerbruggen/Flashlight-PhoneGap-Plugin" />

This sample forces plugins that were included with the "import plugin" dialog to be excluded from the platforms named in the comment prefixes, with the net effect that the plugin is included only in the Android* build. Pair this with conditional code in your app that only calls the plugin on the platform where it is present.

How do I display a webpage in my app without leaving my app?

The most effective way to do so is by using inAppBrowser.
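A minimal sketch, assuming the cordova-plugin-inappbrowser plugin is selected in your project (the URL is illustrative):

// Open a page in an in-app browser window instead of leaving the app.
document.addEventListener("deviceready", function () {
    // "_blank" opens in-app; "_system" would hand off to the system browser.
    var ref = cordova.InAppBrowser.open("https://example.com", "_blank", "location=yes");
}, false);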

Does Cordova* media have callbacks in the emulator?

While Cordova* media objects have proper callbacks when using the debug tab on a device, the emulator doesn't report state changes back to the Media object. This functionality has not been implemented yet. Under emulation, the Media object is implemented by creating an <audio> tag in the program under test. The <audio> tag emits a bunch of events, and these could be captured and turned into status callbacks on the Media object.

Why does the Cordova version number not match the Projects tab's Build Settings CLI version number, the Emulate tab, App Preview and my built app?

This is due to the difficulty in keeping different components in sync and is compounded by the version numbering convention that the Cordova project uses to distinguish build tool versions (the CLI version) from platform framework versions (the Cordova framework version) and plugin versions.

The CLI version you specify in the Projects tab's Build Settings section is the "Cordova CLI" version that the build system uses to build your app. Each version of the Cordova CLI tools come with a set of "pinned" Cordova platform framework versions, which are tied to the target platform.

NOTE: the specific Cordova platform framework versions shown below are subject to change without notice.

Our Cordova CLI 4.1.2 build system is "pinned" to: 

  • cordova-android@3.6.4 (Android Cordova framework version 3.6.4)
  • cordova-ios@3.7.0 (iOS Cordova framework version 3.7.0)
  • cordova-windows@3.7.0 (Cordova Windows framework version 3.7.0)

Our Cordova CLI 5.1.1 build system is "pinned" to:

  • cordova-android@4.1.1 (as of March 23, 2016)
  • cordova-ios@3.8.0
  • cordova-windows@4.0.0

Our Cordova CLI 5.4.1 build system is "pinned" to: 

  • cordova-android@5.0.0
  • cordova-ios@4.0.1
  • cordova-windows@4.3.1

Our CLI 5.4.1 build system really should be called "CLI 5.4.1+" because the platform framework versions it uses are closer to the "pinned" versions in the Cordova CLI 6.0.0 release than those "pinned" in the original CLI 5.4.1 release.

The Cordova platform framework version you get when you build an app does not equal the CLI version number in the Build Settings section of the Projects tab; it equals the Cordova platform framework version that is "pinned" to our build system's CLI version (see the list of pinned versions, above).

Technically, the target-specific Cordova frameworks can be updated [independently] for a given version of CLI tools. In some cases, our build system may use a Cordova platform framework version that is later than the version that was "pinned" to the CLI when it was originally released by the Cordova project (that is, the Cordova framework versions originally specified by the Cordova CLI x.y.z links above).

The reasons you may see Cordova framework version differences between the Emulate tab, App Preview and your built app are:

  • The Emulate tab has one specific Cordova framework version built into it. We try to make sure that version of the Cordova framework closely matches the default Intel XDK version of Cordova CLI.
  • App Preview is released independently of the Intel XDK and, therefore, may use a different version than what you will see reported by the Emulate tab or your built app. Again, we try to release App Preview so it matches the version of the Cordova framework that is considered the default version for the Intel XDK at the time App Preview is released; but since the various tools are not always released in perfect sync, that is not always possible.
  • Your app is built with a "pinned" Cordova platform framework version, which is determined by the Cordova CLI version you specified in the Projects tab's Build Settings section. There are always at least two different CLI versions available in the Intel XDK build system.
  • For those versions of Crosswalk that are built with the Intel XDK CLI 4.1.2 build system, the cordova-android framework version is determined by the Crosswalk project, not by the Intel XDK build system.
  • For those versions of Crosswalk that are built with Intel XDK CLI 5.1.1 and later build systems, the cordova-android framework version equals that specified in the lists above (it equals the "pinned" cordova-android platform version for that CLI version).

Do these Cordova framework version numbers matter? Occasionally, yes, but normally, not that much. There are some issues that come up that are related to the Cordova framework version, but they tend to be rare. The majority of the bugs and compatibility issues you will experience in your app have more to do with the versions and mix of Cordova plugins you choose to use and the HTML5 webview runtime on your test devices. See this blog for more details about what a webview is and why the webview matters to your app: When is an HTML5 Web App a WebView App?.

The "default version" of the CLI that the Intel XDK uses is rarely the most recent version of the Cordova CLI tools distributed by the Cordova project. There is always a lag between Cordova project releases and our ability to incorporate those releases into our build system and the various Intel XDK components. We are not able to provide every release that is made by the Cordova project.

How do I add a third party plugin?

Please follow the instructions on this doc page to add a third-party plugin: Adding Plugins to Your Intel® XDK Cordova* App. The plugin is not incorporated into your app until you build it; you will see it in the build log if it was successfully added to your build.

How do I make an AJAX call that works in my browser work in my app?

Please follow the instructions in this article: Cordova CLI 4.1.2 Domain Whitelisting with Intel XDK for AJAX and Launching External Apps.

I get an "intel is not defined" error, but my app works in Test tab, App Preview and Debug tab. What's wrong?

When your app runs in the Test tab, App Preview, or the Debug tab, the intel.xdk and core Cordova functions are automatically included for easy debugging. That is, the plugins required to implement those APIs on a real device are already included in the corresponding debug modules.

When you build your app you must include the plugins that correspond to the APIs you are using in your build settings. This means you must enable the Cordova and/or XDK plugins that correspond to the APIs you are using. Go to the Projects tab and ensure that the plugins you need are selected in your project's plugin settings. See Adding Plugins to Your Intel® XDK Cordova* App for additional details.

How do I target my app for use only on an iPad or only on an iPhone?

There is an undocumented feature in Cordova that should help you (the Cordova project provided this feature but failed to document it for the rest of the world). If you use the appropriate preference in the intelxdk.config.additions.xml file you should get what you need:

<preference name="target-device" value="tablet" />     <!-- Installs on iPad, not on iPhone --><preference name="target-device" value="handset" />    <!-- Installs on iPhone, iPad installs in a zoomed view and doesn't fill the entire screen --><preference name="target-device" value="universal" />  <!-- Installs on iPhone and iPad correctly -->

If you need info regarding the additions.xml file, see the blank template or this doc file: Adding Intel® XDK Cordova Build Options Using the Additions File.

Why does my build fail when I try to use the Cordova* Capture Plugin?

The Cordova* Capture plugin has a dependency on the File plugin. Please make sure you have both plugins selected on the Projects tab.

How can I pinch and zoom in my Cordova* app?

For now, using the viewport meta tag is the only option to enable pinch and zoom. However, its behavior is unpredictable in different webviews. Testing a few sample apps has led us to believe that this feature works better on Crosswalk for Android. You can test this by building the Hello Cordova sample app for Android and for Crosswalk for Android; pinch and zoom will work only on the latter, even though they both have:

<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=yes, minimum-scale=1, maximum-scale=2">.

Please visit the following pages to get a better understanding of when to build with Crosswalk for Android:

http://blogs.intel.com/evangelists/2014/09/02/html5-web-app-webview-app/

https://software.intel.com/en-us/xdk/docs/why-use-crosswalk-for-android-builds

Another device-oriented approach is to enable it by turning on Android accessibility gestures.

How do I make my Android application use the fullscreen so that the status and navigation bars disappear?

The Cordova* fullscreen plugin can be used to do this. For example, in your initialization code, call AndroidFullScreen.immersiveMode(null, null);

You can get this third-party plugin from here https://github.com/mesmotronic/cordova-fullscreen-plugin
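A minimal sketch of the initialization call, assuming this third-party plugin is included in your build:

// Hide the status and navigation bars once the device is ready.
document.addEventListener("deviceready", function () {
    AndroidFullScreen.immersiveMode(null, null);  // success/error callbacks omitted
}, false);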

How do I add XXHDPI and XXXHDPI icons to my Android or Crosswalk application?

The Cordova CLI 4.1.2 build system can support this feature, but our 4.1.2 build system (and the 2170 version of the Intel XDK) does not handle the XX and XXX sizes directly. Use this workaround until these sizes are supported directly:

  • copy your XX and XXX icons into your source directory (usually named www)
  • add the following lines to your intelxdk.config.additions.xml file
  • see this Cordova doc page for some more details

Assuming your icons and splash screen images are stored in the "pkg" directory inside your source directory (your source directory is usually named www), add lines similar to these into your intelxdk.config.additions.xml file (the precise names of your png files may be different than what is shown here):

<!-- for adding xxhdpi and xxxhdpi icons on Android -->
<icon platform="android" src="pkg/xxhdpi.png" density="xxhdpi" />
<icon platform="android" src="pkg/xxxhdpi.png" density="xxxhdpi" />
<splash platform="android" src="pkg/splash-port-xhdpi.png" density="port-xhdpi"/>
<splash platform="android" src="pkg/splash-land-xhdpi.png" density="land-xhdpi"/>

The precise names of your PNG files are not important, but the "density" designations are very important and, of course, the respective resolutions of your PNG files must be consistent with Android requirements. Those density parameters specify the respective "res-drawable-*dpi" directories that will be created in your APK for use by the Android system. NOTE: the splash screen references are included for illustration only; you do not need to use this technique for splash screens.

You can continue to insert the other icons into your app using the Intel XDK Projects tab.

Which plugin is the best to use with my app?

We are not able to track all the plugins out there, so we generally cannot give you a "this is better than that" evaluation of plugins. Check the Cordova plugin registry to see which plugins are most popular and check Stack Overflow to see which are best supported; also, check the individual plugin repos to see how well the plugin is supported and how frequently it is updated. Since the Cordova platform and the mobile platforms continue to evolve, those that are well-supported are likely to be those that have good activity in their repo.

Keep in mind that the XDK builds Cordova apps, so whichever plugins you find being supported and working best with other Cordova (or PhoneGap) apps would likely be your "best" choice.

See Adding Plugins to Your Intel® XDK Cordova* App for instructions on how to include third-party plugins with your app.

What are the rules for my App ID?

The precise App ID naming rules vary as a function of the target platform (eg., Android, iOS, Windows, etc.). Unfortunately, the App ID naming rules are further restricted by the Apache Cordova project and sometimes change with updates to the Cordova project. The Cordova project is the underlying technology that your Intel XDK app is based upon; when you build an Intel XDK app you are building an Apache Cordova app.

CLI 5.1.1 has more restrictive App ID requirements than previous versions of Apache Cordova (the CLI version refers to Apache Cordova CLI release versions). In this case, the Apache Cordova project decided to set limits on acceptable App IDs to equal the minimum set for all platforms. We hope to eliminate this restriction in a future release of the build system, but for now (as of the 2496 release of the Intel XDK), the current requirements for CLI 5.1.1 are:

  • Each section of the App ID must start with a letter
  • Each section can only consist of letters, numbers, and the underscore character
  • Each section cannot be a Java keyword
  • The App ID must consist of at least 2 sections (each section separated by a period ".").
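For example, under these rules "com.mycompany.myapp2" is a valid App ID, while "com.1company.new" is not: its second section starts with a digit and its third section ("new") is a Java keyword.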

iOS /usr/bin/codesign error: certificate issue for iOS app?

If you are getting an iOS build fail message in your detailed build log that includes a reference to a signing identity error you probably have a bad or inconsistent provisioning file. The "no identity found" message in the build log excerpt, below, means that the provisioning profile does not match the distribution certificate that was uploaded with your application during the build phase.

Signing Identity:     "iPhone Distribution: XXXXXXXXXX LTD (Z2xxxxxx45)"
Provisioning Profile: "MyProvisioningFile"
                      (b5xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxe1)

    /usr/bin/codesign --force --sign 9AxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxA6 --resource-rules=.../MyApp/platforms/ios/build/device/MyApp.app/ResourceRules.plist --entitlements .../MyApp/platforms/ios/build/MyApp.build/Release-iphoneos/MyApp.build/MyApp.app.xcent .../MyApp/platforms/ios/build/device/MyApp.app
9AxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxA6: no identity found
Command /usr/bin/codesign failed with exit code 1

** BUILD FAILED **


The following build commands failed:
    CodeSign build/device/MyApp.app
(1 failure)

The excerpt shown above will appear near the very end of the detailed build log. The unique number patterns in this example have been replaced with "xxxx" strings for security reasons. Your actual build log will contain hexadecimal strings.

iOS Code Sign error: bundle ID does not match app ID?

If you are getting an iOS build fail message in your detailed build log that includes a reference to a "Code Sign error" you may have a bad or inconsistent provisioning file. The "Code Sign" message in the build log excerpt, below, means that the bundle ID you specified in your Apple provisioning profile does not match the app ID you provided to the Intel XDK to upload with your application during the build phase.

Code Sign error: Provisioning profile does not match bundle identifier: The provisioning profile specified in your build settings (MyBuildSettings) has an AppID of my.app.id which does not match your bundle identifier my.bundleidentifier.
CodeSign error: code signing is required for product type 'Application' in SDK 'iOS 8.0'

** BUILD FAILED **

The following build commands failed:
    Check dependencies
(1 failure)
Error code 65 for command: xcodebuild with args: -xcconfig,...

The message above translates into: "the bundle ID you entered in the project settings of the XDK does not match the bundle ID (app ID) that you created on Apple's developer portal and then used to create a provisioning profile."

What are plugin variables used for? Why do I need to supply plugin variables?

Some plugins require details that are specific to your app or your developer account; for example, to authorize your app as one that belongs to you, the developer, so services can be properly routed to the service provider. The precise reasons depend on the specific plugin and its function.

What happened to the Intel XDK "legacy" build options?

On December 14, 2015 the Intel XDK legacy build options were retired and are no longer available to build apps. The legacy build option is based on three year old technology that predates the current Cordova project. All Intel XDK development efforts for the past two years have been directed at building standard Apache Cordova apps.

Many of the intel.xdk legacy APIs that were supported by the legacy build options have been migrated to standard Apache Cordova plugins and published as open source plugins. The API details for these plugins are available in the README.md files in the respective 01.org GitHub repos. Additional details regarding the new Cordova implementations of the intel.xdk legacy APIs are available in the doc page titled Intel XDK Legacy APIs.

Standard Cordova builds do not require the use of the "intelxdk.js" and "xhr.js" phantom scripts. Only the "cordova.js" phantom script is required to successfully build Cordova apps. If you have been including "intelxdk.js" and "xhr.js" in your Cordova builds they have been quietly ignored. You should remove references to these files from your "index.html" file; leaving them in will do no harm, it simply results in a warning that the respective script file cannot be found at runtime.

The Emulate tab will continue to support some legacy intel.xdk APIs that are NOT supported in the Cordova builds (only those intel.xdk APIs that are supported by the open source plugins are available to a Cordova built app, and only if you have included the respective intel.xdk plugins). This Emulate tab discrepancy will be addressed in a future release of the Intel XDK.

More information can be found in this forum post > https://software.intel.com/en-us/forums/intel-xdk/topic/601436.

Which build files do I submit to the Windows Store and which do I use for testing my app on a device?

There are two things you can do with the build files generated by the Intel XDK Windows build options: side-load your app onto a real device (for testing) or publish your app in the Windows Store (for distribution). Microsoft has changed the files you use for these purposes with each release of a new platform. As of December, 2015, the packages you might see in a build, and their uses, are:

  • appx works best for side-loading, and can also be used to publish your app.
  • appxupload is preferred for publishing your app, it will not work for side-loading.
  • appxbundle will work for both publishing and side-loading, but is not preferred.
  • xap is for legacy Windows Phone; works for both publishing and side-loading.

In essence: XAP (WP7) was superseded by APPXBUNDLE (Win8 and WP8.0), which was superseded by APPX (Win8/WP8.1/UAP), which has been supplemented with APPXUPLOAD. APPX and APPXUPLOAD are the preferred formats. For more information regarding these file formats, see Upload app packages on the Microsoft developer site.

Side-loading a Windows Phone app onto a real device, over USB, requires a Windows 8+ development system (see Side-Loading Windows* Phone Apps for complete instructions). If you do not have a physical Windows development machine you can use a virtual Windows machine, or use the Windows Store Beta testing and targeted distribution technique to get your app onto real test devices.

Side-loading a Windows tablet app onto a Windows 8 or Windows 10 laptop or tablet is simpler. Extract the contents of the ZIP file that you downloaded from the Intel XDK build system, open the "*_Test" folder inside the extracted folder, and run the PowerShell script (ps1 file) contained within that folder on the test machine (the machine that will run your app). The ps1 script file may need to request a "developer certificate" from Microsoft before it will install your test app onto your Windows test system, so your test machine may require a network connection to successfully side-load your Windows app.

The side-loading process may not over-write an existing side-loaded app with the same ID. To be sure your test app properly side-loads, it is best to uninstall the old version of your app before side-loading a new version on your test system.

How do I implement local storage or SQL in my app?

See this summary of local storage options for Cordova apps written by Josh Morony, A Summary of Local Storage Options for PhoneGap Applications.

How do I prevent my app from auto-completing passwords?

Use the Ionic Keyboard plugin and set the spellcheck attribute to false.
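A minimal markup sketch (the autocomplete attribute is an additional standard HTML hint, not part of the plugin):

<!-- Ask the webview not to spellcheck or auto-complete this field -->
<input type="password" name="pwd" spellcheck="false" autocomplete="off">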

Why does my PHP script not run in my Intel XDK Cordova app?

Your XDK app is not a page on a web server; you cannot use dynamic web server techniques because there is no web server associated with your app to which you can pass off PHP scripts and similar actions. When you build an Intel XDK app you are building a standalone Cordova client web app, not a dynamic server web app. You need to create a RESTful API on your server that you can then call from your client (the Intel XDK Cordova app) and pass and return data between the client and server through that RESTful API (usually in the form of a JSON payload).
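As a minimal client-side sketch (the endpoint URL and JSON shape are hypothetical; substitute your own server's RESTful API and remember to whitelist its domain, as described in the AJAX question above):

// Query a RESTful endpoint from the Cordova app and parse the JSON payload.
var xhr = new XMLHttpRequest();
xhr.open("GET", "https://example.com/api/items");   // hypothetical endpoint
xhr.onload = function () {
    if (xhr.status === 200) {
        var items = JSON.parse(xhr.responseText);   // server responds with JSON
        console.log(items);
    }
};
xhr.send();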

Please see this StackOverflow post and this article by Ray Camden, a longtime developer of the Cordova development environment and Cordova apps, for some useful background.

Following is a lightly edited recommendation from an Intel XDK user:

I came from php+mysql web development. My first attempt at an Intel XDK Cordova app was to create a set of php files to query the database and give me the JSON. It was a simple job, but totally insecure.

Then I found dreamfactory.com, an open source software that automatically creates the REST API functions from several databases, SQL and NoSQL. I use it a lot. You can start with a free account to develop and test and then install it in your server. Another possibility is phprestsql.sourceforge.net, this is a library that does what I tried to develop by myself. I did not try it, but perhaps it will help you.

And finally, I'm using PouchDB and CouchDB, "a database for the web." It is not SQL, but it is very useful and easy if you need to develop a mobile app with only a few tables. It will also work with a lot of tables, but for a simple database it is an easy place to start.

I strongly recommend that you start to learn these new ways to interact with databases. You will need to invest some time, but it is the way to go. Do not try to use MySQL and PHP the old-fashioned way; you can get it to work, but at some point you may get stuck.

Why doesn’t my Cocos2D game work on iOS?

This is an issue with Cocos2D and is not a reflection of our build system. As an interim solution, we have modified the CCBoot.js file for compatibility with iOS and App Preview. You can view an example of this modification in this CCBoot.js file from the Cocos2d-js 3.1 Scene GUI sample. The update has been applied to all cocos2D templates and samples that ship with Intel XDK. 

The fix involves two line changes (for the generic Cocos2D fix) and one additional line (for it to work in App Preview on iOS devices):

Generic cocos2D fix -

1. Inside the loadTxt function, xhr.onload should be defined as

xhr.onload = function () {
    if (xhr.readyState == 4)
        xhr.responseText != "" ? cb(null, xhr.responseText) : cb(errInfo);
};

instead of

xhr.onload = function () {
    if (xhr.readyState == 4)
        xhr.status == 200 ? cb(null, xhr.responseText) : cb(errInfo);
};

2. The condition inside the _loadTxtSync function should be changed to

if (!xhr.readyState == 4 || (xhr.status != 200 || xhr.responseText != "")) {

instead of 

if (!xhr.readyState == 4 || xhr.status != 200) {

 

App Preview fix -

Add this line inside the _loadTxtSync function, after the xhr.open call:

xhr.setRequestHeader("iap_isSyncXHR", "true");

How do I change the alias of my Intel XDK Android keystore certificate?

If you have used a non-ASCII character in your Android app keystore (certificate), especially when you converted your legacy keystore to the new format used by version 3088 of the Intel XDK, you may be having trouble with failing builds or builds that do not work. You cannot change the alias within the Intel XDK, but you can download the existing keystore with the problem alias, change the alias on that keystore, and upload a new copy of the same keystore with a new alias.

Use the following procedure:

  • Download the converted legacy keystore from the Intel XDK (the one with the bad alias).
  • Locate the keytool app on your system (this assumes that you have a Java runtime installed on your system). On Windows, this is likely to be located at %ProgramFiles%\Java\jre8\bin (you might have to adjust the value of jre8 in the path to match the version of Java installed on your system). On Mac and Linux systems it is probably located in your path (in /usr/bin).
  • Change the alias of the keystore using this command:
    keytool -changealias -alias "existing-alias" -destalias "new-alias" -keypass keypass -keystore /path/to/keystore -storepass storepass

    See the keytool -changealias -help command for additional details.
  • Import this new keystore into the Intel XDK using the "Import Existing Keystore" option in the "Developer Certificates" section of the "person icon" located in the upper right corner of the Intel XDK.

What causes "The connection to the server was unsuccessful. (file:///android_asset/www/index.html)" error?

See this forum thread for some help with this issue. This error is most likely due to errors retrieving assets over the network or long delays associated with retrieving those assets.

How do I manually sign my Android or Crosswalk APK file with the Intel XDK?

To sign an app manually, you must build your app by "deselecting" the "Signed" box in the Build Settings section of the Android tab on the Projects tab:

Follow these Android developer instructions to manually sign your app. The instructions assume you have Java installed on your system (for the jarsigner and keytool utilities). You may have to locate and install the zipalign tool separately (it is not part of Java) or download and install Android Studio.
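A typical manual signing sequence looks like this sketch (the keystore, alias, and APK names are placeholders; zipalign ships with the Android SDK build-tools):

jarsigner -verbose -sigalg SHA1withRSA -digestalg SHA1 -keystore my-release-key.keystore my_app-release-unsigned.apk my_alias
zipalign -v 4 my_app-release-unsigned.apk my_app-release.apk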

These two sections of the Android developer Signing Your Applications article are also worth reading.

 


New Comprimato JPEG2000 Codec Now Native for Intel® Media Server Studio


New Plugin Delivers High Quality Imagery with Low Latency

Comprimato has been working with Intel on providing the best video encoding technology as part of Intel® Media Server Studio, which provides an Intel® Media SDK, runtimes, graphics drivers, and advanced analysis tools to help video solution providers deliver fast, high-density media transcoding.

Headquartered in the Czech Republic and with a base in San Francisco, Comprimato now provides a plug-in for the software that delivers high-quality, low-latency JPEG2000 encoding. The result is a powerful encoding option for users of Media Server Studio: they can transcode JPEG2000 contained in IMF, AS02, or MXF OP1a files to distribution formats like AVC/H.264 and HEVC/H.265, and enable software-defined processing of IP video streams in broadcast applications. The Comprimato plugin, together with Media Server Studio, uses the massive power of Intel graphics processors (GPUs) for fast encoding.

JPEG2000 is an excellent codec choice for creating high-quality media applications. Because it uses wavelet compression, it delivers strong image quality which, even under extreme bandwidth pressure, degrades gracefully with just a very subtle softening of the image. In general use, it is regarded as an excellent choice for high-performance video applications like digital cinema, video production and archiving, or, for instance, contribution (backhaul) links between remote broadcasts and the studio.

JPEG2000 is not just for the video standards of today, but also for emerging formats. JPEG2000 was designed with an emphasis on future applications, allowing for virtually unlimited resolutions (the specification allows up to 2^32-1 x 2^32-1 pixels per image frame), so handling 4K or 8K Ultra HD without tiling is a piece of cake. With Comprimato's codec technology, JPEG2000 users can handle high frame rates, including 60 FPS and 120 FPS, and extended bit depths for high-dynamic-range (HDR) video.

Comprimato has developed new ways of implementing JPEG2000 by making innovative use of the multiple computing cores available in today's graphics processing units (GPUs), such as those in 5th and 6th generation Intel® processors. By using Intel Media Server Studio to access hardware acceleration and programmable graphics in Intel GPUs, encoding can run extremely fast. This is a vital benefit because fast media processing significantly reduces latency in the connection, which is particularly important in live broadcasting.

Comprimato also integrated its ground-breaking multi-threading codec technology into its JPEG2000 plug-in, which works with Intel Media Server Studio's Media SDK, so developers using the software package to build media applications have simple access to this encoding technology. System architects can tap into the Comprimato compression to build highly robust, compliant, and extremely fast encoders in their media transcoding solutions.

The plug-in is available now; more information can be requested by contacting Comprimato at www.comprimato.com.
 

[Image: Intel NAB booth]

Special Events at NAB Show

More information about the Comprimato JPEG2000 Plug-In is also available at NAB Show, the world’s largest media and broadcasting event at the Las Vegas Convention Center, April 18–21 at the Intel booth, SU621 (south upper hall). Comprimato's booth is at SU13305.

  • If you can't be at NAB, listen to a special Intel Chip Chat interview at NAB Show with Jiri Matela, Comprimato CEO, on Monday, April 18 - available after 3 p.m. at this link.
  • Tuesday, 4/19, 1-3 p.m. - Comprimato's Matus Madzin will be in the Intel booth (SU621) to answer questions about the JPEG2000 plugin.

Intel® 17.0 compilers for OS X* Xcode* integration now supports all installed updates


The Intel® 17.0 C++ and Fortran compilers for OS X* now support all updates within a major version of the compiler in the Xcode* IDE integration. This makes it possible to switch between all installed versions of the compiler, including major versions, updates, and betas, in the Xcode* IDE. There is also an option to select the 'Latest Release'.

Live Webinar: Boost Python* Performance with Intel® Math Kernel Library

Python* is a popular open-source scripting language known for its easy-to-learn syntax and active developer community. Performance, however, remains a key drawback, due to Python being an interpreted language and to its global interpreter lock (GIL).

This joint webinar with Continuum Analytics explores tools and techniques that boost the performance of your Python applications and that can be easily accessed by all Python developers and users.

The talk will cover accelerating numerical computations in NumPy*, SciPy*, and scikit-learn with native libraries like Intel® MKL and Intel® DAAL, and show how to write high-performance Python with JIT compilers like Numba. Intel® Distribution for Python* is an easy-to-install, optimized Python distribution that includes the popular NumPy* and SciPy* stack packages used for scientific, engineering, and data analysis. It tunes and leverages the powerful Intel® Math Kernel Library to offer significant performance gains, enhancing the performance profile of your application.

For example, DGEMM functions deliver 3x speed-ups on a single core and show impressive scalability on multiple cores. The easy, out-of-the-box installation saves you time and effort, so even a novice Python user can focus on the application at hand, rather than setting up the Python infrastructure.

 

When: Tue, May 3, 2016 9:00 AM - 10:00 AM PDT

Register Here>

Live Webinar: Performance Analysis of Python* Applications with Intel® VTune™ Amplifier

Efficient profiling techniques can help dramatically improve the performance of your Python* code by detecting time, CPU, and memory bottlenecks. This session discusses the need for, advantages of, and common tools and techniques for profiling Python applications, followed by a demo of Intel® VTune™ Amplifier and its capabilities to profile both pure Python code and code heavily relying on C extensions.

 

When: Tue, May 24, 2016 9:00 AM - 10:00 AM PDT

Register Here>

Open Source Project: Intel Data Analytics Acceleration Library (DAAL)


We have created a data analytics acceleration project on GitHub to help accelerate data analytics applications. We have placed the Intel® Data Analytics Acceleration Library (Intel® DAAL), the high-performance analytics (for "Big Data") library for x86 and x86-64, into open source to create this project.

Intel DAAL helps accelerate big data analytics by providing highly optimized algorithmic building blocks for all data analysis stages (preprocessing, transformation, analysis, modeling, validation, and decision making) and for batch, online, and distributed processing modes of computation. It's designed for use with popular data platforms including Hadoop*, Spark*, R, and Matlab* for highly efficient data access. Intel DAAL is available for Linux*, OS X*, and Windows*, and is licensed under the Apache 2.0 license. The DAAL project is available on GitHub for download, feedback, and contributions.

Intel DAAL has benefited from customer feedback since its initial release in 2015. Following a year of intense feedback and additional development as a full product, we are excited to introduce it as a very solid open source project ready for use and participation. Intel DAAL remains an integral part of Intel's software developer tools and is backed by Intel with support and future development investments.




ACCELERATE DATA ANALYTICS

The Intel Data Analytics Acceleration Library (Intel DAAL) is a library delivering high-performance machine learning and data analytics algorithms. Intel DAAL is an essential component of Intel's overall machine learning solution, including the Intel® Xeon® Processor E7 Family, the Trusted Analytics Platform, and Intel® Xeon Phi™ Processors (Knights Landing). Intel DAAL works with a wide selection of data platforms and programming languages including Hadoop, Spark, Python, Java, and C++. Intel DAAL was first released in 2015 without source code to give us time to evolve some interfaces on our path to open sourcing this year. We appreciate the many users who have given feedback and encouraged us to get where we are today.

Previous versions of Intel DAAL required separate installation of the Intel Math Kernel Library (Intel MKL) and Intel Integrated Performance Primitives (Intel IPP). The latest version of Intel DAAL comes with the necessary binary parts of Intel MKL (for BLAS and LAPACK) as well as Intel IPP (compression and decompression), so the tremendous performance of these key routines is available automatically, with no additional downloads needed. To make the most of multicore and many-core parallelism, and for superior threading interoperability, the threading in Intel DAAL relies on the open source project known as "TBB" (Intel Threading Building Blocks).


EXPERIENCE PERFORMANCE

In the exciting and rapidly evolving data analytics market, this key Intel performance library can really boost performance. At the Intel Developer Forum in 2015, Capital One discussed significant acceleration (over 200X - see slide 26) as an early user of Intel DAAL. We've seen numerous examples across many industries, in the product's first year, of substantial performance improvements using Intel DAAL - it is definitely worth a try!

Many more details about the product are available on the product page, including some benchmarking data related to the potential performance gains when using DAAL.


SPEEDING TOWARD 2017 - JOIN US!

DAAL is currently speeding toward a "2017" release (expected in late Q3 2016) in conjunction with Intel's award-winning Intel Parallel Studio suite of developer tools. Precompiled binaries with installers are available for free as part of the beta program. Registration for the beta is available at tinyurl.com/ipsbeta2017.

The open source project feeds the product; there are no features held exclusively for the product version. The only difference when purchased is that Intel's Premier support is included with the product.

Support for all users of Intel DAAL is available online through the Intel DAAL forum.

Interview with Martin Hall (Director of Marketing & Business Development) on Big Data & Analytics at Intel


In this theCUBE interview, Martin Hall, Director of Marketing & Business Development, Big Data & Analytics at Intel, discusses the benefits of open source software and how TAP (the Trusted Analytics Platform) makes it easier for data scientists and developers to deploy big data analytics projects. theCUBE is the leading live internet interview show covering enterprise technology and innovation.

View complete interview (YouTube)
