
Using the Intel® RealSense™ Camera with TouchDesigner*: Part 1

Download Demo Files ZIP 35KB

TouchDesigner*, created by Derivative, is a popular platform/program used worldwide for interactivity and real-time animations during live performances, as well as for rendering 3D animation sequences, building projection mapping, installations, and, recently, VR work. The support of the Intel® RealSense™ camera in TouchDesigner makes it an even more versatile and powerful tool. Also useful is the ability to import objects and animations into TouchDesigner from other 3D packages using .fbx files, as well as to bring in rendered animations and images.

In this two-part article I explain how the Intel RealSense camera is integrated into TouchDesigner and how it can be used. The demos in Part 1 use the Intel RealSense camera TOP node. The demos in Part 2 use the CHOP node. In Part 2, I also explain how to create VR and full-dome sequences in combination with the Intel RealSense camera, and I show how TouchDesigner's Oculus Rift node can be used in conjunction with it. Parts 1 and 2 both include animations and downloadable TouchDesigner (.toe) files that you can use to follow along; to get them, click the button at the top of the article. In addition, a free noncommercial copy of TouchDesigner is available that is fully functional, except that the highest resolution is limited to 1280 by 1280.

Note: There are currently two types of Intel RealSense cameras, the short range F200, and the longer-range R200. The R200 with its tiny size is useful for live performances and installations where a hidden camera is desirable. Unlike the larger F200 model, the R200 does not have finger/hand tracking and doesn’t support "Marker Tracking." TouchDesigner supports both the F200 and the R200 Intel RealSense cameras.

To quote from the TouchDesigner web page, "TouchDesigner is a revolutionary software platform which enables artists and designers to connect with their media in an open and freeform environment. Perfect for interactive multimedia projects that use video, audio, 3D, controller inputs, internet and database data, DMX lighting, environmental sensors, or basically anything you can imagine, TouchDesigner offers a high performance playground for blending these elements in infinitely customizable ways."

I asked Malcolm Bechard, senior developer at Derivative, to comment on using the Intel RealSense camera with TouchDesigner:

"Using TouchDesigner’s procedural node-based architecture, Intel RealSense camera data can be immediately brought in, visualized, and then connected to other nodes without spending any time coding. Ideas can be quickly prototyped and developed with an instant-feedback loop.Being a native node in TouchDesigner means there is no need to shutdown/recompile an application for each iteration of development.The Intel RealSense camera augments TouchDesigner capabilities by giving the users a large array of pre-made modules such as gesture, hand tracking, face tracking and image (depth) data, with which they can build interactions. There is no need to infer things such as gestures by analyzing the lower-level hand data; it’s already done for the user."

Using the Intel® RealSense™ Camera in TouchDesigner

TouchDesigner is a node-based platform/program that uses Python* as its main scripting language. There are six distinct categories of nodes that perform different operations and functions: TOP nodes (textures), SOP nodes (geometry), CHOP nodes (animation/audio data), DAT nodes (tables and text), COMP nodes (3D geometry nodes and nodes for building 2D control panels), and MAT nodes (materials). The programmers at Derivative, consulting with Intel programmers, designed two special nodes to integrate the Intel RealSense camera into the program: the Intel RealSense camera TOP node and the Intel RealSense camera CHOP node.
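For readers new to scripting in TouchDesigner, here is a minimal sketch of the Python conventions that appear later in this article. It uses only the basics of TouchDesigner's Python API (the op() lookup and the .par parameter accessor); the node name realsense1 is a placeholder, not part of any demo file.

    # Minimal sketch: looking up operators and parameters in TouchDesigner.
    rs = op('/project1/realsense1')   # an Intel RealSense camera TOP, by path
    print(rs.family)                  # 'TOP' -- one of the six node families
    print(rs.width, rs.height)        # resolution of the image this TOP holds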

Note: This article is aimed at those familiar with using TouchDesigner and its interface. If you are unfamiliar with TouchDesigner and plan to follow along with this article step-by-step, I recommend that you first review some of the documentation and videos available here:

Learning TouchDesigner

Note: When using the Intel RealSense camera, it is important to pay attention to its range for best results. On this Intel web page you will find the range of each camera and best operating practices for using it.

Intel RealSense Camera TOP Node

The TOP nodes in TouchDesigner perform many of the same operations found in a traditional compositing program. The Intel RealSense camera TOP node adds to these capabilities by making the 2D and 3D data streams from the Intel RealSense camera available. The Intel RealSense camera TOP node has a number of setup settings to acquire different forms of data.

  • Color. The video from the Intel RealSense camera color sensor.
  • Depth. A calculation of the depth of each pixel. 0 means the pixel is 0 meters from the camera, and 1 means the pixel is the maximum distance or more from the camera.
  • Raw depth. Values taken directly from the Intel® RealSense™ SDK. Once again, 0 means the pixel is 0 meters from the camera, and 1 means it is at the maximum range or more away from the camera.
  • Visualized depth. A gray-scale image from the Intel RealSense SDK that can help you visualize the depth. It cannot be used to actually determine a pixel’s exact distance from the camera.
  • Depth to color UV map. The UV values from a 32-bit floating-point RG texture (note: no blue) that are needed to remap the depth image to line up with the color image. You can use the Remap TOP node to align the two images.
  • Color to depth UV map. The UV values from a 32-bit floating-point RG texture (note: no blue) that are needed to remap the color image to line up with the depth image. You can use the Remap TOP node to align the two.
  • Infrared. The raw video from the infrared sensor of the Intel RealSense camera.
  • Point cloud. Literally a cloud of points in 3D space (x, y, and z coordinates) or data points created by the scanner of the Intel RealSense camera.
  • Point cloud color UVs. Can be used to get each point’s color from the color image stream.
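The Image setting can also be switched from Python, and TOP pixels can be read back with the sample() method, which is handy for checking depth values while prototyping. This is a hedged sketch: the image parameter name and its menu values ('depth', and so on) are assumptions, not taken from the article.

    # Hedged sketch: switching image modes and sampling a depth pixel.
    rs = op('realsense1')
    rs.par.image = 'depth'               # assumption: parameter/menu names
    # Sample the normalized (0-1) depth at the center of the frame;
    # sample() returns an rgba color, so take the first component.
    depth = rs.sample(u=0.5, v=0.5)[0]
    print('normalized depth at center:', depth)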

Note: You can download the .toe file RealSensePointCloudForArticle.toe to use as a simple starting template for creating 3D animated geometry from the data of the Intel RealSense camera. This file can be modified and changed in many ways. Together, three Intel RealSense camera TOP nodes—set to Point Cloud, Color, and Point Cloud Color UVs—can create a 3D geometry composed of points (particles) with the color image mapped onto it. This creates many exciting possibilities.


Point Cloud Geometry. This is an animated geometry made using the Intel RealSense camera. This technique would be exciting to use in a live performance. The audio of the character speaking could be added as well. TouchDesigner can also use the data from audio to create real-time animations.

Intel RealSense Camera CHOP Node

Note: There is also an Intel RealSense camera CHOP node, which handles the 3D tracking/position data; we will discuss it in Part 2 of this article.

Demo 1: Using the Intel RealSense Camera TOP Node

Click on the button on top of the article to get the First TOP Demo: settingUpRealNode2b_FINAL.toe

Demo 1, part 1: You will learn how to set up the Intel RealSense camera TOP node and then connect it to other TOP nodes.

  1. Open the Add Operator/OP Create dialog window.
  2. Under the TOP section, click RealSense.
  3. On the Setup parameters page for the Intel RealSense camera TOP node, for Image select Color from the drop-down menu. In the Intel RealSense camera TOP node, the image of what the camera is pointing to shows up, just as in a video camera.
  4. Set the resolution of the Intel RealSense Camera to 1920 by 1080.
     


    The Intel RealSense camera TOP node is easy to set up.

  5. Create a Level TOP and connect it to the Intel RealSense camera TOP node.
  6. In the Pre parameters page of the Level TOP node, set the Invert slider to 1.
  7. Connect the Level TOP node to an HSV To RGB TOP node, and then connect that to a Null TOP node. (A scripted version of these steps appears after the figure below.)


The Intel RealSense camera TOP node can be connected to other TOP nodes to create different looks and effects.
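If you prefer scripting to wiring nodes by hand, the same network can be built with Python. This is a sketch under stated assumptions: the type objects (realsenseTOP, hsvtorgbTOP, and so on) and the image/invert parameter names follow TouchDesigner's usual conventions but are not confirmed by the article.

    # Hedged sketch: Demo 1, part 1, built in Python.
    project = op('/project1')
    rs = project.create(realsenseTOP, 'realsense1')
    rs.par.image = 'color'        # Setup page: Image = Color (assumed name)
    level = project.create(levelTOP, 'level1')
    level.par.invert = 1          # Pre page: Invert = 1 (assumed name)
    hsv = project.create(hsvtorgbTOP, 'hsvtorgb1')
    null = project.create(nullTOP, 'null1')
    # Chain: realsense1 -> level1 -> hsvtorgb1 -> null1
    level.inputConnectors[0].connect(rs)
    hsv.inputConnectors[0].connect(level)
    null.inputConnectors[0].connect(hsv)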

Next we will put this created image into the Phong MAT (Material) so we can texture geometries with it.

Using the Intel RealSense Camera Data to Create Textures for Geometries

Demo 1, part 2: This exercise shows you how to use the Intel RealSense camera TOP node to create textures and how to add them into a MAT node that can then be assigned to the geometry in your project.

  1. Add a Geometry (geo) COMP node into your scene.
  2. Add a Phong MAT node.
  3. Take the Null TOP node and drag it onto the Color Map parameter of your Phong MAT node.
     


    The Phong MAT using the Intel RealSense camera data for its Color Map parameter.

  4. On the Render parameter page of your Geo COMP, for the Material parameter, type phong1 to make it use the phong1 node as its material. (A scripted equivalent appears after the figure below.)
     


    The Phong MAT using the Intel RealSense camera data for its Color Map added into the Render/Material parameter of the Geo COMP node.
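The same two assignments can be made in one line each from Python. A hedged sketch, assuming the parameters are internally named colormap and material:

    # Hedged sketch: texture and material assignment (parameter names assumed).
    op('phong1').par.colormap = 'null1'   # Color Map on the Phong MAT
    op('geo1').par.material = 'phong1'    # Material on the Geo COMP's Render page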

Creating the Box SOP and Texturing it with the Just Created Phong Shader

Demo 1, part 3: You will learn how to assign the Phong MAT shader you created using the Intel RealSense camera data to a box Geometry SOP.

  1. Go inside the geo1 node to its child level (/project1/geo1).
  2. Create a Box SOP node, a Texture SOP node, and a Material SOP node.
  3. Delete the Torus SOP node that was there and connect the box1 node to the texture1 node and the material1 node.
  4. In the Material parameter of the material1 node enter ../phong1, which refers it to the phong1 MAT node you created at the parent level. (A short note on relative paths in Python appears after the figure below.)
  5. To put the texture on each face of the box, in the texture1 node's parameters set Texture Type to Face, and set the Offset to .5 .5 .5.
     


    At the child level of the geo1 COMP node, the Box SOP node, the Texture SOP node, and the Material SOP node are connected. The Material SOP is now getting its texture info from the phong1 MAT node, which is at the parent level (../phong1).
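Relative paths such as ../phong1 behave the same way in Python, where '..' also refers to the parent component. A short sketch, with the material parameter name an assumption:

    # Hedged sketch: from inside /project1/geo1, '..' is the parent level.
    op('material1').par.material = '../phong1'   # assumed parameter name
    mat = parent().op('phong1')                  # the same MAT, found via Python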

Animating and Instancing the Box Geometry

Demo 1, part 4: You will learn how to rotate a Geometry SOP using the Transform SOP node and a simple expression. Then you will learn how to instance the Box geometry. We will end up with a screen full of rotating boxes with the textures from the Intel RealSense camera TOP node on them.

  1. To animate the box rotating on the x-axis, insert a Transform SOP node after the Texture SOP node.
  2. Put an expression into the x component (first field) of the Rotate parameter in the transform1 SOP node. This expression is not frame-dependent, so it will keep going rather than start repeating when the frames on the timeline run out. I multiplied by 10 to increase the speed: absTime.seconds*10 (a scripted version of this part of the demo appears after step 9 below).
     


    Here you can see how the cube is rotating.

  3. To make the boxes, go up to the parent level (/project1), and on the Instance page of the geo1 COMP node's parameters, turn Instancing On.
  4. Add a Grid SOP node and a SOP to DAT node.
  5. Set the grid parameters to 10 Rows and 10 Columns and the size to 20 and 20.
  6. In the SOP to DAT node parameters, for SOP put grid1 and make sure Extract is set to Points.
  7. In the Instance page parameters of the geo1 COMP, for Instance CHOP/DAT enter: sopto1.
  8. Fill in the TX, TY, and TZ parameters with P(0), P(1), and P(2) respectively to specify which columns from the sopto1 node to use for the instance positions.
     


    Click on the button on top of the article to download this .toe file to see what we have done so far in this first Intel RealSense camera TOP demo.

    TOP_Demo1_forArticle.toe

  9. If you prefer to see the image from the Intel RealSense camera unfiltered, disconnect or bypass the Level TOP node and the HSV to RGB TOP node.
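Here is the scripted sketch promised in step 2: the rotation expression and the instancing setup from this part of the demo, expressed in Python. The parameter names (rx, instanceop, instancetx, and so on) are assumptions based on TouchDesigner's usual naming, not confirmed by the article.

    # Hedged sketch: Demo 1, part 4, as Python (parameter names assumed).
    # The rotation expression from step 2, set in expression mode:
    op('geo1/transform1').par.rx.expr = 'absTime.seconds*10'

    # Instancing from the grid's point positions (steps 3-8):
    geo = op('geo1')
    geo.par.instancing = True
    geo.par.instanceop = 'sopto1'   # the SOP to DAT node
    geo.par.instancetx = 'P(0)'     # X positions
    geo.par.instancety = 'P(1)'     # Y positions
    geo.par.instancetz = 'P(2)'     # Z positions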
     

Rendering or Performing the Animation Live

Demo 1, part 5: You will learn how to set up a scene to be rendered and either performed live or rendered out as a movie file.

  1. To render the project, add in a Camera COMP node, a Light COMP node, and a Render TOP node. By default the camera will render all the Geometry components in the scene.
  2. Translate your camera about 20 units back on the z-axis. Leave the light at the default setting.
  3. Set the resolution of the render to 1920 by 1080. By default the background of a render is transparent (alpha of 0).
  4. To make it opaque black behind the squares, add in a Constant TOP node and change the Color to 0,0,0 so it is black, while leaving the Alpha as 1. You can choose another color if you want.
  5. Add in an Over TOP node and connect the Render TOP node to the first input and the Constant TOP node to the second input. This makes the background pixels of the render (0, 0, 0, 1), so the background is no longer transparent.

Another way to change the alpha of a TOP to 1 is to use a Reorder TOP and set its Output Alpha parameter to Input 1 and One.
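For completeness, here is a hedged Python sketch of the render chain from steps 1–5. The type objects and parameter names (tz, colorr, alpha) are assumptions following TouchDesigner's usual conventions.

    # Hedged sketch: render + opaque-black composite (names assumed).
    project = op('/project1')
    cam = project.create(cameraCOMP, 'cam1')
    cam.par.tz = 20                        # pull the camera back 20 units
    project.create(lightCOMP, 'light1')    # default light

    render = project.create(renderTOP, 'render1')
    const = project.create(constantTOP, 'constant1')
    const.par.colorr = 0                   # black...
    const.par.colorg = 0
    const.par.colorb = 0
    const.par.alpha = 1                    # ...but fully opaque

    over = project.create(overTOP, 'over1')
    over.inputConnectors[0].connect(render)   # foreground: the render
    over.inputConnectors[1].connect(const)    # background: opaque black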


The rendered scene with the background set to opaque black.


Here you can see the screen full of the textured rotating cubes.

If you prefer to render out the animation instead of playing it in real time in a performance, choose Export Movie from the File menu in the top bar of the TouchDesigner program. In the TOP Video parameter, enter null2 for this particular example; otherwise, enter whichever TOP node you want to render.


Here is the Export Movie panel, and null2 has been pulled into it. If I had an audio CHOP to go along with it, I would pull or place that into the CHOP Audio slot directly under where I put null2.

Demo 1, part 6: One of the things that makes TouchDesigner a special platform is the ability to do real-time performance animations with it. This makes it especially good when paired with the Intel RealSense Camera.

  1. Add a Window COMP node and in the operator parameter enter your null2 TOP node.
  2. Set the resolution to 1920 by 1080.
  3. Choose the monitor you want in the Location parameter. The Window COMP node lets you play the entire animation in real time on the monitor or projector you specify.
     


    You can create as many Window COMP nodes as you need to direct the output to other monitors.
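A hedged sketch of the same Window COMP setup in Python; the winop, monitor, and winopen parameter names are assumptions:

    # Hedged sketch: opening the output on a chosen monitor (names assumed).
    win = op('/project1').create(windowCOMP, 'window1')
    win.par.winop = 'null2'     # the TOP to display
    win.par.monitor = 1         # index of the target monitor
    win.par.winopen.pulse()     # open the window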

Demo 2: Using the Intel RealSense Camera TOP Node Depth Data

The Intel RealSense camera TOP node has a number of other settings that are useful for creating textures and animation.

In demo 2, we use the depth data to apply a blur on an image based on depth data from the camera. Click on the button on top of the article to get this file: RealSenseDepthBlur.toe

First, create an Intel RealSense camera TOP and set its Image parameter to Depth. The depth image has pixels that are 0 (black) if they are close to the camera and 1 (white) if they are far away from it. The range of the pixel values is controlled by the Max Depth parameter, which is specified in meters. By default it has a value of 5, which means pixels 5 or more meters from the camera will be white; a pixel with a value of 0.5 will be 2.5 meters from the camera. Depending on how far the camera is from you, changing this value to something smaller may work better. For this example we've changed it to 1.5 meters.
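The mapping from pixel values to meters is a simple scale by Max Depth. A minimal sketch using the numbers from this demo:

    # Minimal sketch: normalized depth value -> meters, scaled by Max Depth.
    def depth_to_meters(pixel_value, max_depth=5.0):
        # pixel_value is 0.0 at the camera and 1.0 at max_depth or beyond.
        return pixel_value * max_depth

    print(depth_to_meters(0.5))        # 2.5 m with the default Max Depth of 5
    print(depth_to_meters(0.8, 1.5))   # 1.2 m, the threshold used below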

Next we want to process the depth a bit to remove objects outside our range of interest, which we will do using a Threshold TOP.

  1. Create a Threshold TOP and connect it to the realsense1 node. We want to cull out pixels that are beyond a certain distance from the camera, so set the Comparator parameter to Greater and the Threshold parameter to 0.8. This makes pixels that are greater than 0.8 (1.2 meters or more, with Max Depth set to 1.5 in the Intel RealSense camera TOP) become 0 and all other pixels become 1.
     

  2. Create a Multiply TOP and connect the realsense1 node to the first input and the thresh1 node to the second input. Multiplying the pixels we want to keep by 1 leaves them as-is, and multiplying the others by 0 makes them black. The multiply1 node now has nonzero values only in the part of the image that will control the blur we do next.
  3. Create a Movie File In TOP, and select a new image for its File parameter. In this example we select Metter2.jpg from the TouchDesigner Samples/Map directory.
  4. Create a Luma Blur TOP and connect moviefilein1 to the 1st input of lumablur1 and multiply1 to the 2nd input of lumablur1.
  5. In the parameters for lumablur1, set White Value to 0.4, Black Filter Width to 20, and White Filter Width to 1. This makes pixels where the first input is 0 have a blur filter width of 20, and pixels with a value of 0.4 or greater have a blur width of 1.
     


    The whole layout.

The result is an image in which the pixels where the user is located stay sharp, while the other pixels are blurred.
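One way to picture what the Luma Blur settings do is a clamped interpolation of filter width against the control (luma) input. This sketch assumes a linear ramp between the Black and White Filter Widths, which the article does not spell out:

    # Hedged sketch: blur width as a function of the control (luma) input,
    # assuming a linear ramp up to the White Value.
    def blur_width(luma, black_width=20.0, white_width=1.0, white_value=0.4):
        t = min(luma / white_value, 1.0)   # clamp at the White Value
        return black_width + t * (white_width - black_width)

    print(blur_width(0.0))   # 20.0 -> background pixels, heavily blurred
    print(blur_width(0.4))   # 1.0  -> the user's pixels, essentially sharp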


Displaying the Luma Blur TOP shows how the background of the image is blurred.

Demo 3: Using the Intel RealSense Camera TOP Node Depth Data with the Remap TOP Node

Click on the button at the top of the article to get this file: RealSenseRemap.toe

Note: The depth and color sensors of the Intel RealSense camera are in different physical positions, so their resulting images by default do not line up. For example, if your hand is positioned in the middle of the color image, it won't be in the middle of the depth image; it will be off a bit to the left or right. The UV remap fixes this by shifting the pixels around so they align on top of each other. Notice the difference between the aligned and unaligned TOPs.
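A hedged sketch of the remap wiring: it assumes two RealSense TOPs (one set to Depth, one set to the Depth to Color UV map), that the Remap TOP takes the image on its first input and the UV map on its second, and that the menu value is named 'depthtocoloruv'.

    # Hedged sketch: aligning depth to color with a Remap TOP (names assumed).
    rs_depth = op('realsense1')
    rs_depth.par.image = 'depth'
    rs_uv = op('realsense2')
    rs_uv.par.image = 'depthtocoloruv'    # the Depth to Color UV map

    remap = op('remap1')
    remap.inputConnectors[0].connect(rs_depth)   # image to shift
    remap.inputConnectors[1].connect(rs_uv)      # UV map that shifts it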


The Remap TOP aligns the depth data from the Intel RealSense camera TOP with the color data from the Intel RealSense camera TOP, using the depth to color UV data, putting them in the same world space.

Demo 4: Using Point Cloud in the Intel RealSense Camera TOP Node

Click on the button on top of the article to get this file: PointCloudLimitEx.toe

In this exercise you learn how to create animated geometry using the Intel RealSense camera TOP node's Point Cloud setting and the Limit SOP node. Note that this technique is different from the point cloud example file shown at the beginning of this article. The previous example uses GLSL shaders, which makes it possible to generate far more points, but it is more complex and out of the scope of this article.

  1. Create a RealSense TOP node and set the parameter Image to Point Cloud.
  2. Create a TOP to CHOP node and connect it to a Select CHOP node.
  3. Connect the Select CHOP node to a Math CHOP node.
  4. In the topto1 CHOP node parameter, TOP, enter: realsense1.
  5. In the Select CHOP node parameters, Channel Names, enter r g b leaving a space between the letters.
  6. In the math1 CHOP node for the Multiply parameter, enter: 4.2.
  7. On the Range parameters page, for To Range, enter: 1 and 7.
  8. Create a Limit SOP node.

To quote from the information on the www.derivative.ca online wiki page, "The Limit SOP creates geometry from samples fed to it by CHOPs. It creates geometry at every point in the sample. Different types of geometry can be created using the Output Type parameter on the Channels Page."

  1. In the limit1 SOP's Channels parameter page, enter r in the X Channel, g in the Y Channel, and b in the Z Channel.
     

    Note: Switching the r, g, and b channels to different X, Y, or Z channels changes the geometry being generated, so you might want to experiment with this later. In the Output parameter page, for Output Type, select Sphere at Each Point from the drop-down. Create a SOP to DAT node; in its parameters page, for SOP, enter limit1 or drag your limit1 SOP into the parameter, and keep the default setting of Points in the Extract parameter. Create a Render TOP node, a Camera COMP node, and a Light COMP node. Create a Reorder TOP, make Output Alpha be Input 1 and One, and connect it to the Render TOP.
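The parameter entries from this demo can also be made in Python. A hedged sketch: the Math CHOP's Multiply is commonly the gain parameter, while the Limit SOP channel parameter names here are assumptions.

    # Hedged sketch: wiring point-cloud channels into the Limit SOP.
    op('topto1').par.top = 'realsense1'
    op('select1').par.channames = 'r g b'
    op('math1').par.gain = 4.2        # the Multiply parameter

    limit = op('limit1')
    limit.par.xchan = 'r'             # assumed parameter names
    limit.par.ychan = 'g'
    limit.par.zchan = 'b'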


    As the image in the Intel RealSense camera changes, so does the geometry. This is the final layout.


    Final images in the Over TOP node. By changing the order of the channels in the Limit SOP parameters, you change the geometry, which is based on the point cloud.

In Part 2 of this article we will discuss the Intel RealSense camera CHOP and how to create content, both rendered and real-time, for performances, full-dome shows, and VR. We will also show how to use the Oculus Rift CHOP node.

About the Author

Audri Phillips is a visualist/3D animator based in Los Angeles, with a wide range of experience that includes over 25 years working in the visual effects/entertainment industry in studios such as Sony, Rhythm and Hues, Digital Domain, Disney, and DreamWorks Feature Animation. Starting out as a painter, she was quickly drawn to time-based art. Always interested in using new tools, she has been a pioneer of using computer animation/art in experimental film work, including immersive performances. Now she has taken her talents into the creation of VR. Samsung recently curated her work into their new Gear Indie Milk VR channel.

Her latest immersive works/animations include multimedia animations for "Implosion a Dance Festival" (2015) at the Los Angeles Theater Center, and three full-dome concerts in the Vortex Immersion dome, one with the well-known composer/musician Steve Roach. Her fourth full-dome concert, "Relentless Universe," is upcoming on November 7, 2015. She also created animated content for the dome show for the TV series "Constantine," shown at the 2014 Comic-Con convention. Several of her full-dome pieces, "Migrations" and "Relentless Beauty," have been juried into "Currents," the Santa Fe International New Media Festival, and the Jena FullDome Festival in Germany. She exhibits in the Young Projects gallery in Los Angeles.

She writes online content and a blog for Intel. Audri is an Adjunct professor at Woodbury University, a founding member and leader of the Los Angeles Abstract Film Group, founder of the Hybrid Reality Studio (dedicated to creating VR content), a board member of the Iota Center, and she is also an exhibiting member of the LA Art Lab. In 2011 Audri became a resident artist of Vortex Immersion Media and the c3: CreateLAB.

