Download Demo Files (ZIP, 14 KB)
The Intel® RealSense™ camera is a vital tool for creating VR and AR projects. Part 2 of this article lays out how to use the Intel RealSense camera nodes in TouchDesigner to set up renders or real-time projections for multiple screens, a single screen, 180-degree (FullDome) displays, and 360-degree VR. In addition, the Intel RealSense™ camera data can be sent to an Oculus Rift* through the TouchDesigner Oculus Rift TOP node.
Part 2 will focus on the RealSense CHOP node in TouchDesigner.
The RealSense CHOP node in TouchDesigner is where the powerful tracking features of the RealSense F200 and R200 cameras, such as eye tracking, finger tracking, and face tracking, can be accessed. These tracking features are especially exciting for setting up real-time animations and/or tying those animations to the body and gesture movements of performers. I find this particularly useful for live performances by dancers or musicians where I want a high level of interactivity between live video, animations, graphics, sound, and the performers.
To get the TouchDesigner (.toe) files that go with this article, click the button at the top of the page. A free non-commercial copy of TouchDesigner is also available; it is fully functional except that the highest resolution is limited to 1280 x 1280.
Once again it is worth noting that the support of the Intel RealSense camera in TouchDesigner makes it an even more versatile and powerful tool.
Note: Like Part 1 of this article, Part 2 is aimed at those familiar with using TouchDesigner and its interface. If you are unfamiliar with TouchDesigner and plan to follow along with this article step-by-step, I recommend that you first review some of the documentation and videos available here: Learning TouchDesigner.
Note: When using the Intel RealSense camera, it is important to pay attention to its range for best results. On this Intel web page you will find the range of the camera and best operating practices for using it.
A Bit of Historical Background
All of the data the Intel RealSense cameras can provide is extremely useful for creating VR and AR. Some early attempts at what the Intel RealSense camera can now do took place in the 1980s. Hand-position tracking arrived in that decade in the form of the data glove developed by Jaron Lanier and Thomas G. Zimmerman, and in 1989 the Power Glove brought a consumer wired glove for gaming to the Nintendo Entertainment System.
Historically, the Intel RealSense camera also has roots in performance animation, which uses motion capture technology to translate live motion into usable data, turning a live performance into digital information that can drive a digital one. Motion capture was used as early as the 1970s in research projects at various universities and in military training. One of the first animations to use motion capture data for an animated performance was “Sexy Robot” (https://www.youtube.com/watch?v=eedXpclrKCc) in 1985 by Robert Abel and Associates. Sexy Robot used several techniques to build the digital robot model and then animate it. First, a practical model of the robot was made; it was measured in all dimensions, and those measurements were entered by hand as numbers, something the RealSense camera can now capture by scanning the object. Then, for the motion, dots were painted on a real person and used to make skeleton drawings on the computer, creating a vector animation that was then used to animate the digital model. The RealSense camera is a big improvement on this: its infrared camera and infrared laser projector provide the data from which digital models can be made, as well as the data for tracking motion. The tracking capabilities of the Intel RealSense camera are very refined, making even eye tracking possible.
About the Intel RealSense Cameras
There are currently two types of Intel RealSense cameras that perform many of the same functions with slight variations: The Intel RealSense camera F200, for which the exercises in this article are designed, and the Intel RealSense camera R200.
The Intel RealSense R200 camera, with its tiny size, has many advantages: it is designed to mount on a tripod or sit on the back of a tablet, so the camera is aimed not at the user but at the world, and with its increased scanning range it can cover a larger area. It also has advanced depth-measuring capabilities. The camera will be exciting for augmented reality (AR) because of a feature called Scene Perception, which lets you add virtual objects to a captured world scene; virtual information can also be laid over a live image feed. Unlike the F200 model, the R200 does not have finger and hand tracking and does not support face tracking. TouchDesigner supports both the F200 and the R200 Intel RealSense cameras.
About the Intel RealSense Cameras in TouchDesigner
TouchDesigner is a perfect match for the Intel RealSense camera, which provides a direct interface between the gestures of the user’s face and hands and the software. TouchDesigner can use this position/tracking data directly, along with the depth, color, and infrared data the Intel RealSense camera supplies. The Intel RealSense cameras are very small and light, especially the R200 model, which can easily be placed near performers without being noticed by audience members.
Adam Berg, a research engineer for Leviathan who is working on a project using the Intel RealSense camera in conjunction with TouchDesigner to create interactive installations says: “The small size and uncomplicated design of the camera is well-suited to interactive installations. The lack of an external power supply simplifies the infrastructure requirements, and the small camera is discreet. We've been pleased with the fairly low latency of the depth image as well. TouchDesigner is a great platform to work with, from first prototype to final development. Its built-in support for live cameras, high-performance media playback, and easy shader development made it especially well-suited for this project. And of course the support is fantastic.”
Using the Intel® RealSense™ Camera in TouchDesigner
In Part 2 we focus on the CHOP node in TouchDesigner for the Intel RealSense camera.
RealSense CHOP Node
The RealSense CHOP node carries the 3D tracking/position data. It provides two types of information: (1) Real-world positions, expressed in meters but potentially accurate down to the millimeter, are used for the x, y, and z translations; the x, y, and z rotations are output as Euler angles in degrees. (2) The RealSense CHOP also takes pixel positions from image inputs and converts them to normalized UV coordinates, which is useful for image tracking.
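To make those two kinds of output concrete, here is a minimal plain-Python sketch of the underlying arithmetic; the image size and sample values are hypothetical and stand in for channels the CHOP would supply.

```python
# Minimal sketch of the two kinds of data the RealSense CHOP carries.
# The image size and the sample values below are hypothetical.

def pixel_to_uv(px, py, width, height):
    """Convert a pixel coordinate into normalized UV coordinates (0..1)."""
    return px / width, py / height

# (1) World-space position: meters, used directly for x/y/z translates;
#     rotations arrive separately as Euler angles in degrees.
wrist_world = (0.12, -0.05, 0.48)     # x, y, z in meters from the camera

# (2) Image-space position: a pixel location converted to normalized UV,
#     which is what makes image tracking resolution-independent.
u, v = pixel_to_uv(960, 340, width=1920, height=1080)
print(wrist_world, (round(u, 3), round(v, 3)))   # (0.12, -0.05, 0.48) (0.5, 0.315)
```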
The RealSense CHOP node has two setup settings: finger/face tracking and marker tracking.
- Finger/Face Tracking gives you a list of trackable selections. You can narrow the list to one aspect, and then, by connecting a Select CHOP node to the RealSense CHOP node, narrow the selection even further so that you are tracking only the movement of an eyebrow or an eye.
- Marker Tracking enables you to load an image and track it wherever it appears in the camera’s view.
Using the RealSense CHOP node in TouchDesigner
Demo #1: Using Tracking
This is a simple first demo of the RealSense CHOP node to show how it can be wired/connected to other nodes and used to track and create movement. Once again, please note that these demos require a very basic knowledge of TouchDesigner. If you are unfamiliar with TouchDesigner and plan to follow along with this article step-by-step, I recommend that you first review some of the documentation and videos available here: Learning TouchDesigner. A scripted version of this network appears after the steps below.
- Create the nodes you will need and arrange them in a horizontal row in this order: the Geometry COMP node, the RealSense CHOP node, the Select CHOP node, the Math CHOP node, the Lag CHOP node, the Out CHOP node, and the Trail CHOP node.
- Wire the RealSense CHOP node to the Select CHOP node, the Select CHOP node to the Math CHOP node, the Math CHOP node to the Lag CHOP node, the Lag CHOP node to the Out CHOP Node, and the Out CHOP node to the Trail CHOP node.
- Open the Setup parameters page of the RealSense CHOP node, and make sure the Hands World Position parameter is On. This outputs positions of the tracked hand joints in world space. Values are given in meters relative to the camera.
- In the Select parameters page of the Select CHOP node, set the Channel Names parameter to hand_r/wrist:tx by selecting it from the tracking selections available using the drop-down arrow on the right of the parameter.
- In the Rename From parameter, enter hand_r/wrist:tx, and then in the Rename To parameter, enter x.
- In the Range/To Range parameter of the Math CHOP node, enter 0, 100. For a smaller range of movement, enter a number less than 100.
- Select the Geometry COMP and make sure it is on its Xform parameters page. Press the + button on the bottom right of the Out CHOP node to activate its viewer. Drag the x channel onto the Translate X parameter of the Geometry COMP node and select Export CHOP from the drop-down menu that appears.
To render geometry, you need a Camera COMP node, a Material (MAT) node (I used the Wireframe MAT), a Light COMP node, and a Render TOP node. Add these to render this project.
- In the Camera COMP, on the Xform parameter page set the Translate Z to 10. This gives you a better view of the movement in the geometry you have created as the camera is further back on the z-axis.
- Wave your right wrist back and forth in front of the camera and watch the geometry move in the Render TOP node.
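For reference, the same network can be built from TouchDesigner’s textport with Python. The sketch below uses TouchDesigner’s standard scripting calls (create(), outputConnectors, par), but the RealSense CHOP’s Python type name, its channel names, and the exact parameter names can vary between builds, so treat those identifiers as assumptions and verify them against your version; it also drives Translate X with an expression rather than a dragged export, which has the same effect.

```python
# A sketch only, run from the TouchDesigner textport. Node type and
# parameter names (especially realsenseCHOP and its channels) are
# assumptions to verify in your build.
base = op('/project1')

rs    = base.create(realsenseCHOP, 'realsense1')   # type name is an assumption
sel   = base.create(selectCHOP,    'select1')
math1 = base.create(mathCHOP,      'math1')
lag1  = base.create(lagCHOP,       'lag1')
out1  = base.create(outCHOP,       'out1')
trail = base.create(trailCHOP,     'trail1')
geo   = base.create(geometryCOMP,  'geo1')

# Wire the CHOP chain left to right.
rs.outputConnectors[0].connect(sel)
sel.outputConnectors[0].connect(math1)
math1.outputConnectors[0].connect(lag1)
lag1.outputConnectors[0].connect(out1)
out1.outputConnectors[0].connect(trail)

# Select the right wrist's x translation and rename it to 'x'.
sel.par.channames  = 'hand_r/wrist:tx'
sel.par.renamefrom = 'hand_r/wrist:tx'
sel.par.renameto   = 'x'

# Remap the tracked values into a 0-100 range.
math1.par.torange1 = 0
math1.par.torange2 = 100

# Drive the Geometry COMP's Translate X from the Out CHOP by expression
# (the scripted equivalent of dragging the channel onto the parameter).
geo.par.tx.expr = "op('out1')['x']"
geo.par.tx.mode = ParMode.EXPRESSION
```

You would still add the Camera COMP, Light COMP, Wireframe MAT, and Render TOP, by hand or with the same create() calls, to see the result.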
Demo #2: RealSense CHOP Marker Tracking
In this demo, we use the marker tracking feature in the RealSense CHOP to show how to use an image for tracking. You will need two copies of one image, a printed copy and a digital copy, and they should match exactly. You can either start with a digital file and print a hard copy, or scan a printed image to create the digital version. A scripted sketch of this setup follows the steps below.
- Add a RealSense CHOP node to your scene.
- On the Setup parameters page for the RealSense CHOP node, for Mode select Marker Tracking.
- Create a Movie File In TOP.
- On the Movie File In TOP’s Play parameters page, use the File parameter to load the digital image that matches your printed copy.
- Drag the Movie File In TOP onto the RealSense CHOP node’s Setup parameters page, into the Marker Image TOP slot at the bottom of the page.
- Create a Geometry COMP, a Camera COMP, a Light COMP and a Render TOP.
- As we did in step 7 of Demo #1, export the tx channel from the RealSense CHOP and drag it onto the Translate X parameter of the Geometry COMP.
- Create a Reorder TOP and connect the Render TOP to it. On the Reorder parameters page, set the Output Alpha drop-down to One.
- Position your printed image of the digital file in front of the Intel RealSense Camera and move it. The camera should track the movement and reflect it in the Render TOP. The numbers in the RealSense CHOP will also change.
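If you scripted Demo #1, the marker-tracking variant only needs the image loaded into a Movie File In TOP and pointed at the RealSense CHOP. In the hedged sketch below, the file path is a placeholder and the name of the marker-image parameter is an assumption; on the node itself it is the Marker Image TOP slot at the bottom of the Setup page.

```python
# Sketch of the marker-tracking hookup from the textport. The path is a
# placeholder and the marker-image parameter name is an assumption.
base = op('/project1')

marker = base.create(moviefileinTOP, 'marker1')
marker.par.file = 'C:/demo/marker_image.jpg'   # hypothetical path to your digital image

rs = op('realsense1')                          # RealSense CHOP, Mode set to Marker Tracking
rs.par.markerimage = marker.name               # scripted equivalent of dragging the TOP
                                               # into the Marker Image TOP slot
```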
Eye Tracking in TouchDesigner Using the RealSense CHOP Node
In the TouchDesigner Palette, under RealSense, there is a template called eyeTracking that can be used to track a person’s eye movements. This template uses the RealSense CHOP node’s finger/face tracking and the RealSense TOP node set to Color. In the template, green wireframe rectangles track to the person’s eyes and are composited over the RealSense TOP color image of the person. Any other geometry, particles, and so on could be used instead of the green open rectangles. It is a great template to use.
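The template handles the compositing with TOPs, but the underlying placement is simple: tracked eye positions arrive as normalized UV values, and scaling them by the color image’s resolution gives the pixel location for the overlay rectangle. A plain-Python illustration with hypothetical values:

```python
# Place an overlay at a tracked eye position. The UV values are
# hypothetical stand-ins for channels from the RealSense CHOP.

def uv_to_pixels(u, v, width, height):
    """Map normalized UV tracking values (0..1) to pixel coordinates."""
    return u * width, v * height

left_eye_uv = (0.46, 0.57)    # sample tracked values
x_px, y_px = uv_to_pixels(*left_eye_uv, width=1920, height=1080)
print(x_px, y_px)             # center of the wireframe rectangle, in pixels -> 883.2 615.6
```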
Demo #3, Part 1: Simple ways to set up a FullDome render or a VR render
In this demo we take a file and show how to render it as a 180-degree FullDome render and as a 360-degree VR render. I have already made the file, chopRealSense_FullDome_VR_render.toe, which you can download to see how it is done in detail.
A brief description of how this file was created:
In this file I wanted to place geometries (a sphere, a torus, tubes, and rectangles) in the scene, so I made a number of SOP nodes for these different geometric shapes. Each SOP node was attached to a Transform SOP node to move (translate) the geometries to different places in the scene. All the SOP nodes were wired to one Merge SOP node, and the Merge SOP node was fed into the Geometry COMP.
Next I created a Grid SOP node and a SOP to DAT node. The SOP to DAT node was used to instance the Geometry COMP so that I had more geometries in the scene. I also created a Constant MAT node, made its color green, and turned on the Wire Frame parameter on its Common page.
Next I created a RealSense CHOP node and wired it to a Select CHOP node, where I selected the hand_r/wrist:tx channel to track and renamed it x. I wired the Select CHOP to a Math CHOP so I could change the range, and wired the Math CHOP to a Null CHOP. It is always good practice to end a chain with a Null or Out node so you can more easily insert new filters into the chain later. Next I exported the x channel from the Null CHOP into the Scale X parameter of the Geometry COMP. This controls the x scaling of all the geometry in my scene when I move my right wrist in front of the Intel RealSense camera.
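For reference, the instancing switch and the Scale X export described above can also be set from the textport. This is a sketch under assumptions: the node names (geo1, null1, sopto1) are the defaults I would expect in the demo file, and the instancing parameter names should be checked on the Geometry COMP’s Instance page.

```python
# Sketch of the instancing and Scale X hookup; node and parameter
# names are assumptions to verify against the demo file.
geo  = op('geo1')      # the Geometry COMP
null = op('null1')     # end of the RealSense -> Select -> Math -> Null chain

# Instance the geometry from the SOP to DAT's point table.
geo.par.instancing = True
geo.par.instanceop = 'sopto1'

# Drive Scale X from the tracked, remapped channel (the scripted
# equivalent of exporting the x channel onto the parameter).
geo.par.sx.expr = "op('null1')['x']"
geo.par.sx.mode = ParMode.EXPRESSION
```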
To create a FullDome 180-degree render from the file:
- Create a Render TOP, a Camera COMP, and a Light COMP.
- In the Render TOP’s Render parameters page, select Cube Map in the Render Mode drop-down menu.
- In the Render TOP Common parameters page, set the Resolution to a 1:1 aspect ratio such as 4096 by 4096 for a 4k render.
- Create a Projection TOP node and connect the Render TOP node to it.
- In the Projection TOP Projection parameters page, select Fish-Eye from the Output drop-down menu.
- (This is optional to give your file a black background.) Create a Reorder TOP and in the Reorder parameters page in the right drop-down menu for Output Alpha, select One.
- You are now ready to either perform the animation live or export a movie file. Refer to Part 1 of this article for instructions. You are creating a circular fish-eye dome master animation. It will be a circle within a square.
For an alternative method, go back to step 2 and, instead of selecting Cube Map in the Render Mode drop-down menu, select Fish-Eye(180). Continue with step 3 and optionally step 6, and you are now ready to perform live or export a dome master animation. The fish-eye projection math is sketched below.
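The Projection TOP (or the Fish-Eye(180) render mode) does this conversion for you, but it can help to see the mapping a circular dome master is built on. A plain-Python sketch, assuming an equidistant (angular) fisheye with a 180-degree field of view and a camera looking down the negative z-axis:

```python
import math

def direction_to_fisheye_uv(x, y, z, fov_deg=180.0):
    """Map a unit view direction to UV on a circular fish-eye (dome master)
    image. Returns None for directions outside the field of view."""
    theta = math.acos(max(-1.0, min(1.0, -z)))     # angle away from the view axis
    if theta > math.radians(fov_deg / 2):
        return None
    r = theta / math.radians(fov_deg / 2)          # 0 at the center, 1 at the rim
    phi = math.atan2(y, x)                         # direction around the axis
    return 0.5 + 0.5 * r * math.cos(phi), 0.5 + 0.5 * r * math.sin(phi)

print(direction_to_fisheye_uv(0.0, 0.0, -1.0))     # straight ahead  -> (0.5, 0.5)
print(direction_to_fisheye_uv(1.0, 0.0,  0.0))     # 90 degrees right -> (1.0, 0.5)
```

Everything inside the unit circle holds image data and the corners of the square stay black, which is why the dome master comes out as a circle within a square.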
To create a 360-degree VR render from this file:
- Create a Render TOP, a Camera COMP, and a Light COMP.
- In the Render TOP’s Render parameters page, select Cube Map in the Render Mode drop-down menu.
- In the Render TOP Common parameters page, set the Resolution to a 1:1 aspect ratio such as 4096 by 4096 for a 4k render.
- Create a Projection TOP node, and connect the Render TOP node to it.
- In the Projection TOP Projection parameters page, select Equirectangular from the Output drop-down menu. It will automatically make the aspect ratio 2:1.
- (This is optional to give your file a black background.) Create a Reorder TOP, and in the Reorder parameters page in the right drop-down menu for Output Alpha, select One.
- You are now ready to either perform the animation live or export a movie file. Refer to Part 1 of this article for instructions. If you export a movie render, you are creating a 2:1 aspect ratio equirectangular animation for viewing in VR headsets. The equirectangular mapping the Projection TOP applies is sketched below.
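Again the Projection TOP does the work, but the equirectangular mapping itself is compact: the longitude and latitude of each view direction become the horizontal and vertical image coordinates, which is why the output is a 2:1 rectangle. A plain-Python sketch, using the same camera convention as the fish-eye example:

```python
import math

def direction_to_equirect_uv(x, y, z):
    """Map a unit view direction to UV on a 2:1 equirectangular image
    (u covers 360 degrees of longitude, v covers 180 degrees of latitude)."""
    lon = math.atan2(x, -z)                    # -pi..pi around the vertical axis
    lat = math.asin(max(-1.0, min(1.0, y)))    # -pi/2..pi/2 up and down
    return 0.5 + lon / (2 * math.pi), 0.5 + lat / math.pi

print(direction_to_equirect_uv(0.0, 0.0, -1.0))   # forward     -> (0.5, 0.5)
print(direction_to_equirect_uv(0.0, 1.0,  0.0))   # straight up -> (0.5, 1.0)
```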
To Output to an Oculus Rift* from TouchDesigner While Using the Intel RealSense Camera
TouchDesigner has created several templates for download that show you how to set up the Oculus Rift in TouchDesigner, one of which, OculusRiftSimple.toe, you can download using the button at the top right of this article. You do need to have your computer connected to an Oculus Rift to see the result in the headset. Without an Oculus Rift you can still create the file, view the images in the LeftEye Render TOP and the RightEye Render TOP, and display them in the background of your scene. I added the Oculus Rift capabilities to the file I used in Demo #3; in this way the Intel RealSense camera animates what I see in the Oculus Rift.
About the Author
Audri Phillips is a visualist/3D animator based in Los Angeles, with a wide range of experience that includes over 25 years working in the visual effects/entertainment industry at studios such as Sony*, Rhythm and Hues*, Digital Domain*, Disney*, and DreamWorks* feature animation. Starting out as a painter, she was quickly drawn to time-based art. Always interested in using new tools, she has been a pioneer of computer animation/art in experimental film work, including immersive performances. Now she has taken her talents into the creation of VR. Samsung* recently curated her work into their new Gear Indie Milk VR channel.
Her latest immersive works/animations include multimedia animations for "Implosion a Dance Festival" (2015) at the Los Angeles Theater Center and three fulldome concerts in the Vortex Immersion dome, one with the well-known composer/musician Steve Roach. She has a fourth upcoming fulldome concert, "Relentless Universe," on November 7th, 2015. She also created animated content for the dome show for the TV series "Constantine*," shown at the 2014 Comic-Con convention. Several of her fulldome pieces, "Migrations" and "Relentless Beauty," have been juried into "Currents," the Santa Fe International New Media Festival, and the Jena FullDome Festival in Germany. She exhibits at the Young Projects gallery in Los Angeles.
She writes online content and a blog for Intel®. Audri is an adjunct professor at Woodbury University, a founding member and leader of the Los Angeles Abstract Film Group, founder of the Hybrid Reality Studio (dedicated to creating VR content), a board member of the Iota Center, and an exhibiting member of the LA Art Lab. In 2011 Audri became a resident artist of Vortex Immersion Media and the c3: CreateLAB. Works of hers can be found on Vimeo, on creativeVJ, and on Vrideo.