
Using LibRealSense and PCL to Create Point Cloud Data


Download Source Code 

Table of Contents

Introduction
Assumptions
Software Requirements
Supported Cameras
Setting Up the Qt Creator Project
   CMake contents
The main.cpp Source Code File Contents
   Source code explained
      Overview
      main(...)
      createCloudPointViewer(...)
      printRSContextInfo(...)
      configureRSStreams(...)
      generatePointCloud(...)
Wrap Up

Introduction

In this article I will show you how to use LibRealSense and PCL to generate point cloud data and display that data in the PCL Viewer. This article assumes you have already downloaded and installed both LibRealSense and PCL, and have them set up properly in Ubuntu*. I will be working on Ubuntu 16.04 using the Qt Creator* IDE. Because I will be using CMake*, you can really use whichever IDE you prefer.

Assumptions

In this article I assume that the reader:

  1. Is somewhat familiar with using Qt Creator. The reader should know how to open Qt Creator and create a brand new empty C++ project.
  2. Is familiar with C++.
  3. Knows how to get around Linux*.
  4. Knows what GitHub is and knows how to at least download a project from a GitHub repository.

In the end, you will have a nice starting point from which to create your own LibRealSense/PCL applications using this code base.

Software Requirements

  • Ubuntu* 16.04
  • LibRealSense
  • PCL 1.7
  • Qt Creator* with CMake*

Supported Cameras

  • Intel® RealSense™ camera R200

Setting Up the Qt Creator* Project

As mentioned, I’m going to assume that the reader is already familiar with opening Qt Creator and creating a brand new empty C++ project. Really, the only thing you need to do, if you wish to follow along with exactly how I created my project, is to ensure you choose the CMake toolset.

CMake contents

I chose CMake because I’m creating a generic non-Qt project. An explanation of CMake is beyond the scope of this document. Just note that it’s including the libraries and headers for PCL and LibRealSense. After you have created a Plain C++ Application in Qt, you can open up the CMakeLists.txt file that gets generated for you and replace its contents with this:

project(LRS_PCL)
cmake_minimum_required(VERSION 2.8)
aux_source_directory(. SRC_LIST)
add_executable(${PROJECT_NAME} ${SRC_LIST})
set(CMAKE_PREFIX_PATH $ENV{HOME}/Qt/5.5/gcc_64)

find_package( PCL 1.7 REQUIRED )
find_package( Qt5 REQUIRED COMPONENTS Widgets Core )

include( CheckCXXCompilerFlag )
set( CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11" )

include_directories( ${PCL_INCLUDE_DIRS} /usr/local/include )
link_directories( ${PCL_LIBRARY_DIRS} /usr/local/lib /usr/lib/x86_64-linux-gnu )
add_definitions( ${PCL_DEFINITIONS} )

target_link_libraries( LRS_PCL ${PCL_LIBRARIES} /usr/local/lib/librealsense.so)

The main.cpp Source Code File Contents

The full source code for the example application is available via the Download Source Code link above; each function is explained below.

Source code explained

Overview

The structure is pretty simple: it’s a single source code file containing everything we need for the sample. We have our header includes at the top. Because this is a sample application, we are not going to worry too much about “best practices” in defensive software engineering. Yes, we could have better error checking; however, the goal here is to make this sample application as easy to read and comprehend as possible.

main(…)

Obviously, as the name implies, this is the main function. Even though we accept command-line parameters, we do not use them.

We start by creating a few variables that get used in main(). The frame and fps variables are there to help calculate the frames per second and how long the app has been running. Next, we create a pointer to a PCL PointXYZRGB object along with our own safe pointer to a PCLVisualizer.

At this point, we can generate the actual PCLVisualizer object. We do this in the function createCloudPointViewer by passing in the PointXYZRGB pointer, rs_cloud_ptr. This will return to us a newly created PCLVisualizer object that will display the point cloud generated from the Intel RealSense camera R200 data.

Now that we have the required PCL pieces created, we will create the RealSense functionality, starting with the RS context. We don’t NEED to print this context out, but I’m showing you how you can get the device count from the context and, if there is no device, how to take appropriate action.

Next, we need to set up the Intel RealSense camera streams, which is done in the configureRSStreams function. Notice that we are passing in an rs::device; this is the Intel RealSense camera R200 device. Once PCL and the RealSense functionality have been set up and the camera is running, we can start processing the data.

As mentioned above, the frame variables are for displaying frame rates and how many frames have been generated during the application’s run cycle. Those are used next as we call printTimeLoop to display the fps feedback to the user.

The next mentionable process is calling generatePointCloud, passing in the Intel RealSense camera object and the pointer to the PointXYZRGB structure that holds the point cloud data. Once the point cloud has been generated, we can display the cloud. This is done by calling the visualizer’s updatePointCloud method, followed by a call to spinOnce. I’m telling spinOnce to run for 1 millisecond.

The loop ends and it starts over. Upon exit, we tell the PCLVisualizer to close itself and return out of main.

At this point, the app will then quit.
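
For reference, here is a minimal sketch of how main() could be wired together from the pieces described above, assuming the legacy LibRealSense 1.x API (rs.hpp) and PCL 1.7. The function names follow the article; the loop structure, the placement of wait_for_frames, and the "cloud" id string are my assumptions, and the frame/fps bookkeeping and the printTimeLoop call are omitted.

#include <cstdlib>
#include <iostream>
#include <boost/shared_ptr.hpp>
#include <librealsense/rs.hpp>
#include <pcl/point_types.h>
#include <pcl/visualization/pcl_visualizer.h>

// The helper functions explained in the sections below.
boost::shared_ptr<pcl::visualization::PCLVisualizer>
createCloudPointViewer(pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud);
bool printRSContextInfo(rs::context *ctx);
bool configureRSStreams(rs::device *dev);
void generatePointCloud(rs::device *dev,
                        pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud);

int main(int argc, char **argv) try
{
    // Point cloud that receives the camera data, and the viewer that shows it.
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr rs_cloud_ptr(
        new pcl::PointCloud<pcl::PointXYZRGB>);
    boost::shared_ptr<pcl::visualization::PCLVisualizer> viewer =
        createCloudPointViewer(rs_cloud_ptr);

    // RealSense context: verify a camera is attached, then configure it.
    rs::context ctx;
    if (!printRSContextInfo(&ctx))
        return EXIT_FAILURE;

    rs::device *dev = ctx.get_device(0);
    if (!configureRSStreams(dev))
        return EXIT_FAILURE;

    while (!viewer->wasStopped())
    {
        dev->wait_for_frames();                 // block until new frames arrive
        generatePointCloud(dev, rs_cloud_ptr);  // depth + color -> PointXYZRGB
        viewer->updatePointCloud<pcl::PointXYZRGB>(rs_cloud_ptr, "cloud");
        viewer->spinOnce(1);                    // render for 1 millisecond
    }

    viewer->close();
    return EXIT_SUCCESS;
}
catch (const rs::error &e)
{
    std::cerr << "RealSense error: " << e.what() << std::endl;
    return EXIT_FAILURE;
}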

createCloudPointViewer(…)

This function sets up the PCLVisualizer object. We pass it a pointer to the PointXYZRGB structure, which will hold the cloud data.

The first thing we do is create a new PCLVisualizer object named viewer. From there we set its background color, add the PointXYZRGB cloud structure, and set its point-size rendering property to 1. For fun, try changing this value to see how it affects the rendering of the cloud. For example, change 1 to 5. (I changed it back to 1.)

Next, we need to specify a coordinate system via addCoordinateSystem. This causes the XYZ axes to be drawn in the visualizer. If you don’t want to see them, you can comment this function call out. I wish I could tell you what initCameraParameters is doing, but… to be honest, I can’t. Even the PCL documentation doesn’t have a solid explanation. If you can give me a detailed explanation, I’d appreciate it. The best I’ve found is from a point cloud tutorial on the visualizer stating…

This final call sets up some handy camera parameters to make things look nice.

So, there you have it, right from PCL. ;) I won’t give them grief. I know it’s difficult to fully understand the ins and outs of every line of code. I know the pain.
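
A minimal sketch of this function, assuming PCL’s standard PCLVisualizer API (the window title and the "cloud" id string are my assumptions):

boost::shared_ptr<pcl::visualization::PCLVisualizer>
createCloudPointViewer(pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud)
{
    boost::shared_ptr<pcl::visualization::PCLVisualizer> viewer(
        new pcl::visualization::PCLVisualizer("3D Viewer"));

    viewer->setBackgroundColor(0, 0, 0);                     // black background
    viewer->addPointCloud<pcl::PointXYZRGB>(cloud, "cloud");
    viewer->setPointCloudRenderingProperties(                // point size of 1;
        pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 1,    // try 5 for fun
        "cloud");
    viewer->addCoordinateSystem(1.0);    // draws the XYZ axes
    viewer->initCameraParameters();      // the "handy camera parameters" above
    return viewer;
}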

printRSContextInfo(…)

This is a very straightforward function. We simply display the number of Intel RealSense devices and ensure that there is more than zero. If not, we know that LibRealSense was not able to detect the Intel RealSense camera R200, and we throw an error; otherwise, the function returns true to indicate success.
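
A sketch of what this looks like in code, assuming the includes from the main() sketch above plus <stdexcept> (the message strings are my assumptions):

bool printRSContextInfo(rs::context *ctx)
{
    std::cout << "There are " << ctx->get_device_count()
              << " connected RealSense devices." << std::endl;

    // No devices found: throw rather than continue with a null device.
    if (ctx->get_device_count() == 0)
        throw std::runtime_error("No device detected. Is it plugged in?");

    return true;
}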

configureRSStreams(…)

This is where we set up the Intel RealSense camera R200 for streaming. I show you how to get the camera’s name by displaying it in a std::cout statement. This does not get displayed in the visualizer, just in the application output window of an IDE.

Next, two RealSense streams get enabled: a color stream and a depth stream, specifying that we want the best quality the camera can produce.

Once the streams are set up, we get the camera running by calling the camera’s start() function.
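
A sketch of the stream setup, using the legacy LibRealSense 1.x preset API (the exact wording of the std::cout message is my assumption):

bool configureRSStreams(rs::device *dev)
{
    std::cout << "Configuring RS streams for " << dev->get_name() << std::endl;

    // Enable depth and color at the best quality the camera can produce.
    dev->enable_stream(rs::stream::depth, rs::preset::best_quality);
    dev->enable_stream(rs::stream::color, rs::preset::best_quality);

    dev->start();   // begin streaming
    return true;
}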

generatePointCloud(…)

Please note that this portion of the source code is derived from the LibRealSense sample cpp-tutorial-pointcloud.cpp; it demonstrates getting the above-mentioned color and depth data and creating unstructured OpenGL vertices as a point cloud. Knowledge of 3D math and geometry is assumed for the following explanation.

The function generatePointCloud() is at the heart of the sample as it demonstrates getting depth and color data, correlating them, and getting them into a PointXYZRGB point cloud structure in PCL.

The first thing that happens is that the pointers to the depth and color images are set, followed by getting the various metadata about the images from the intrinsics and extrinsics camera parameters. The point cloud object is then cleared and resized. This sets us up so we can loop through all the columns and rows of the depth and color images.

This algorithm iterates with the outer loop stepping through rows (dy) and the inner loop through columns (dx).

Inside the inner loop, an index i is computed, linearizing the address space of depth_image[]. An individual depth_value is then extracted by looking up depth_image[i].

A real-valued (float) representation of the depth_pixel is constructed by treating both dx and dy as real values. This is because we idealize the 2D coordinate system of the image space as a continuous 2D function for purposes of computation. A depth_in_meters is computed from the above-mentioned depth_value and a previously extracted scale value.

A depth_point is computed by applying the deprojection function. Deprojection takes as input a 2D pixel location on a stream’s image, along with a depth specified in meters, and maps it to a 3D point within the stream’s associated 3D coordinate space; in other words, it converts an image-space point into a corresponding point in 3D world space. Once the 3D depth point is calculated, a transform is applied to map it into the color stream’s 3D coordinate space. Finally, this 3D color point is projected back into 2D image coordinates for eventual rejection tests against the color image’s width and height bounds. When done, we set up logic to remove failure points: those outside the intrinsic width and height bounds as well as those that fail the NOISY flag. The sketch below isolates this deproject, transform, project chain.
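
Here is a small hypothetical helper showing that chain with the legacy LibRealSense 1.x intrinsics/extrinsics API; the helper itself and its name are illustrative, not part of the sample:

// Maps one depth pixel (dx, dy, depth in meters) to its color-image pixel.
rs::float2 depthPixelToColorPixel(const rs::intrinsics &depth_intrin,
                                  const rs::extrinsics &depth_to_color,
                                  const rs::intrinsics &color_intrin,
                                  float dx, float dy, float depth_in_meters)
{
    rs::float2 depth_pixel = {dx, dy};

    // Deproject: 2D depth pixel + depth in meters -> 3D point in depth space.
    rs::float3 depth_point = depth_intrin.deproject(depth_pixel, depth_in_meters);

    // Transform: 3D depth-space point -> 3D point in the color stream's space.
    rs::float3 color_point = depth_to_color.transform(depth_point);

    // Project: 3D color-space point -> 2D pixel in the color image, which is
    // then tested against the color image's width and height bounds.
    return color_intrin.project(color_point);
}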

Input of data into the PCL point cloud structure begins with a setup of the pointers to the XYZ and RGB members of the point cloud element rs_cloud_ptr->points[i]. An adjustment to the point cloud coordinates is introduced to flip the coordinate system for the viewer into a Y-up, right-handed coordinate system.

Additional variables are introduced to enable arbitrary adjustments of the color data; for example, if tone or color needs adjusting due to camera color offsets.

At this stage of the function, each point is assessed for validity. If it is not valid, the point is skipped and the iteration continues; if it is valid, the point is added to the point cloud as a PointXYZRGB point.

This process continues for each of the depth and color points in the image, and both the inner and outer loops of the function are repeated per frame.
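
Putting the pieces together, here is a condensed sketch of generatePointCloud() under the same assumptions as the earlier sketches (plus <cmath> and <cstdint>); it assumes an rgb8 color format, simplifies the sample’s NOISY handling down to a zero-depth check, and omits the color-adjustment variables:

void generatePointCloud(rs::device *dev,
                        pcl::PointCloud<pcl::PointXYZRGB>::Ptr rs_cloud_ptr)
{
    // Pointers to the current depth and color frames.
    const uint16_t *depth_image =
        (const uint16_t *)dev->get_frame_data(rs::stream::depth);
    const uint8_t *color_image =
        (const uint8_t *)dev->get_frame_data(rs::stream::color);

    // Image metadata: intrinsics, extrinsics, and the depth scale.
    rs::intrinsics depth_intrin = dev->get_stream_intrinsics(rs::stream::depth);
    rs::intrinsics color_intrin = dev->get_stream_intrinsics(rs::stream::color);
    rs::extrinsics depth_to_color =
        dev->get_extrinsics(rs::stream::depth, rs::stream::color);
    float scale = dev->get_depth_scale();

    // Clear and resize the cloud so points[i] below is always valid.
    rs_cloud_ptr->clear();
    rs_cloud_ptr->resize(depth_intrin.width * depth_intrin.height);

    for (int dy = 0; dy < depth_intrin.height; ++dy)      // outer loop: dy
    {
        for (int dx = 0; dx < depth_intrin.width; ++dx)   // inner loop: dx
        {
            int i = dy * depth_intrin.width + dx;         // linearized index
            uint16_t depth_value = depth_image[i];
            if (depth_value == 0) continue;               // skip invalid depth

            rs::float2 depth_pixel = {(float)dx, (float)dy};
            float depth_in_meters = depth_value * scale;

            // Deproject, transform, project (see the helper sketch above).
            rs::float3 depth_point =
                depth_intrin.deproject(depth_pixel, depth_in_meters);
            rs::float3 color_point = depth_to_color.transform(depth_point);
            rs::float2 color_pixel = color_intrin.project(color_point);

            // Reject points whose color pixel falls outside the color image.
            int cx = (int)std::round(color_pixel.x);
            int cy = (int)std::round(color_pixel.y);
            if (cx < 0 || cy < 0 ||
                cx >= color_intrin.width || cy >= color_intrin.height)
                continue;

            // Flip into a Y-up, right-handed coordinate system for the viewer.
            rs_cloud_ptr->points[i].x =  depth_point.x;
            rs_cloud_ptr->points[i].y = -depth_point.y;
            rs_cloud_ptr->points[i].z = -depth_point.z;

            // Copy the RGB values (rgb8: three bytes per pixel).
            const uint8_t *rgb = color_image + (cy * color_intrin.width + cx) * 3;
            rs_cloud_ptr->points[i].r = rgb[0];
            rs_cloud_ptr->points[i].g = rgb[1];
            rs_cloud_ptr->points[i].b = rgb[2];
        }
    }
}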

Wrap Up

In this article, I’ve attempted to show how you can get data from an Intel RealSense camera using the LibRealSense open source library, send that data to PCL to generate point cloud data, and display it in the PCL Viewer.

