Introduction
The Internet of Things (IoT) is enabling our lives in new and interesting ways, but with that comes the challenge of analyzing, and extracting meaning from, a continuous stream of generated data. One IoT trend in the home is the use of multiple security cameras for monitoring, which produces large amounts of image and video data. For example, one house with twelve cameras taking 180,000 images per day can easily generate 5 GB of data, far too much to analyze manually. Some cameras have built-in motion sensors that capture images only when change is detected, and while this helps reduce the volume, light changes and other insignificant movement are still picked up and must be sorted through. OpenCV* presents a promising solution for monitoring the home for only what is wanted; for the purposes of this paper, that is people and faces. OpenCV comes with a number of pre-defined algorithms that search images for faces, people, and objects, and it can also be trained to recognize new ones.
This article presents a proof of concept that explores quickly prototyping an analytics solution at the edge, using Intel® IoT Gateway computing power to create a Smarter Security Camera.
Figure 1. Analyzed image from webcam with OpenCV* detection markers
Set-up
The setup starts with a Logitech* C270 webcam with HD 720p resolution and a 2.4 GHz Intel® Core™ 2 Duo processor. The webcam plugs into the USB port of the Intel® Edison development board, which turns the board into an IP webcam streaming video to a website. Pairing the webcam with the Intel® Edison development board makes it easy to place the camera “sensor” in different locations around a home. The Intel® IoT Gateway captures images from the stream and uses OpenCV to analyze them. If the algorithms detect a face or a person in view, the Gateway uploads the image to Twitter*.
Figure 2. Intel® Edison board and Webcam setup
Figure 3. Intel® IoT Gateway Device
Capturing the image
The webcam must be USB video class (UVC) compliant to ensure that it is compatible with the Intel® Edison USB drivers; in this case a Logitech C270 webcam is used. For a list of UVC-compliant devices, go here: http://www.ideasonboard.org/uvc/#devices. To use the USB slot, the micro switch on the Intel® Edison development board must be toggled up towards the USB slot. Note that this disables the micro-USB port below it, and with it Ethernet, power (the external power supply must now be plugged in instead of using the micro-USB slot as a power source), and Arduino* sketch uploads. Connect the Intel® Edison development board to the Gateway’s Wi-Fi* hotspot so the Gateway can reach the webcam.
To check that the USB webcam is working, type the following over a serial connection:
ls -l /dev/video0
A line similar to this one should appear:
crw-rw---- 1 root video 81, 0 May 6 22:36 /dev/video0
Otherwise, this line will appear, indicating the camera was not found:
ls: cannot access /dev/video0: No such file or directory
In the early stages of the project, the Intel® Edison development board used the FFmpeg library to capture an image and then send it over MQTT to the Gateway. This method had a drawback: each image took a few seconds to save, which is too slow for practical use. To resolve this problem and make images available to the Gateway on demand, the setup was switched to have the Intel® Edison development board continuously stream a feed that the Gateway can capture from at any time. This was accomplished using the mjpg-streamer library. To install it on the Intel® Edison development board, first add the package repositories to base-feeds.conf with the following command:
echo "src/gz all http://repo.opkg.net/edison/repo/all src/gz edison http://repo.opkg.net/edison/repo/edison src/gz core2-32 http://repo.opkg.net/edison/repo/core2-32">> /etc/opkg/base-feeds.conf
Update the repository index:
opkg update
And install:
opkg install mjpg-streamer
To start the stream:
mjpg_streamer -i "input_uvc.so -n -f 30 -r 800x600" -o "output_http.so -p 8080 -w ./www"
The compressed MJPEG format is used here to keep the frame rate high. The uncompressed YUV format, however, preserves more detail for OpenCV. Experiment with the tradeoff to see which fits best.
To view the stream while on the same Wi-Fi network, visit http://localhost:8080/?action=stream. A still image of the feed can also be viewed by going to http://localhost:8080/?action=snapshot. In both URLs, change localhost to the IP address of the Intel® Edison development board, which should be connected to the Gateway’s Wi-Fi. The Intel® IoT Gateway sends an HTTP request to the snapshot address and then saves the image to disk.
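As a minimal sketch of that fetch-and-save step, the following Python 2.7 snippet (run on the Gateway) grabs one frame from the snapshot URL and writes it to the path the analysis script later reads from. The IP address is a hypothetical placeholder for the Edison board’s address on the Gateway’s Wi-Fi.

import urllib2

# Hypothetical address of the Edison board on the Gateway's Wi-Fi network.
SNAPSHOT_URL = 'http://192.168.2.15:8080/?action=snapshot'

# Request a single frame from the mjpg-streamer snapshot endpoint.
response = urllib2.urlopen(SNAPSHOT_URL, timeout=5)

# Save it where the detection script expects to find the incoming image.
with open('/root/incoming.jpg', 'wb') as f:
    f.write(response.read())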
Gateway
The brains of the whole security camera are on the Gateway. OpenCV was installed into a virtual Python* environment to keep it cleanly separated from the Gateway’s own Python version and packages. Basic instructions for installing OpenCV on Linux can be found here: http://docs.opencv.org/2.4/doc/tutorials/introduction/linux_install/linux_install.html. These instructions need to be modified in order to install OpenCV and its dependencies on the Intel® Wind River* Gateway.
GCC, Git, and python2.7-dev are already installed.
Install CMake 2.6 or higher:
wget http://www.cmake.org/files/v3.2/cmake-3.2.2.tar.gz
tar xf cmake-3.2.2.tar.gz
cd cmake-3.2.2
./configure
make
make install
As the Wind River Linux* environment has no apt-get command, installing the needed development packages can be a challenge. A workaround is to first install them on another 64-bit Linux* machine (running Ubuntu* in this case) and then manually copy the files to the Gateway. Package file lists can be found on the Ubuntu site at http://packages.ubuntu.com/. For example, for the libtiff4-dev package (file list: http://packages.ubuntu.com/precise/amd64/libtiff4-dev/filelist), files in /usr/include/<file> should go to the same location on the Gateway, and files in /usr/lib/x86_64-linux-gnu/<file> should go into /usr/lib/<file>.
Install the packages listed below on the helper machine, then copy their files over.
sudo apt-get install libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
sudo apt-get install libjpeg8-dev libpng12-dev libtiff4-dev libjasper-dev libv4l-dev
Install pip; it will help install a number of the other dependencies.
wget https://bootstrap.pypa.io/get-pip.py
python get-pip.py
Install virtualenv; this will create a separate environment for OpenCV.
pip install virtualenv virtualenvwrapper
Once virtualenv has been installed, create an environment called “cv”:
export WORKON_HOME=$HOME/.virtualenvs
mkvirtualenv cv
Note that all the following steps are done while the “cv” environment is active. Creating “cv” activates the environment automatically in the current session; this can be seen at the beginning of the command prompt, e.g., (cv) root@WR-IDP-NAME. For future sessions, it can be activated with the following command:
. ~/.virtualenvs/cv/bin/activate
And similarly be deactivated using this command (do not deactivate it yet):
deactivate
Install numpy:
pip install numpy
Get the OpenCV Source Code:
cd ~
git clone https://github.com/Itseez/opencv.git
cd opencv
git checkout 3.0.0
And make it (the final make and make install steps follow the standard OpenCV Linux install instructions linked above):
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
 -D CMAKE_INSTALL_PREFIX=/usr/local \
 -D INSTALL_C_EXAMPLES=ON \
 -D INSTALL_PYTHON_EXAMPLES=ON \
 -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
 -D BUILD_EXAMPLES=ON \
 -D PYTHON_INCLUDE_DIR=/usr/include/python2.7/ \
 -D PYTHON_INCLUDE_DIR2=/usr/include/python2.7 \
 -D PYTHON_LIBRARY=/usr/lib64/libpython2.7.so \
 -D PYTHON_PACKAGES_PATH=/usr/lib64/python2.7/site-packages/ \
 -D BUILD_NEW_PYTHON_SUPPORT=ON \
 -D PYTHON2_LIBRARY=/usr/lib64/libpython2.7.so \
 -D BUILD_opencv_python3=OFF \
 -D BUILD_opencv_python2=ON ..
make
make install
If the cv2.so file is not created, make OpenCV on the host Linux machine as well and copy the file over to /usr/lib64/python2.7/site-packages.
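A quick way to confirm the bindings are in place is to import them from inside the “cv” environment. This small check is not part of the original instructions, but it uses only standard OpenCV behavior:

# Run with the virtualenv's interpreter: /root/.virtualenvs/cv/bin/python2.7
import cv2
print(cv2.__version__)  # expect 3.0.0 for this checkout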
Figure 4. Webcam capture of people outside with OpenCV detection markers
Node-RED* was used to quickly create a program and connect the large number of capabilities and services this project requires. Node-RED is a rapid prototyping tool that lets the user visually wire together hardware devices, APIs, and various services. It also comes pre-installed on the Gateway; make sure to update to the latest version.
Figure 5. Node-RED Flow
Once a message is injected at the “Start” node (by clicking on it), the flow loops continuously, repeating after each processed image or error. Nodes of note in this setup are the http request node, the python script node, and the function node that composes the tweet. The “Repeat” node visually simplifies the repeat path into one node instead of pointing all three flows back to the beginning.
The “http request” node sends a GET message to the IP webcam’s snapshot URL. If it is successful, the flow saves the image. Otherwise, it tweets an error message about the webcam.
Figure 6. Node-RED http GET request node details
To run the python script, create an “exec” node from the advanced section with the command “/root/.virtualenvs/cv/bin/python2.7 /root/PeopleDetection.py”. This allows the script to run in the virtual python environment where OpenCV is installed.
Figure 7. Node-RED exec node details
The python script itself is fairly simple. It checks the image for people using the HOG algorithm and then looks for faces using the haarcascade_frontalface_alt classifier that comes installed with OpenCV. It also saves an image with boxes drawn around any people and faces found. The code provided below is not optimized for this proof of concept, beyond optionally scaling the image down before analyzing it and tweaking some of the algorithm inputs to suit our purposes. The Gateway takes approximately 0.33 seconds to process an image; in comparison, the Intel® Edison module takes around 10 seconds to process the same image. Depending on where the camera is located, and how far or close people are expected to be to it, the OpenCV algorithm parameters may need to change to better fit the situation.
import numpy as np
import cv2
import sys
import datetime

# Draw green boxes around detected people and blue boxes around detected faces.
def draw_detections(img, rects, rects2, thickness = 2):
    for x, y, w, h in rects:
        # HOG returns slightly oversized boxes, so pad them in before drawing.
        pad_w, pad_h = int(0.15*w), int(0.05*h)
        cv2.rectangle(img, (x+pad_w, y+pad_h), (x+w-pad_w, y+h-pad_h), (0, 255, 0), thickness)
        print("Person Detected")
    for (x, y, w, h) in rects2:
        cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), thickness)
        print("Face Detected")

total = datetime.datetime.now()
img = cv2.imread('/root/incoming.jpg')
# Optional resize of image to make processing faster
#img = cv2.resize(img, (0,0), fx=0.5, fy=0.5)

# Detect people with OpenCV's default HOG person detector
# (the second return value holds the detection weights).
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
peopleFound, a = hog.detectMultiScale(img, winStride=(8,8), padding=(16,16), scale=1.3)

# Detect faces with the frontal-face Haar cascade shipped with OpenCV.
faceCascade = cv2.CascadeClassifier('/root/haarcascade_frontalface_alt.xml')
facesFound = faceCascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5,
    minSize=(30,30), flags=cv2.CASCADE_SCALE_IMAGE)

# Annotate and save the output image, then report the processing time.
draw_detections(img, peopleFound, facesFound)
cv2.imwrite('/root/out_faceandpeople.jpg', img)
print("[INFO] total took: {}s".format((datetime.datetime.now() - total).total_seconds()))
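As an illustration of the parameter tuning mentioned above, here is a hypothetical sketch (not the settings used in this project) that makes both detectors more sensitive to small, distant figures, trading processing time and false-positive rate for recall. It reuses the img, hog, and faceCascade objects from the script above.

# Hypothetical tuning for a camera watching people at a distance:
# a finer window stride and smaller pyramid scale step find smaller
# figures, at the cost of slower processing and more false positives.
peopleFound, weights = hog.detectMultiScale(img, winStride=(4,4),
    padding=(16,16), scale=1.05)
facesFound = faceCascade.detectMultiScale(img, scaleFactor=1.05,
    minNeighbors=3, minSize=(20,20))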
To send an image to Twitter, the tweet is constructed in a function node, using msg.media as the image variable and msg.payload as the tweet string.
Figure 8. Node-RED function message node details
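Node-RED’s twitter node handles the upload in this flow, but for readers scripting outside Node-RED, a rough Python equivalent using the third-party twython library might look like the sketch below. The credentials and status text are placeholders, and twython is an assumption, not part of the project.

from twython import Twython

# Placeholder credentials from a Twitter developer account.
APP_KEY = 'your-app-key'
APP_SECRET = 'your-app-secret'
OAUTH_TOKEN = 'your-oauth-token'
OAUTH_TOKEN_SECRET = 'your-oauth-token-secret'

twitter = Twython(APP_KEY, APP_SECRET, OAUTH_TOKEN, OAUTH_TOKEN_SECRET)

# Upload the annotated image, then attach it to a status update.
with open('/root/out_faceandpeople.jpg', 'rb') as photo:
    media = twitter.upload_media(media=photo)
twitter.update_status(status='Person detected!', media_ids=[media['media_id']])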
The system can take pictures on demand as well. Node-RED monitors the same Twitter feed for posts that contain “spy” or “Spy”; posting a tweet with the word “spy” in it triggers the Gateway to take a current picture and post it to Twitter.
Figure 9. Node-RED flow for taking pictures on demand
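In Python terms, the keyword check this flow performs is roughly equivalent to the following hypothetical helper (the function name is illustrative, not part of the Node-RED flow):

def should_take_picture(tweet_text):
    # Trigger on "spy" in any capitalization, matching the flow's
    # "spy" / "Spy" keyword watch.
    return 'spy' in tweet_text.lower()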
Summary
This concludes the proof of concept to create a Smarter Security Camera using Intel® IoT Gateway computing. The Wind River Linux* Gateway comes with a number of tools pre-installed and ready for quick prototyping. From here the project can be further optimized, made more robust with security features, and even expanded, for example to switch on smart lighting for a room when a person is detected.
About the author
Whitney Foster is a software engineer at Intel in the Software Solutions Group working on scale enabling projects for Internet of Things.
Notices
No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.
Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.
This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.
The products and services described may contain defects or errors known as errata which may cause deviations from published specifications. Current characterized errata are available on request.
Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or by visiting www.intel.com/design/literature.htm.
Intel, Wind River, the Intel logo, and Intel RealSense are trademarks of Intel Corporation in the U.S. and/or other countries.
*Other names and brands may be claimed as the property of others.
© 2016 Intel Corporation