Intro
The Internet of Things is enabling our lives in new and interesting ways, but with it comes the challenge of analyzing and bringing meaning to all the continuously generated data. One IoT trend in the home is the rise of security cameras: not just one or two, but multiple cameras around the house and in each room to monitor their status. This creates massive amounts of data when images or movie files are saved. Taking one house as an example, its 12 cameras generate around 5 GB per day, over 180,000 images in total. That is far too much data to look through manually. Some cameras have built-in motion sensors that only capture images when a change is detected, and while this helps reduce the noise, light changes, pets, fans, and things moving in the wind are still picked up and have to be sorted through. For monitoring only what is wanted, which for the purposes of this paper is people and faces, OpenCV presents a promising solution. OpenCV already has a number of pre-defined algorithms to search images for faces, people, and objects, and it can also be trained to recognize new ones.
This article is a proof of concept that explores quickly prototyping an analytics solution at the edge, using the computing power of the Intel IoT Gateway to create a Smarter Security Camera.
Figure 1: Analyzed image from webcam with OpenCV detection markers
Set-up
It all starts with a Logitech C270 Webcam (HD 720p resolution, rated for a 2.4 GHz Intel Core 2 Duo system). This webcam plugs into the USB port of the Intel Edison, which turns it into an IP webcam streaming the video to a website. Using the webcam with the Intel Edison makes it easy to duplicate the camera “sensor” and place copies in different locations around a home. The Intel IoT Gateway then captures images from the stream and uses OpenCV to analyze them. If the algorithms detect a face or a person in view, the gateway uploads the image to Twitter*.
Figure 2: Intel Edison and Webcam setup
Figure 3: Intel® IoT Gateway
Capturing the image
The webcam must be UVC compliant to ensure that it is compatible with the Intel Edison’s USB drivers; in this case the Logitech C270 Webcam is used. For a list of UVC compliant devices see: http://www.ideasonboard.org/uvc/#devices. To use the USB slot, the Intel Edison’s microswitch must be toggled up towards the USB slot. Note that this disables the micro-USB port next to it, and with it Ethernet, power over micro-USB (the external power supply must be plugged in instead), and Arduino sketch uploads. Also connect the Intel Edison to the gateway’s Wi-Fi hotspot so the gateway can reach the webcam stream.
To ensure the USB webcam is working, type the following into a serial connection.
ls -l /dev/video0
A line similar to this one should appear:
crw-rw---- 1 root video 81, 0 May 6 22:36 /dev/video0
Otherwise, this line will appear, indicating the camera was not found:
ls: cannot access /dev/video0: No such file or directory
In the early stages of the project, the Intel Edison used the FFmpeg library to capture an image and then send it over MQTT to the gateway. This method had some drawbacks: each image took a few seconds just to be saved, which was far too slow for practical application. To combat this and make images available to the gateway on demand, the setup was switched so that the Intel Edison continuously streams a feed the gateway can capture from at any time. This was done using the mjpg-streamer library. To install it on the Intel Edison:
Add the following lines to base-feeds.conf:
echo "src/gz all http://repo.opkg.net/edison/repo/all src/gz edison http://repo.opkg.net/edison/repo/edison src/gz core2-32 http://repo.opkg.net/edison/repo/core2-32">> /etc/opkg/base-feeds.conf
Update the repository index:
opkg update
And install:
opkg install mjpg-streamer
To start the stream:
mjpg_streamer -i "input_uvc.so -n -f 30 -r 800x600" -o "output_http.so -p 8080 -w ./www"
The MJPEG compressed format was chosen to keep the frame rate high. However, the YUV format is uncompressed, which leaves more detail for OpenCV, so experiment with the tradeoffs.
To view the stream while on the same Wi-Fi network, visit http://localhost:8080/?action=stream; a still image of the feed can also be viewed at http://localhost:8080/?action=snapshot, where localhost is replaced by the IP address of the Intel Edison on the gateway’s Wi-Fi network. On the gateway side, an HTTP request is sent to the snapshot URL and the returned image is saved to disk.
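In the actual project this fetch is done by a Node-RED http request node (shown later). Purely as an illustration, a minimal sketch of the same step in Python 2.7 might look like the following; the address 192.168.1.10 is a placeholder for the Edison’s IP, and /root/incoming.jpg matches the path the detection script below reads from:

import urllib2

# Placeholder IP for the Intel Edison on the gateway's Wi-Fi network
SNAPSHOT_URL = 'http://192.168.1.10:8080/?action=snapshot'

# Grab a single frame from the mjpg-streamer snapshot endpoint
response = urllib2.urlopen(SNAPSHOT_URL, timeout=5)

# Save it where the detection script expects to find it
with open('/root/incoming.jpg', 'wb') as f:
    f.write(response.read())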
Gateway
The brains of the whole security camera are on the gateway. OpenCV was installed into a virtual Python environment to keep it cleanly segmented from the system Python and its packages. Basic install instructions for OpenCV on Linux can be found here: http://docs.opencv.org/2.4/doc/tutorials/introduction/linux_install/linux_install.html. These instructions need to be modified in order to install OpenCV and its dependencies on the Intel Wind River Gateway.
GCC, Git, and python2.7-dev are already installed.
Install CMake 2.6 or higher:
wget http://www.cmake.org/files/v3.2/cmake-3.2.2.tar.gz
tar xf cmake-3.2.2.tar.gz
cd cmake-3.2.2
./configure
make
make install
As the Wind River Linux environment has no apt-get command, it can quickly become a challenge to install the needed development packages. An easy way around this is to first install them on another 64-bit Linux machine (running Ubuntu in this case) and then manually copy the files to the gateway. Package contents can be browsed on the Ubuntu site: http://packages.ubuntu.com/. For example, for the libtiff4-dev package, files in /usr/include/<file> should go to the same location on the gateway, and files in /usr/lib/x86_64-linux-gnu/<file> should go into /usr/lib/<file>. The full list of files can be found here: http://packages.ubuntu.com/precise/amd64/libtiff4-dev/filelist. Install and copy the files over for the packages listed below.
sudo apt-get install libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
sudo apt-get install libjpeg8-dev libpng12-dev libtiff4-dev libjasper-dev libv4l-dev
Install pip, which will help install a number of other dependencies:
wget https://bootstrap.pypa.io/get-pip.py
python get-pip.py
Install virtualenv, which will create a separate environment for OpenCV:
pip install virtualenv virtualenvwrapper
Once virtualenv and virtualenvwrapper have been installed, create an environment called “cv”:
export WORKON_HOME=$HOME/.virtualenvs
source $(which virtualenvwrapper.sh)
mkvirtualenv cv
Note that all the following steps are done while the “cv” environment is activated. Once “cv” has been created, it is activated automatically in the current session; this can be seen at the beginning of the command prompt, e.g. (cv) root@WR-IDP-NAME. For future sessions it can be activated with the following command:
. ~/.virtualenvs/cv/bin/activate
And similarly deactivated (do not deactivate it yet):
deactivate
Install numpy:
pip install numpy
Get the OpenCV Source Code:
cd ~
git clone https://github.com/Itseez/opencv.git
cd opencv
git checkout 3.0.0
And make it:
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_C_EXAMPLES=ON \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
-D BUILD_EXAMPLES=ON \
-D PYTHON_INCLUDE_DIR=/usr/include/python2.7/ \
-D PYTHON_INCLUDE_DIR2=/usr/include/python2.7 \
-D PYTHON_LIBRARY=/usr/lib64/libpython2.7.so \
-D PYTHON_PACKAGES_PATH=/usr/lib64/python2.7/site-packages/ \
-D BUILD_NEW_PYTHON_SUPPORT=ON \
-D PYTHON2_LIBRARY=/usr/lib64/libpython2.7.so \
-D BUILD_opencv_python3=OFF \
-D BUILD_opencv_python2=ON ..
make
make install
It may be the case that the cv2.so file is not created. If so, make OpenCV on the host Linux machine as well and copy the resulting cv2.so into /usr/lib64/python2.7/site-packages on the gateway.
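As a quick sanity check (a minimal sketch, run with the virtualenv’s interpreter; the file name check_cv.py is just a hypothetical example), confirm the bindings import and report the expected version:

# Run with the virtualenv's interpreter, e.g.:
#   /root/.virtualenvs/cv/bin/python2.7 check_cv.py
import cv2

# Should print 3.0.0 if the build and copy succeeded
print(cv2.__version__)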
Figure 4: Webcam capture of people outside with OpenCV detection markers
To quickly create a program and connect a large number of capabilities and services together, as with this project, Node-RED* was used. Node-RED is a rapid prototyping tool that lets the user visually wire together hardware devices, APIs, and various services. It comes pre-installed on the gateway; just make sure to update to the latest version.
Figure 5: Node-RED Flow
Once a message is injected at the “Start” node, the script loops continuously after processing each image or encountering an error. A few nodes of note are the http request node, the python script node, and the function node that builds the tweet. The “Repeat” node visually simplifies the repeat flow into one node instead of pointing all three flows back to the beginning.
The “http request” node sends a GET message to the Intel Edison IP webcam’s snapshot URL. If the request is successful, the flow saves the image; otherwise it tweets an error message about the webcam.
Figure 6: Node-RED http GET request node details
To run the Python script, create an “exec” node (found in the advanced section in Node-RED) with the command “/root/.virtualenvs/cv/bin/python2.7 /root/PeopleDetection.py”. This runs the script in the virtual Python environment where OpenCV is installed.
Figure 7: Node-RED exec node details
The Python script itself is fairly simple: it checks the image for people using the HOG algorithm and then looks for faces using the haarcascade_frontalface_alt algorithm that comes installed with OpenCV. It also saves out an image with boxes drawn around any people and faces found. The code below is by no means optimized; for this proof of concept only some of the algorithm inputs were tweaked to suit our purposes. There is the option of scaling the image down before analyzing it to reduce processing time; see the code snippet below for how to do that. It takes the gateway roughly 0.33 seconds to process an image. For comparison, an Intel Edison takes around 10 seconds to process the same image. Depending on where the camera is located and how close people are expected to be to it, the OpenCV algorithm parameters may need to change to better fit the situation.
import numpy as np
import cv2
import sys
import datetime

# Draw green boxes around detected people and blue boxes around detected faces
def draw_detections(img, rects, rects2, thickness = 2):
    for x, y, w, h in rects:
        # HOG rectangles are slightly larger than the person; pad them in
        pad_w, pad_h = int(0.15*w), int(0.05*h)
        cv2.rectangle(img, (x+pad_w, y+pad_h), (x+w-pad_w, y+h-pad_h), (0, 255, 0), thickness)
        print("Person Detected")
    for (x, y, w, h) in rects2:
        cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), thickness)
        print("Face Detected")

total = datetime.datetime.now()
img = cv2.imread('/root/incoming.jpg')

# Optional resize of image to make processing faster
#img = cv2.resize(img, (0,0), fx=0.5, fy=0.5)

# Detect people using the HOG descriptor with its default people detector
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
peopleFound, a = hog.detectMultiScale(img, winStride=(8,8), padding=(16,16), scale=1.3)

# Detect faces using the Haar cascade classifier
faceCascade = cv2.CascadeClassifier('/root/haarcascade_frontalface_alt.xml')
facesFound = faceCascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5, minSize=(30,30), flags=cv2.CASCADE_SCALE_IMAGE)

draw_detections(img, peopleFound, facesFound)
cv2.imwrite('/root/out_faceandpeople.jpg', img)
print("[INFO] total took: {}s".format((datetime.datetime.now() - total).total_seconds()))
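As an illustrative example of such tuning (the values here are hypothetical starting points, not tuned results), a camera that sees people farther away may benefit from a finer scale step and a denser window stride, at the cost of processing time:

import cv2

img = cv2.imread('/root/incoming.jpg')
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# A finer scale step (1.05 vs 1.3) searches more window sizes, helping to
# catch smaller, more distant figures; a smaller winStride makes the sliding
# window search denser. Both make detection slower. Illustrative values only.
peopleFound, weights = hog.detectMultiScale(img, winStride=(4,4), padding=(32,32), scale=1.05)
print("People found: %d" % len(peopleFound))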
To send an image to Twitter, the tweet is constructed in a function node, using msg.media as the image variable and msg.payload as the tweet string.
Figure 8: Node-RED function message node details
And of course, the system needs to be able to take pictures on demand as well. Node-RED monitors the same Twitter feed for posts that contain “spy” or “Spy” and posts a current picture to Twitter in response. So tweeting the word “spy” triggers the gateway to take a picture.
Figure 9: Node-RED flow for taking pictures on demand
Summary
This concludes the proof of concept for a smarter security camera computing at the edge on the gateway. The Wind River Linux Gateway comes with a number of tools pre-installed and ready for quick prototyping. From here the project can be further optimized, made more robust with security features, and even expanded, for example to turn on smart lighting in a room when a person is detected.
About the author
Whitney Foster is a software engineer at Intel in the Software Solutions Group working on scale enabling projects for Internet of Things.
Notices
No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.
Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.
This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.
The products and services described may contain defects or errors known as errata which may cause deviations from published specifications. Current characterized errata are available on request.
Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or by visiting www.intel.com/design/literature.htm.
Intel, the Intel logo, and Intel RealSense are trademarks of Intel Corporation in the U.S. and/or other countries.
*Other names and brands may be claimed as the property of others.
© 2016 Intel Corporation.