Intel® Student Ambassador Forum - NYC
The first Intel® Student Ambassador Forum for the Artificial Intelligence (AI) program was held as a companion event to the O’Reilly AI Summit in NYC earlier this summer. This sold-out event drew attendees from five New York universities, and three Student Ambassadors presented their work to the audience during live demos.
You can see the excitement and energy of the event come through in this overview video. Bob Duffy explains how this event, a first of its kind, is truly led by students. Students are doing the research, presenting their work to an audience of fellow students, and collaborating with other students to learn. Scott Apeland talks about how these students are shaping the future of AI through the work they are doing with Machine Learning, Deep Learning, and Intel optimized frameworks and tools.
The Intel® Student Ambassador Program, and highly engaging forums like this one, show how excited students are about the future and how eager they are for access to the technology that drives innovation. At Intel, our goal is to help them succeed through access to technology and experts, as well as guidance in bringing their projects to life and presenting their work to an audience.
Face It – A Hairstyle Recommendation App by Pallab Paul
Student Ambassador Pallab Paul of Rutgers University showcased his Early Innovation Project to the student audience. You can watch the technical research video for yourself here. Pallab and his team are working on an app that young men can use to find the right hairstyle, and even facial hair style, for their face shape.
Pallab and his friends figured they had about a 50/50 chance of getting a good haircut. That’s a problem: young men want to experiment with trendy hairstyles, but they also want to look good while doing it. So they set out to see whether they could solve the problem with deep learning.
They started by using computer vision to detect a person’s face, then worked to identify the shape of the face, since knowing the face shape helps people find a style that works for them. They combined this data with some personal information and preferences to come up with personalized recommendations for the user.
Throughout the process they’ve used Intel’s OpenCV library, Haar cascade classifiers, and Intel-optimized TensorFlow, Java, and Python on Intel’s Xeon Phi cluster to build a convolutional neural network (CNN) trained on their dataset to give accurate results.
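To make the face-detection step concrete, here is a minimal Python sketch using OpenCV’s bundled Haar cascade for frontal faces. The image path, cascade choice, and detection parameters are illustrative assumptions, not details taken from Pallab’s project.

```python
# Minimal sketch of Haar-cascade face detection with OpenCV.
# The image path and parameter values below are assumptions for illustration.
import cv2

# Load the frontal-face Haar cascade that ships with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

# Read a photo and convert it to grayscale, which the detector expects.
image = cv2.imread("portrait.jpg")          # hypothetical input file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces; each detection is an (x, y, width, height) bounding box.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # Crop the face region; a downstream CNN could classify its shape.
    face_crop = gray[y:y + h, x:x + w]
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected.jpg", image)
```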
They’ve run into a few roadblocks, the biggest being the accuracy of their dataset; they are now working to increase its sample size to achieve better results. Their plans also include integrating the UI to turn it into a real, usable app. Further down the road they would like to add emotion detection: if the user smiles when a sample style is presented, the app would note in their preferences that they liked that style, and if they frown, the app would note that and not show them that particular style the next time.
Functional Connectivity of Epileptic Brains – Research Presented by Panuwat Janwattanapong
Student Ambassador Panuwat Janwattanapong of Florida International University presented his research, which focuses on applying deep learning and AI to data analysis. His lab works to assess people with disabilities and disorders, find ways to assist them, and advance research that helps doctors make better diagnoses. Watch his presentation here.
Panuwat’s research centers on epilepsy, which affects 1% of the total population across all age ranges. For 90% of those affected, the disorder negatively affects their quality of life due to its unpredictability, impacting their ability to drive, socialize, and work.
The benefit of the research is to help those with the disorder, as well as the people and family around them, have a better quality of life. They intend to do this by helping make faster diagnoses of the condition, classifying whether seizures are focalized or generalized, and attempting to predict seizures.
They are researching whether they can accomplish all of these goals by analyzing electroencephalogram (EEG) results. EEG is an electrophysiological monitoring method that can capture the neural activity of the brain. The benefits of using EEGs are that they are non-invasive, have high temporal resolution, are more affordable compared to other techniques, and have no side effects for the patient.
Their approach currently uses 19 sensors placed on the patient’s head, though that number could go as high as 300. By looking at the EEG in a more advanced way, they examine how the different parts of the brain interact with one another to find the functional connectivity. The resulting brain waves operate in four different frequency bands, allowing the researchers to create a connectivity matrix.
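As a rough illustration of how a connectivity matrix can be built, the sketch below band-pass filters each EEG channel into one frequency band and then measures how strongly every pair of channels co-varies. Pearson correlation is used purely as a stand-in connectivity measure, and the sampling rate, band edges, and synthetic data are assumptions rather than details from Panuwat’s study.

```python
# Hypothetical sketch: one connectivity matrix for one frequency band.
import numpy as np
from scipy.signal import butter, filtfilt

def band_connectivity(eeg, fs, low, high):
    """eeg: array of shape (n_channels, n_samples); fs: sampling rate in Hz."""
    # 4th-order Butterworth band-pass for the chosen frequency band.
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg, axis=1)
    # Channel-by-channel correlation yields an n_channels x n_channels matrix.
    return np.corrcoef(filtered)

# Example: 19 channels, 10 s of synthetic data at 256 Hz, alpha band (8-13 Hz).
fs = 256
eeg = np.random.randn(19, 10 * fs)           # stand-in for recorded EEG
alpha_connectivity = band_connectivity(eeg, fs, 8.0, 13.0)
print(alpha_connectivity.shape)              # (19, 19)
```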
Once they have created a matrix, they work to reduce the noise. Noise comes from many sources, including heartbeats, eye blinks, nearby electricity, and so on. One way they reduce it is to record an EKG alongside the EEG: the EKG provides a template of the heartbeats, which can then be easily removed from the EEG brain wave results.
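A simplified sketch of that template idea is shown below, assuming the R-peak sample indices have already been extracted from the EKG channel; the window length and the simple averaging-and-subtraction scheme are illustrative, not a description of Panuwat’s actual pipeline.

```python
# Illustrative heartbeat-artifact removal by template subtraction.
import numpy as np

def remove_heartbeat_artifact(eeg_channel, r_peaks, half_window):
    """eeg_channel: 1-D EEG signal; r_peaks: sample indices of EKG R-peaks."""
    cleaned = eeg_channel.copy()
    # Collect fixed-length epochs centered on each heartbeat.
    epochs = [eeg_channel[p - half_window:p + half_window]
              for p in r_peaks
              if p - half_window >= 0 and p + half_window <= len(eeg_channel)]
    template = np.mean(epochs, axis=0)       # average heartbeat artifact
    # Subtract the template around every beat.
    for p in r_peaks:
        if p - half_window >= 0 and p + half_window <= len(cleaned):
            cleaned[p - half_window:p + half_window] -= template
    return cleaned

# Example with synthetic data: 30 s at 256 Hz, one beat per second (assumed).
fs = 256
eeg_channel = np.random.randn(30 * fs)
r_peaks = np.arange(fs, 29 * fs, fs)
cleaned = remove_heartbeat_artifact(eeg_channel, r_peaks, half_window=fs // 4)
```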
Panuwat began his research using MATLAB but quickly had too much data for it to handle. Since joining the Intel Student Ambassador program he has additional resources, such as the Intel Distribution for Python, which has allowed him to increase performance by analyzing data faster and having more models to work with.
The next step Panuwat plans to take is to segment the data. His plan is to compute the connectivity matrix over three-second windows in order not to overestimate the connectivity of the brain; he can then visualize the connectivity in a head-map plot. Panuwat also wants to use Intel’s Xeon Phi cluster to further analyze the EEG and to plot the eigenvalues.
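The windowing step might look something like the sketch below, under the same assumptions as the earlier snippet: the recording is split into non-overlapping three-second segments, a connectivity estimate is computed per segment, and the eigenvalues of each matrix are collected for later analysis.

```python
# Rough sketch of three-second windowing with per-window eigenvalues.
import numpy as np

fs = 256                                     # assumed sampling rate in Hz
window = 3 * fs                              # three seconds, in samples
eeg = np.random.randn(19, 60 * fs)           # stand-in for one minute of EEG

eigenvalues_per_window = []
for start in range(0, eeg.shape[1] - window + 1, window):
    segment = eeg[:, start:start + window]
    conn = np.corrcoef(segment)              # 19 x 19 connectivity estimate
    # eigvalsh returns ascending eigenvalues; reverse for largest first.
    eigenvalues_per_window.append(np.linalg.eigvalsh(conn)[::-1])

print(len(eigenvalues_per_window), eigenvalues_per_window[0][:3])
```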
Sensor Fingerprint for Mobile Identity – A Proof of Concept by Srivignessh Pacham Sri Srinivasan
Student Ambassador Srivignessh Pacham Sri Srinivasan of Arizona State University described his AI research designed to uniquely identify mobile devices through noise in their sensors. Watch Srivignessh’s presentation here.
Srivignessh explained how your mobile device, by default, has many sensors inside it that can be monitored to create a unique pattern identifying your specific device. All mobile and wearable sensors have their own unique manufacturing defects, and Srivignessh has developed a way to learn these calibration characteristics and use them to identify devices, creating a fingerprint for each mobile device.
The easiest sensor to do this with is your mobile device’s accelerometer. The accelerometer is what rotates your screen when you turn your phone sideways, and by default it is turned on. A possible use case is for home routers: your network can identify your device through its sensor fingerprint and authenticate you without the need for a password, keeping your network safe from unauthenticated devices.
Accelerometers measure acceleration in the x, y, and z directions with linear calibration errors. This yields two calibration metrics per axis, providing six metrics to use. To analyze these metrics, the k-nearest neighbors (KNN) algorithm can be applied to map a sensor reading to the device. The algorithm uses the top three neighbors to classify the device and displays its cluster on a map.
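As an illustration of that classification step, the sketch below treats the six calibration metrics as a feature vector and fits scikit-learn’s k-nearest-neighbors classifier with three neighbors. The feature values, device labels, and library choice are made up for the example and are not taken from Srivignessh’s implementation.

```python
# Minimal sketch: map a six-value sensor fingerprint to a known device.
from sklearn.neighbors import KNeighborsClassifier

# Each row: six calibration metrics measured for one enrolled device (made up).
fingerprints = [
    [0.012, -0.034, 0.021, 1.002, 0.997, 1.004],   # device A
    [0.015, -0.031, 0.019, 1.001, 0.998, 1.005],   # device A
    [-0.044, 0.008, 0.037, 0.995, 1.003, 0.991],   # device B
    [-0.041, 0.006, 0.039, 0.996, 1.002, 0.990],   # device B
]
labels = ["device_A", "device_A", "device_B", "device_B"]

# Use the top three neighbors, matching the description above.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(fingerprints, labels)

# A fresh reading from an unknown phone.
new_reading = [[0.013, -0.033, 0.020, 1.002, 0.996, 1.003]]
print(knn.predict(new_reading))              # expected: ['device_A']
```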
Srivignessh is currently using JavaScript, Python, and the Django web framework on a cloud server. He plans to move the data to Intel’s Xeon Phi cluster soon for better performance. He also clarified that this process doesn’t parse any of your sensitive data; it only reads the sensor imperfections that come from how the device was manufactured.
He sees this project scaling for use in stadiums, airport lounges, and other shared spaces where there shouldn’t be a need for you to give credentials in order to identify your specific device trying to access the Wi-Fi network.
Intel® Student Ambassador Forum - Honolulu
Intel hosted its second Student Ambassador Forum in conjunction with the Computer Vision and Pattern Recognition (CVPR) conference in Hawaii. More than twelve universities and 300 registrants attended this event to collaborate on projects, discuss industry trends, and learn what to expect from the next generation of AI. Read more about what they are doing in the AI space.
Three Intel Student Ambassadors spoke about their use cases and applications of AI, all using a variety of Intel’s optimized frameworks and tools. Suraj Ravishankar of Arizona State University demonstrated his vehicle detection project, which could be extremely useful for self-driving cars as the model is trained to recognize birds, people, stop signs, and more. Nikhil Murthy of MIT shared his skip-thought vector project, trained on Intel’s Nervana Neon framework to auto-process new financial documents daily and highlight topics of interest to the company. And Arun Ramesh Srivasta of Arizona State University showcased his project, which captures the rotation angles of both arms via Myo armbands and uses them to reconstruct hand-joint rotations; 3D hand models are then animated to generate videos of gestures from the raw data, and a pre-trained CNN model classifies the gestures.
The highlight of the event, however, was the official launch of the Movidius Neural Compute Stick. This developer kit with an embedded deep learning platform was designed to further fuel development in AI. Needless to say, the Student Ambassadors and attendees were very interested in learning more about what this device could do. An important aspect of the Ambassador Program is access to experts, and Intel’s AI experts held a fireside chat with students focusing on AI today and the areas students may want to focus on when considering a career in this field.
Want to learn more about the Intel® Student Developer Program for Artificial Intelligence?
You can read about our program, get a program overview, and we also encourage you to check out Intel® Developer Mesh to learn more about the various projects that our student community is working on.
Interested in more information? Contact Niven Singh