Intel® and MobileODT* Competition on Kaggle*: 4th Place Winner


Kaggle* Master Luis Andre Dutra e Silva Develops Two AI Solutions to Improve the Precision and Accuracy of Cervical Cancer Screening

Editor's note: This is one in a series of case studies showcasing finalists in the Kaggle* Competition sponsored by Intel® and MobileODT*. The goal of this competition was to use artificial intelligence to improve the precision and accuracy of cervical cancer screening.

Abstract

More than 1,000 participants from over 800 data scientist teams developed algorithms to accurately identify a woman's cervix type based on images as part of the Intel and MobileODT* Competition on Kaggle*. Such identification can help prevent ineffectual treatments and allow health care providers to offer proper referrals for cases requiring more advanced treatment.

This case study details the process used by fourth-place winner Luis Andre Dutra e Silva – including his innovative use of Intel® technology-based tools – to develop an algorithm that improves the process of cervical cancer screening. To do so, he developed two solutions and then narrowed them down to one for submission.

Kaggle Competitions: Data Scientists Solve Real-world Problems Using Machine Learning

The goal of Kaggle competitions is to challenge and incentivize data scientists globally to create machine-learning solutions for real-world problems in a wide range of industries and disciplines. In this particular competition – sponsored by Intel and MobileODT, developer of mobile diagnostic tools – more than 1,000 participants from over 800 data scientist teams each developed algorithms to correctly classify cervix types based on cervical images.

In the screening process for cervical cancer, some patients require further testing while others don't. Because this decision is so critical, an algorithm-aided determination can significantly improve the quality and efficiency of cervical cancer screening for these patients. The challenge for each team was to develop the most efficient deep learning model for that purpose.

A Kaggle Master Competitor Rises to the Challenge of Cervical Cancer Screening

A veteran of multiple Kaggle competitions, Luis Andre Dutra e Silva was drawn to this challenge by its noble purpose, "and the possibility to explore new technologies that could be used further in other fields," he said. A federal auditor in the Brazilian Court of Audit, Silva plans to use AI knowledge in multiple applications in his job.

Two Approaches to Code Optimization

As a solo entrant, Silva determined from the beginning to try two different approaches and later verify which was best.

Solution 1 was based on the paper "Supervised Learning of Semantics-Preserving Hash via Deep Convolutional Neural Networks" (deep CNNs) by Huei-Fang Yang, Kevin Lin and Chu-Song Chen, dated February 14, 2017. It promoted the concept of deriving binary hash codes representing the most prominent features of an image by training a deep CNN. The network proposed by the authors is called Supervised Semantics-Preserving Deep Hashing (SSDH). After training the SSDH network on each type of cervix, the semantic hashes would be the input of a gradient boosting machine model using XGBoost as the classifier (a minimal sketch of this idea follows Figure 1). The following image from the paper explains the inner workings of the proposed neural network architecture:


Figure 1. Supervised Semantics-Preserving Deep Hashing. The hash function is constructed as a latent layer with K units between the deep layers and the output. (Huei-Fang Yang, Kevin Lin, Chu-Song Chen. "Supervised Learning of Semantics-Preserving Hash via Deep Convolutional Neural Networks." IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2017, 1-15; K. Lin, H.-F. Yang, J.-H. Hsiao, C.-S. Chen. "Deep Learning of Binary Hash Codes for Fast Image Retrieval." CVPR Workshop on Deep Learning in Computer Vision, DeepVision, June 2015.)
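The core idea behind SSDH can be summarized in a few lines: the sigmoid activations of the K-unit latent layer are thresholded to produce a binary semantic hash for each image. The following is a minimal sketch of that binarization step, not Silva's code; the array shapes and the 0.5 threshold are assumptions for illustration.

```python
# Minimal sketch of the SSDH hashing idea: threshold the K-unit latent
# layer's sigmoid activations at 0.5 to obtain a binary semantic hash.
import numpy as np

def binarize_latent_layer(latent_activations):
    """latent_activations: (n_images, K) array of sigmoid outputs in [0, 1]."""
    return (latent_activations >= 0.5).astype(np.uint8)

# Hypothetical example: 4 images, K = 8 latent units.
activations = np.random.rand(4, 8)
hashes = binarize_latent_layer(activations)
print(hashes)  # each row is an 8-bit semantic hash code
```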

Solution 2 was based on training a U-Net that would be capable of generating bounding boxes for each of the three types of cervix and, finally, making an ensemble of four classification models based on the automatically generated bounding boxes of the competition's test set. It was based on the article "U-Net: Convolutional Networks for Biomedical Image Segmentation" by Olaf Ronneberger, Philipp Fischer and Thomas Brox, dated May 18, 2015 (a minimal sketch of such an encoder-decoder follows Figure 2).


Figure 2. Illustration from the article "U-Net: Convolutional Networks for Biomedical Image Segmentation." (University of Freiburg, Germany, 2015)
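To make the architecture concrete, here is a minimal Keras sketch of a U-Net-style encoder-decoder – two levels deep and far shallower than the published network – only to illustrate the skip-connection pattern the paper describes. The input shape, filter counts, and layer arrangement are assumptions, not Silva's model.

```python
# Minimal U-Net-style encoder-decoder sketch in Keras (illustrative only).
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D, concatenate

def build_small_unet(input_shape=(128, 128, 3)):
    inputs = Input(input_shape)

    # Encoder: convolution followed by downsampling.
    c1 = Conv2D(16, 3, activation='relu', padding='same')(inputs)
    p1 = MaxPooling2D()(c1)
    c2 = Conv2D(32, 3, activation='relu', padding='same')(p1)

    # Decoder: upsampling with a skip connection back to the encoder.
    u1 = UpSampling2D()(c2)
    u1 = concatenate([u1, c1])
    c3 = Conv2D(16, 3, activation='relu', padding='same')(u1)

    # One-channel sigmoid mask marking the cervix region.
    outputs = Conv2D(1, 1, activation='sigmoid')(c3)
    return Model(inputs, outputs)

model = build_small_unet()
model.compile(optimizer='adam', loss='binary_crossentropy')
```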

The Right Choice of Hardware and Software Tools Puts Silva in the Money

Silva adopted an open approach toward choosing different software configurations and hardware devices. But it was his commitment to working with Intel® technology that earned him a fourth-place finish in this Kaggle competition for "Best Use of Intel Tools," an honor that carried a $20,000 prize.

After checking out all available tools optimized for Intel® architecture, he recompiled them on an Intel® Xeon Phi™ coprocessor-based workstation. "Since I have been a hardware enthusiast for a long time now, I have two excellent Intel® workstations at home," he said. Both were built from scratch with Intel® Xeon® processors.

Equipped with the necessary hardware, Silva evaluated various software alternatives to determine their suitability for each of the two solutions.

A Step-by-step Process

Silva's first step was to obtain an SSDH model from GitHub*. The SSDH neural network is represented by this graph:


Figure 3. A representation of the SSDH neural network. Image obtained from the NVIDIA DIGITS* open source tool.

His next task was to compile Berkeley Vision and Learning Center (BVLC) Caffe*, with the Intel® Math Kernel Library (Intel® MKL) as the Basic Linear Algebra Subprograms (BLAS) library and the NVIDIA Collective Communications Library (NCCL) for inter-GPU communication. He then trained the SSDH network using the four GPUs; training time was approximately nine to ten hours. Silva created a pycaffe script to extract the semantic hashes from each image of the training set and then used the extracted hashes to train an XGBoost model to learn each type of cervix.
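The following is a hedged sketch of that pipeline, not Silva's actual script. It assumes a deployed SSDH prototxt and caffemodel, a latent layer named "latent_layer" (a hypothetical name), images already preprocessed to the network's input shape, and training images and labels supplied by the reader.

```python
# Sketch: extract SSDH semantic hashes with pycaffe, then train XGBoost on them.
import caffe
import numpy as np
import xgboost as xgb

# Hypothetical file names for the trained SSDH network.
net = caffe.Net('ssdh_deploy.prototxt', 'ssdh_trained.caffemodel', caffe.TEST)

def extract_hash(image_blob):
    """Forward one preprocessed image and binarize the latent-layer activations."""
    net.blobs['data'].data[...] = image_blob
    net.forward()
    activations = net.blobs['latent_layer'].data[0]  # hypothetical layer name
    return (activations >= 0.5).astype(np.uint8)

# train_images / train_labels are placeholders supplied by the reader:
# X holds one hash per training image, y holds cervix-type labels (0, 1, 2).
X = np.vstack([extract_hash(img) for img in train_images])
y = np.array(train_labels)

clf = xgb.XGBClassifier(objective='multi:softprob', n_estimators=300)
clf.fit(X, y)                      # learn cervix type from semantic hashes
probs = clf.predict_proba(X[:5])   # per-class probabilities for submission
```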

Only after this point did Silva submit his results to the Kaggle competition; they were good enough to show the approach was feasible. With that feedback from the competition, he started his second solution.

Silva started by training the U-Net with bounding boxes of each cervix type from the training set. He used forward passes of the trained Keras U-Net to obtain the test-set bounding boxes. Using only the training-set regions of interest (ROIs) for the cervix types, Silva trained and ensembled four Caffe models to classify each test ROI as one of the three types of cervix.
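One way to turn a U-Net prediction into a bounding box is to threshold the predicted mask and take the extent of the activated pixels. The sketch below illustrates that step under assumptions: the 0.5 threshold, the mask shape, and the `model` from the earlier U-Net sketch are all hypothetical, not Silva's exact procedure.

```python
# Sketch: derive a bounding box from a U-Net mask prediction.
import numpy as np

def mask_to_bbox(mask, threshold=0.5):
    """mask: (H, W) array of sigmoid outputs; returns (x_min, y_min, x_max, y_max) or None."""
    ys, xs = np.where(mask >= threshold)
    if len(xs) == 0:
        return None  # no cervix region detected
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Hypothetical usage on one preprocessed test image of shape (1, 128, 128, 3):
# pred = model.predict(test_image)[0, :, :, 0]
# bbox = mask_to_bbox(pred)
# roi  = test_image[0, bbox[1]:bbox[3], bbox[0]:bbox[2], :]
```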

Training the Networks

Training for each of the classification models (two GoogLeNet and two AlexNet) in the second solution was done using Caffe with the Intel® Deep Learning SDK. "I chose the Intel DL SDK because it had image augmentation on the fly and it was a good test for the tools I had used so far," he said. "Using the Intel Deep Learning SDK, the training time was comparable to GPU and certainly the Intel® software tools and hardware must have a fundamental role for that performance."
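A common way to ensemble such classifiers is to average their per-class probabilities for each test ROI. The sketch below shows that averaging scheme as an assumption; the source does not state exactly how Silva combined the four models, and the numbers are invented for illustration.

```python
# Sketch: ensemble four classifiers by averaging their class probabilities.
import numpy as np

def ensemble_predictions(model_probs):
    """model_probs: list of (n_samples, 3) arrays, one per model; returns the averaged probabilities."""
    return np.mean(np.stack(model_probs, axis=0), axis=0)

# Hypothetical example with four models and two test ROIs:
p1 = np.array([[0.7, 0.2, 0.1], [0.3, 0.4, 0.3]])
p2 = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
p3 = np.array([[0.8, 0.1, 0.1], [0.3, 0.3, 0.4]])
p4 = np.array([[0.5, 0.4, 0.1], [0.4, 0.4, 0.2]])
print(ensemble_predictions([p1, p2, p3, p4]))  # final Type 1/2/3 probabilities
```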

Overcoming the Lack of Medical Training

Silva's greatest obstacle, understandably, was his limited medical background – he was incapable of distinguishing between the three cervix types just by observing them. "If I had that knowledge, I could make some preprocessing in the images in order to make each type more evident to the classifiers," he said. "But since it is not possible to have that knowledge in a short period of time, I trusted my cross-validation algorithm in order to be sure I was on the right path."
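Relying on cross-validation in place of domain knowledge typically means holding out stratified folds of the training set and scoring each fold with the competition metric. The sketch below shows one such setup; the feature matrix `X`, labels `y`, the XGBoost settings, and the fold count are all assumptions rather than Silva's actual validation code.

```python
# Sketch: stratified k-fold cross-validation scored with multi-class log loss.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import log_loss
import xgboost as xgb

def cross_validate(X, y, n_splits=5):
    """X: (n_samples, n_features) numpy array; y: integer labels 0/1/2."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=42)
    losses = []
    for train_idx, val_idx in skf.split(X, y):
        clf = xgb.XGBClassifier(objective='multi:softprob', n_estimators=300)
        clf.fit(X[train_idx], y[train_idx])
        probs = clf.predict_proba(X[val_idx])
        losses.append(log_loss(y[val_idx], probs, labels=[0, 1, 2]))
    return float(np.mean(losses))  # lower is better, mirroring the leaderboard metric
```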

Results and Key Findings: What Set This Approach Apart

The first solution required thorough knowledge of how to build and install Caffe, because the customized version of that framework was more than a year old. Nevertheless, the SSDH model proved quite efficient at producing semantic hashes that represented each type of image.

The second solution was inspired by real-world medical imaging tools for deep learning, and it demonstrated that the U-Net architecture was indeed a strong fit for the problem.

After 25,000 iterations, Silva was able to achieve a plateau of 81% accuracy with the SSDH model. Other results and findings from both of his models are detailed in the charts below. The charts were generated with TensorBoard, the Intel® Deep Learning Training Tool, and Microsoft Excel*.


Figure 4. Illustration of results and key findings. Graph A shows the accuracy of the SSDH model, which was trained with the full image dataset; after iteration 25,000 it plateaus at approximately 81%. Graph B depicts the Caffe* SSDH model loss during 10 hours of training. Graph C shows the Keras U-Net validation Dice coefficient; although it is very unstable, it shows a slight upward trend. Graph D shows the Keras U-Net training Dice coefficient, indicating a stable upward trend on the training set.



Figure 5. Illustration of results and key findings. Panel A shows Intel® Deep Learning SDK and Caffe* AlexNet model 1 with augmentation and light fine-tuning. Panel B shows Intel® Deep Learning SDK and Caffe* GoogLeNet model 2 with augmentation and original weights.

Learn More About Intel Initiatives in AI

Intel commends the AI developers who contributed their time and talent to help improve diagnosis and treatment for this life-threatening disease. Committed to helping scale AI solutions through the developer community, Intel makes AI training and tools broadly accessible through Intel® AI Academy.

Take part as AI drives the next big wave of computing, delivering solutions that create, use and analyze the massive amounts of data that are generated every minute.

Sign up with Intel AI Academy to get the latest tools, optimized frameworks, and training for artificial intelligence, machine learning, and deep learning.

