The Modern Code Developer Challenge is now adding projects on Artificial Intelligence (AI) and the Internet of Things (IoT)!
Five projects, five highly talented student researchers, nine weeks at CERN between July and September
Interactive support from an open community of developers, scientists, students, people passionate about science
Winner gets an all-expenses-paid trip to highlight their project at the upcoming Intel® HPC Developer Conference and the SC17 supercomputing conference in Denver, Colorado, USA
Your voice matters – most compelling project wins
Get access to Intel tools, trainings, and next-generation hardware
Starting in July!
As part of its ongoing support of the world-wide student developer community and advancement of science, Intel® Software has partnered with CERN through CERN openlab to sponsor the Intel® Modern Code Developer Challenge. The goal for Intel is to give budding developers the opportunity to use modern programming methods to improve code that helps move science forward. Take a look at the winners from the previous challenge here!
The Challenge will take place from July to October 2017, with the winners announced in November 2017 at the Intel® HPC Developer Conference.
Check back on this site soon for more information!
1) Smash-simulation software: Teaching algorithms to be faster at simulating particle-collision events
Physicists widely use a software toolkit called GEANT4 to simulate what will happen when a particular kind of particle hits a particular kind of material in a particle detector. In fact, this toolkit is so popular that it is also used by researchers in other fields who want to predict how particles will interact with other matter: it’s used to assess radiation hazards in space, for commercial air travel, in medical imaging, and even to optimise scanning systems for cargo security.
An international team, led by researchers at CERN, is now working to develop a new version of this simulation toolkit, called GeantV. This work is supported by a CERN openlab project with Intel on code modernisation. GeantV will improve physics accuracy and boost performance on modern computing architectures.
The team behind GeantV is currently implementing a ‘deep-learning’ tool that will be used to make simulation faster. The goal of this project is to write a flexible mini-application that supports efforts to train the deep neural network on distributed computing systems.
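The project brief does not name a specific framework, but the core pattern of data-parallel training is easy to sketch: each worker computes gradients on its own share of the simulated events, the gradients are averaged, and every copy of the model is updated in lockstep. The following minimal Python/NumPy sketch illustrates that pattern only; the event data, the linear stand-in for a neural network, and the worker count are all hypothetical.

# Minimal sketch of synchronous data-parallel training (hypothetical data and model).
# Each "worker" holds a shard of simulated events; gradients are averaged each step,
# mimicking the all-reduce pattern used on distributed computing systems.
import numpy as np

rng = np.random.default_rng(0)
n_workers, n_events, n_features = 4, 1000, 8

# Hypothetical training set: event features and a target value per event.
X = rng.normal(size=(n_events, n_features))
y = X @ rng.normal(size=n_features) + 0.1 * rng.normal(size=n_events)
shards = np.array_split(np.arange(n_events), n_workers)

w = np.zeros(n_features)          # shared model parameters (a stand-in for network weights)
lr = 0.1

for step in range(100):
    grads = []
    for shard in shards:          # in a real deployment each shard lives on a different node
        Xs, ys = X[shard], y[shard]
        residual = Xs @ w - ys
        grads.append(Xs.T @ residual / len(shard))
    w -= lr * np.mean(grads, axis=0)   # "all-reduce": average gradients, update everywhere

print("training loss:", float(np.mean((X @ w - y) ** 2)))

In a real distributed setting, the gradient-averaging step would be performed over the network rather than in a local loop, which is exactly where a flexible mini-application helps in testing different systems and configurations.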
2) Connecting the dots: Using machine learning to better identify the particles produced by collision events
The particle detectors at CERN are like cathedral-sized 3D digital cameras, capable of recording hundreds of millions of collision events per second. The detectors consist of multiple ‘layers’ of detecting equipment, designed to recognise different types of charged particles produced by the collisions at the heart of the detector. As the charged particles fly outwards through the various layers of the detector, they leave traces, or ‘hits’.
Tracking is the art of connecting the hits to recreate trajectories, thus helping researchers to identify the particles and understand more about them. The algorithms used to reconstruct collision events by identifying which dots belong to which charged particles can be very computationally expensive. And, with the rate of particle collisions in the LHC set to be increased further over the coming decade, it’s important to be able to identify particle tracks as efficiently as possible.
Many track-finding algorithms start by building ‘track seeds’: groups of two or three hits that are potentially compatible with one another. Compatibility between hits can also be inferred from what are known as ‘hit shapes’. These are akin to footprints; the shape of a hit depends on the energy released in the layer, the crossing angle of the hit at the detector, and on the type of particle.
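As a rough picture of what seed building involves, the toy sketch below pairs hits from two adjacent detector layers when their azimuthal angles are close enough to be compatible with a track coming from the collision point. The layer radii, angular tolerance, and hit coordinates are invented for the example and ignore effects such as track curvature in the magnetic field.

# Toy illustration of building two-hit 'track seeds' (all numbers hypothetical).
import math, random

random.seed(1)
TOLERANCE = 0.05  # maximum allowed difference in azimuthal angle (radians)

# Fake hits on two adjacent cylindrical layers: (layer radius, azimuthal angle phi).
layer1 = [(30.0, random.uniform(-math.pi, math.pi)) for _ in range(20)]
layer2 = [(60.0, random.uniform(-math.pi, math.pi)) for _ in range(20)]

def compatible(hit_a, hit_b, tol=TOLERANCE):
    """Two hits are seed candidates if their phi angles roughly line up."""
    dphi = abs(hit_a[1] - hit_b[1])
    return min(dphi, 2 * math.pi - dphi) < tol

seeds = [(a, b) for a in layer1 for b in layer2 if compatible(a, b)]
print(f"built {len(seeds)} candidate track seeds from {len(layer1) * len(layer2)} hit pairs")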
This project investigates the use of machine-learning techniques to help recognise these hit shapes more efficiently. The project will explore the use of state-of-the-art many-core architectures, such as the Intel Xeon Phi processor, for this work.
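The project leaves the choice of machine-learning method open. As one possible starting point, the sketch below trains a small scikit-learn classifier on synthetic ‘hit shape’ feature vectors (deposited energy, crossing angle, cluster width); the features, labels, and model settings are all hypothetical stand-ins rather than real detector data.

# Toy hit-shape classifier on synthetic features (energy, crossing angle, cluster width).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000

# Hypothetical two-class problem, e.g. 'pion-like' vs 'electron-like' hit shapes.
labels = rng.integers(0, 2, size=n)
energy = rng.normal(1.0 + 0.5 * labels, 0.3, size=n)
angle  = rng.normal(0.2 + 0.1 * labels, 0.05, size=n)
width  = rng.normal(3.0 + 1.0 * labels, 0.8, size=n)
X = np.column_stack([energy, angle, width])

X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", round(clf.score(X_test, y_test), 3))

The interesting part of the real project is making inference like this fast enough to run inside the reconstruction pipeline, which is where many-core architectures such as the Intel Xeon Phi processor come in.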
3) Cells in the cloud: Running biological simulations more efficiently with cloud computing
BioDynaMo is one of CERN openlab’s knowledge-sharing projects. It is part of CERN openlab’s collaboration with Intel on code modernisation, working on methods to ensure that scientific software makes full use of the computing potential offered by today’s cutting-edge hardware technologies.
It is a joint effort between CERN, Newcastle University, Innopolis University, and Kazan Federal University to design and build a scalable and flexible platform for rapid simulation of biological tissue development.
The project focuses initially on the area of brain tissue simulation, drawing inspiration from existing, but low-performance software frameworks. By using the code to simulate the development of the normal and diseased brain, neuroscientists hope to be able to learn more about the causes of — and identify potential treatments for — disorders such as epilepsy and schizophrenia.
In late 2015 and early 2016, algorithms originally written in Java were ported to C++. Once the porting was completed, work was carried out to optimise the code for modern computer processors and co-processors. In order to address ambitious research questions, however, more computational power will be needed. Work will therefore be undertaken to adapt the code to run on high-performance computing resources over the cloud. This project focuses on adding network support to the single-node simulator and prototyping the management of computation across many nodes.
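The production code is in C++, but the basic pattern behind running such a simulation across nodes can be sketched abstractly: partition the simulated volume, let each node advance its own cells, and exchange the cells near the partition boundaries at every step. The deliberately simplified, single-process Python sketch below illustrates that pattern; the domain split, cell model, and exchange rule are all invented for the example.

# Simplified sketch of domain decomposition for a cell simulation (all details hypothetical).
# Each 'node' owns a slab of the volume along x; cells near the boundary are mirrored to
# the neighbour each step, standing in for the network exchange between real machines.
import random

random.seed(0)
HALO = 1.0                      # width of the boundary region shared with the neighbour
SPLIT_X = 50.0                  # node 0 owns x < 50, node 1 owns x >= 50

cells = [{"x": random.uniform(0, 100), "grown": 0.0} for _ in range(200)]
nodes = {0: [c for c in cells if c["x"] < SPLIT_X],
         1: [c for c in cells if c["x"] >= SPLIT_X]}

def step(local_cells, halo_cells):
    """Advance one node's cells; halo_cells are read-only copies from the neighbour."""
    neighbourhood = local_cells + halo_cells
    for c in local_cells:
        # Toy growth rule: growth depends on how crowded the neighbourhood is.
        crowding = sum(1 for other in neighbourhood if abs(other["x"] - c["x"]) < 2.0)
        c["grown"] += 1.0 / crowding

for _ in range(10):
    # "Network exchange": each node sends copies of its cells lying within HALO of the boundary.
    halo_for_1 = [dict(c) for c in nodes[0] if c["x"] > SPLIT_X - HALO]
    halo_for_0 = [dict(c) for c in nodes[1] if c["x"] < SPLIT_X + HALO]
    step(nodes[0], halo_for_0)
    step(nodes[1], halo_for_1)

print("total simulated growth:", round(sum(c["grown"] for c in cells), 2))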
4) Disaster relief: Helping computers to get better at recognising objects in satellite maps created by a UN agency
UNOSAT is part of the United Nations Institute for Training and Research (UNITAR). It provides a rapid front-line service to turn satellite imagery into information that can aid disaster-response teams. By delivering imagery analysis and satellite solutions to relief and development organizations — both within and outside the UN system — UNOSAT helps to make a difference in critical areas such as humanitarian relief, human security, and development planning.
Since 2001, UNOSAT has been based at CERN and is supported by CERN's IT Department in the work it does. This partnership means UNOSAT can benefit from CERN's IT infrastructure whenever the situation requires, enabling the UN to be at the forefront of satellite-analysis technology. Specialists in geographic information systems and in the analysis of satellite data, supported by IT engineers and policy experts, ensure a dedicated service to the international humanitarian and development communities 24 hours a day, seven days a week.
CERN openlab and UNOSAT are currently exploring new approaches to image analysis and automated feature recognition to ease the task of identifying different classes of objects from satellite maps. This project evaluates available machine-learning-based feature-extraction algorithms. It also investigates the potential for optimising these algorithms for running on state-of-the-art many-core architectures, such as the Intel Xeon Phi processor.
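The exact algorithms under evaluation are not listed here, but a common baseline for automated feature recognition in satellite imagery is to slide a window over the image and classify each patch. The sketch below shows that pattern on a synthetic image with a trivial brightness-threshold ‘classifier’; in the real project this stand-in would be replaced by a trained machine-learning model, and the image size, window size, and threshold are hypothetical.

# Sliding-window feature detection on a synthetic 'satellite image' (all values hypothetical).
import numpy as np

rng = np.random.default_rng(7)
image = rng.random((256, 256))          # stand-in for a grayscale satellite tile
image[100:130, 60:90] += 0.8            # a bright rectangular 'structure' to detect

WINDOW, STRIDE, THRESHOLD = 16, 8, 0.9  # hypothetical parameters

def looks_like_structure(patch):
    """Trivial stand-in for a trained classifier: flag unusually bright patches."""
    return patch.mean() > THRESHOLD

detections = []
for row in range(0, image.shape[0] - WINDOW + 1, STRIDE):
    for col in range(0, image.shape[1] - WINDOW + 1, STRIDE):
        patch = image[row:row + WINDOW, col:col + WINDOW]
        if looks_like_structure(patch):
            detections.append((row, col))

print(f"{len(detections)} candidate windows flagged for analyst review")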
5) IoT at the LHC: Integrating ‘internet-of-things’ devices into the control systems for the Large Hadron Collider
The Large Hadron Collider (LHC) accelerates particles to over 99.9999% of the speed of light. It is the most complex machine ever built, relying on a wide range of industrial control systems for proper functioning.
This project will focus on integrating modern ‘system-on-a-chip’ devices into the LHC control systems. The new embedded ‘systems-on-a-chip’ available on the market are sufficiently powerful to run fully fledged operating systems and complex algorithms. Such devices can also easily be enriched with a wide range of sensors and communication controllers.
The ‘systems-on-a-chip’ devices will be integrated into the LHC control systems in line with the ‘internet of things’ (IoT) paradigm, meaning they will be able to communicate via an overlaying cloud-computing service. It should also be possible to perform simple analyses on the devices themselves, such as filtering, pre-processing, conditioning, monitoring, etc. By exploiting the IoT devices’ processing power in this manner, the goal is to reduce the network load within the entire control infrastructure and ensure that applications are not disrupted in case of limited or intermittent network connectivity.
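As a concrete picture of what processing on the device itself can mean, the sketch below shows an edge-side loop that reads a sensor, reports only readings that change significantly, and buffers unsent readings while the network is down. The sensor, thresholds, and uplink function are hypothetical stand-ins and not part of any actual LHC control system.

# Toy edge-device loop: filter readings locally and tolerate network outages.
# The sensor, threshold, and uplink are hypothetical stand-ins.
import random

random.seed(3)
THRESHOLD = 0.5          # only report changes larger than this
buffer, last_sent = [], None

def read_sensor(t):
    """Pretend sensor: slow drift plus noise."""
    return 20.0 + 0.01 * t + random.gauss(0, 0.2)

def network_up(t):
    """Pretend connectivity: the link drops out periodically."""
    return (t // 50) % 2 == 0

def send(readings):
    print(f"uplink: sending {len(readings)} reading(s), latest = {readings[-1]:.2f}")

for t in range(300):
    value = read_sensor(t)
    # Local filtering: drop readings that barely differ from the last reported one.
    if last_sent is None or abs(value - last_sent) > THRESHOLD:
        buffer.append(value)
        last_sent = value
    # Store-and-forward: flush the buffer only when the network is available.
    if buffer and network_up(t):
        send(buffer)
        buffer = []

Filtering and buffering at the edge in this way is what reduces the network load on the wider control infrastructure and keeps applications running through limited or intermittent connectivity.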