Parallelism Without the Pain
Intel® Concurrent Collections for C++ simplifies parallelism and at the same time lets you exploit the full parallel potential of your application.
The following downloads are available under the What If Pre-Release License Agreement.
| Package | Downloads |
|---|---|
| Including required TBB bits (choose one of these if in doubt) | Linux* 64bit, Windows* 64bit, Windows* 32bit |
| Without TBB bits (requires existing TBB >= 4.1 Update 1) | Windows* 64bit, Windows* 32bit |
The Idea
Traditional approaches to parallelism make the programmer express the parallelism explicitly. This makes achieving parallelism unnecessarily hard and often ineffective. With Intel® Concurrent Collections for C++ the programmer does not think about what should run in parallel; instead he/she specifies the semantic dependencies of the algorithm and thereby defines only the ordering constraints: Concurrent Collections (CnC) lets the programmer define what cannot go in parallel.

The model allows the programmer to specify high-level computational steps, including their inputs and outputs, but not when or where they should be executed. The when and where are handled by the runtime and/or an optional tuning layer. Code within the computational steps is written using standard serial constructs of the C++ language. Data is either local to a computational step or explicitly produced and consumed by steps. An application in this programming model supports multiple styles of parallelism (e.g., data, task, pipeline parallel).

While the interface between the computational steps and the runtime system remains unchanged, a wide range of runtime systems may target different architectures (e.g., shared memory, distributed) or support different scheduling methodologies (e.g., static or dynamic). Intel® Concurrent Collections for C++ provides a parallel runtime system for shared and distributed memory. Our goal in supporting a strict separation of concerns between the specification of the application and the optimization of its execution on a specific architecture is to ease the transition to parallel architectures for programmers who are not parallelism experts. For performance results achieved with Intel® Concurrent Collections for C++, please read here.
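To give a flavor of the model, here is a minimal sketch loosely modeled on the Fibonacci sample shipped with the product: a tag collection prescribes a step collection, and the step reads and writes items explicitly while containing only serial C++ code. Header names, constructor arguments, and exact signatures are assumptions and may differ slightly between releases; see the Step-by-Step API Tutorial for the authoritative version.

```cpp
#include <iostream>
#include <cnc/cnc.h>

struct fib_context;

// A computational step: plain serial C++; data is read and written explicitly.
struct fib_step
{
    int execute( const int & tag, fib_context & ctx ) const;
};

struct fib_context : public CnC::context< fib_context >
{
    CnC::step_collection< fib_step >       steps;  // the computation
    CnC::item_collection< int, long long > fibs;   // data produced/consumed by steps
    CnC::tag_collection< int >             tags;   // control: which step instances exist

    fib_context()
        : steps( *this, "fib_step" ),
          fibs( *this, "fib_items" ),
          tags( *this, "fib_tags" )
    {
        tags.prescribes( steps, *this );  // each tag that is put spawns one step instance
    }
};

int fib_step::execute( const int & tag, fib_context & ctx ) const
{
    if( tag < 2 ) {
        ctx.fibs.put( tag, tag );
    } else {
        long long a, b;
        ctx.fibs.get( tag - 1, a );  // the runtime re-schedules the step if data is not yet available
        ctx.fibs.get( tag - 2, b );
        ctx.fibs.put( tag, a + b );
    }
    return CnC::CNC_Success;
}

int main()
{
    const int n = 20;
    fib_context ctx;
    for( int i = 0; i <= n; ++i ) ctx.tags.put( i );  // the environment puts control tags
    ctx.wait();                                       // wait until all steps have completed
    long long result;
    ctx.fibs.get( n, result );
    std::cout << "fib(" << n << ") = " << result << std::endl;
    return 0;
}
```

Note how nothing in this code says when or where a step runs; the runtime decides, and a tuner (see below) can influence that decision without touching the step itself.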
What's new in version 0.9?
- New step/thread affinity control: use a step-tuner to assign affinity of steps to threads (see the sketch after this list)
- Added thread-pinning: pin threads to CPU-cores
- New cancellation feature for step-tuners: cancel individual steps (in flight or yet to come), all steps, or custom cancellation sets
- Added tuning capabilities for CnC::parallel_for: switch on/off dependency checking, priority, affinity, depends and preschedule
- New support for Intel® Xeon Phi™ (MIC): native and mixed CPU/MIC
- Improved instrumentation hooks for ITAC: Using collection names as given in collection-constructors
- Cleaner/simpler hashing and equality definition for custom tags
- New samples (UTS, NQueens (with cancellation), parsec/dedup, Floyd-Warshall) and improved samples
- Closed memory leak on distributed memory
- Other bug fixes etc.
- Added support for Visual Studio* 2012, dropped support for Visual Studio* 2005 (Windows* only)
- Require TBB 4.1 (was 4.0)
- Switched to gcc 4.3 ABI (Linux* only)
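As an illustration of the tuning hooks mentioned above, a tuner is attached to a step collection as an extra template parameter and keeps scheduling hints separate from the algorithm. This is a hedged sketch: the member names and signatures shown (affinity, priority) are assumptions based on the step_tuner interface and should be checked against the API documentation.

```cpp
#include <cnc/cnc.h>

struct my_context;

struct my_step
{
    int execute( const int & tag, my_context & ctx ) const;
};

// Scheduling hints live in a tuner, separate from the step's algorithm.
struct my_tuner : public CnC::step_tuner<>
{
    // Assign step instances to threads, e.g. round-robin over 4 threads.
    int affinity( const int & tag, my_context & ) const { return tag % 4; }

    // Give later tags higher scheduling priority.
    int priority( const int & tag, my_context & ) const { return tag; }
};

struct my_context : public CnC::context< my_context >
{
    CnC::step_collection< my_step, my_tuner > steps;  // tuner attached here
    CnC::item_collection< int, double >       items;
    CnC::tag_collection< int >                tags;

    my_context() : steps( *this ), items( *this ), tags( *this )
    {
        tags.prescribes( steps, *this );
    }
};

int my_step::execute( const int & tag, my_context & ctx ) const
{
    ctx.items.put( tag, tag * 2.0 );  // the step itself stays plain serial C++
    return CnC::CNC_Success;
}
```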
See also the Release Notes.
Documentation and Tutorials
- Getting Started (Installation, Requirements, using samples, etc.)
- CnC Tutorial (slides and code)
- Step-by-Step API Tutorial (HTML) (PDF)
- Methodology of Concurrent Collections: Eight fundamental patterns
- Frequently Asked Questions (FAQ)
- API documentation
- Release Notes
Papers, Presentations, Research
- The Concurrent Collections Programming Model
- Parallel Programming For Distributed Memory Without The Pain
- Performance Evaluation of Concurrent Collections on High-Performance Multicore Computing Systems
- Measuring the Overhead of Intel C++ CnC over Gauss-Jordan Elimination
- Segmentation for the Brain Connectome using a Multi-Scale Parallel Computing Architecture
- Cluster Computing using Intel Concurrent Collections
- Habanero Concurrent Collections Project at Rice University
Discussions, Problem Reports and Feedback
To stay in touch with the Intel® Concurrent Collections team and the community, we provide a mailing list you can subscribe to or just watch online:
http://tech.groups.yahoo.com/group/intel_concurrent_collections/.
Alternatively, to report a problem or leave feedback on this product, please visit the "Whatif Alpha Forum" to participate in forum discussions about Intel® Concurrent Collections:
http://software.intel.com/en-us/forums/intel-concurrent-collections-for-cc/