Introducing task-containers as an alternative to runtime-stacking
Proceedings of the 23rd European MPI Users' Group Meeting, 2016•dl.acm.org
The advent of many-core architectures poses new challenges to the MPI programming model, which was designed for distributed-memory message passing. It is now clear that MPI will have to evolve in order to exploit shared-memory parallelism, either by collaborating with other programming models (MPI+X) or by introducing new shared-memory approaches. This paper considers extensions to C and C++ that make it possible for MPI processes to run inside threads. More generally, a thread-local storage (TLS) library is developed to simplify the collocation of arbitrary tasks and services in a shared-memory context called a task-container. The paper discusses how such containers simplify the mixing of models and services at the OS-process level, eventually easing the collocation of arbitrary tasks with MPI processes in a runtime-agnostic fashion and opening alternatives to runtime stacking.