
Industry-proven
Create Strongly Scalable Parallel Applications
GPI-2 implements the Global Address Space Programming Interface (GASPI).

Features & Benefits
The API for Scalable Applications
Based on the GASPI specification.

Scalability & Flexibility
A lean, easy-to-learn interface allows for maximum flexibility and scalability.

One-sided Communication
One-sided, RDMA-driven communication is the default.
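A minimal sketch of what this looks like in the GASPI C API (segment id, size, offsets and ranks are illustrative choices; error checking is omitted for brevity):

/* Rank 0 writes 8 bytes from its segment into rank 1's segment,
 * without involving rank 1's CPU. */
#include <GASPI.h>
#include <stdlib.h>

int main (int argc, char *argv[])
{
  gaspi_proc_init (GASPI_BLOCK);

  gaspi_rank_t rank;
  gaspi_proc_rank (&rank);

  /* One globally accessible segment per rank: the local partition
   * of the global address space. */
  gaspi_segment_create (0, 1 << 20, GASPI_GROUP_ALL,
                        GASPI_BLOCK, GASPI_MEM_INITIALIZED);

  if (rank == 0)
    {
      gaspi_write (0, 0,           /* local segment and offset  */
                   1,              /* target rank               */
                   0, 0,           /* remote segment and offset */
                   8,              /* size in bytes             */
                   0, GASPI_BLOCK  /* queue and timeout         */);

      /* Local completion: the local buffer may be reused. */
      gaspi_wait (0, GASPI_BLOCK);
    }

  gaspi_barrier (GASPI_GROUP_ALL, GASPI_BLOCK);
  gaspi_proc_term (GASPI_BLOCK);
  return EXIT_SUCCESS;
}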

Asynchronous
An asynchronous communication model supplemented by remote completion.
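Continuing the sketch above, remote completion is exposed through notifications. In this hypothetical fragment (ids and values are arbitrary), rank 0 couples its write with a notification and rank 1 waits for it:

if (rank == 0)
  {
    /* Payload and notification travel on the same queue; the
     * notification becomes visible only after the data. */
    gaspi_write_notify (0, 0, 1, 0, 0, 8,
                        0 /* notification id */,
                        1 /* notification value */,
                        0, GASPI_BLOCK);
    gaspi_wait (0, GASPI_BLOCK);
  }
else if (rank == 1)
  {
    gaspi_notification_id_t id;
    gaspi_notification_t val;

    /* Wait for notification 0, then reset it atomically. */
    gaspi_notify_waitsome (0, 0, 1, &id, GASPI_BLOCK);
    gaspi_notify_reset (0, id, &val);

    /* The 8 bytes written by rank 0 are now readable in segment 0. */
  }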

Thread-safe
All primitives are thread-safe, allowing efficient use of all cores and enabling hybrid implementations.

Portability
Support for x86, ARM, RISC-V and PowerPC as well as all major HPC interconnects.

Fault tolerant
Non-local operations feature timeouts, so an application can detect and react to failures.
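As a sketch, any potentially remote operation can be given a timeout in milliseconds instead of GASPI_BLOCK, so the application regains control when a partner becomes unresponsive (the 1000 ms value is an arbitrary example):

gaspi_return_t ret;

do
  {
    /* Return after at most 1000 ms instead of blocking forever. */
    ret = gaspi_wait (0, 1000);

    if (ret == GASPI_TIMEOUT)
      {
        /* Still pending: check peer health, log, or start recovery. */
      }
  }
while (ret == GASPI_TIMEOUT);
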
Performance
Performance, Performance, Performance
GPI-2 aims at high performance across different applications and domains.

Seismic Processing
Excellent Scalability
As the size of data keeps increasing, applications need to adapt and support the largest possible infrastructures.
Linear Solvers & CFD
Best in-class Performance
GPI-2 is used in many different domains with exemplary performance.


Visualization
Advanced Solutions
The PGAS-like model enables innovative approaches to algorithms.
Latest News & Articles
Keep up to date with our latest news
Frequently Asked Questions
Got Questions?
Get in touch if you have further questions.
What is GASPI?
GASPI is a Partitioned Global Address Space (PGAS) API. It aims at scalable, flexible and fault-tolerant computing in massively parallel environments. GASPI targets a paradigm shift from bulk-synchronous two-sided communication patterns towards an asynchronous communication and execution model. To that end, GASPI leverages one-sided RDMA-driven communication with remote completion in a Partitioned Global Address Space.
Is GASPI ready for industrial applications?
The GASPI specification originates from Fraunhofer ITWM’s PGAS API, GPI, whose development started in 2005. GPI offers an efficient, robust and scalable programming model which has been used in many of Fraunhofer ITWM’s industrial projects and in the meantime has completely replaced the use of MPI within the projects executed by the ITWM HPC Competence Centre.
How does GASPI differ from MPI?
Similar to MPI, GASPI is an API for parallel computing on distributed-memory architectures and relies on SPMD/MPMD execution. Unlike MPI, however, GASPI targets asynchronous data-flow implementations rather than bulk-synchronous message exchange. In contrast to MPI, GASPI allows for a highly flexible configuration of the required resources and features low-level support for fault-tolerant execution.
How does GASPI compare with other PGAS approaches?
In contrast to other efforts in the PGAS community, GASPI is neither a new language (like Chapel from Cray) nor an extension to a language (like Co-Array Fortran). Instead, very much in the spirit of MPI, it complements existing languages like C/C++ or Fortran with a PGAS API which enables the application to leverage the concept of the Partitioned Global Address Space. In contrast to, for example, OpenSHMEM or Global Arrays, GASPI is not limited to a single memory model, but provides configurable and yet globally accessible memory segments. GASPI is interoperable with MPI and allows for incremental porting of legacy applications.
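A brief sketch of such configurable segments (ids and sizes are illustrative choices):

/* Each rank can create several segments of application-chosen size. */
gaspi_segment_create (0, 1UL << 30, /* 1 GiB data segment    */
                      GASPI_GROUP_ALL, GASPI_BLOCK,
                      GASPI_MEM_UNINITIALIZED);
gaspi_segment_create (1, 1UL << 20, /* 1 MiB control segment */
                      GASPI_GROUP_ALL, GASPI_BLOCK,
                      GASPI_MEM_INITIALIZED);

/* Locally, a segment is ordinary memory behind a pointer ...     */
gaspi_pointer_t ptr;
gaspi_segment_ptr (0, &ptr);

/* ... remotely, it is addressed as (rank, segment id, offset).   */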
Do I need to port my MPI application completely to GASPI or is there an incremental way?
GASPI is interoperable with MPI so that porting an application can indeed be done step-by-step.
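A sketch of such a mixed setup, assuming GPI-2 was built with its MPI interoperability enabled: MPI is initialized first, GASPI on top, and both APIs are then usable side by side.

#include <mpi.h>
#include <GASPI.h>

int main (int argc, char *argv[])
{
  MPI_Init (&argc, &argv);
  gaspi_proc_init (GASPI_BLOCK);

  /* Legacy parts of the application keep using MPI ...            */
  int mpi_rank;
  MPI_Comm_rank (MPI_COMM_WORLD, &mpi_rank);

  /* ... while ported hot spots use GASPI one-sided communication. */
  gaspi_rank_t rank;
  gaspi_proc_rank (&rank);

  gaspi_proc_term (GASPI_BLOCK);
  MPI_Finalize ();
  return 0;
}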

Try It Free. It’s Open-Source
Just get started. Download and explore at your own pace.