Open MPI

    Operating system: Unix, Linux, macOS, FreeBSD[1]
    Platform: Cross-platform
    Type: Library
    License: New BSD License
    Website: www.open-mpi.org

    Open MPI is a Message Passing Interface (MPI) library project combining technologies and resources from several other projects (FT-MPI, LA-MPI, LAM/MPI, and PACX-MPI). It is used by many TOP500 supercomputers including Roadrunner, which was the world's fastest supercomputer from June 2008 to November 2009,[2] and K computer, the fastest supercomputer from June 2011 to June 2012.[3][4]

    Overview


    Open MPI represents the merger of three well-known MPI implementations:

    • FT-MPI from the University of Tennessee
    • LA-MPI from Los Alamos National Laboratory
    • LAM/MPI from Indiana University

    with contributions from the PACX-MPI team at the University of Stuttgart. These four institutions comprise the founding members of the Open MPI development team.

    The Open MPI developers selected these MPI implementations as excelling in one or more areas. Open MPI aims to use the best ideas and technologies from the individual projects and create one world-class open-source MPI implementation that excels in all areas. The Open MPI project specifies several top-level goals:

    • to create a free, open-source, peer-reviewed, production-quality, complete MPI-3.0 implementation
    • to provide extremely high, competitive performance (low latency or high bandwidth)
    • to involve the high-performance computing community directly with external development and feedback (vendors, third-party researchers, users, etc.)
    • to provide a stable platform for third-party research and commercial development
    • to help prevent the "forking problem" common to other MPI projects[5]
    • to support a wide variety of high-performance computing platforms and environments
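
    A minimal example helps make the scope concrete. The sketch below is a standard MPI "hello world" in C that relies only on calls defined by the MPI standard (MPI_Init, MPI_Comm_rank, MPI_Comm_size, MPI_Finalize), so any conforming implementation, including Open MPI, can build and run it; the file name and process count mentioned afterwards are arbitrary examples.

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv) {
            /* Start the MPI runtime. */
            MPI_Init(&argc, &argv);

            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
            MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

            printf("Hello from rank %d of %d\n", rank, size);

            /* Shut the runtime down cleanly. */
            MPI_Finalize();
            return 0;
        }

    With Open MPI such a program is typically compiled through the mpicc wrapper (mpicc hello.c -o hello) and launched with mpirun (mpirun -np 4 ./hello), which starts one process per requested rank.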

    Code modules


    The Open MPI code has three major code modules:

    • OMPI - MPI code
    • ORTE - the Open Run-Time Environment
    • OPAL - the Open Portable Access Layer

    Commercial implementations

    • Sun HPC Cluster Tools - beginning with version 7, Sun switched to Open MPI
    • Bullx MPI - in 2010, Bull announced the release of bullx MPI, based on Open MPI[6]

    Consortium

    Image: Memory hierarchy of a four-socket AMD Bulldozer server as detected by hwloc's lstopo tool

    Open MPI development is performed within a consortium of many industrial and academic partners. The consortium also hosts several other software projects, such as the hwloc (Hardware Locality) library, which discovers and models the hardware topology of parallel platforms.
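
    As a brief illustration (a minimal sketch, assuming the hwloc headers and library are installed, not code taken from the project's documentation), the following C program loads the topology of the machine it runs on and counts physical cores and hardware threads through hwloc's public API; the lstopo tool shown in the image above presents the same information graphically.

        #include <hwloc.h>
        #include <stdio.h>

        int main(void) {
            hwloc_topology_t topology;

            /* Discover the hardware topology of the current machine. */
            hwloc_topology_init(&topology);
            hwloc_topology_load(topology);

            /* Count physical cores and logical processing units (hardware threads). */
            int cores = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_CORE);
            int pus = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_PU);
            printf("Detected %d cores and %d processing units\n", cores, pus);

            hwloc_topology_destroy(topology);
            return 0;
        }

    Open MPI itself uses this kind of topology information, for example when binding MPI processes to cores or sockets.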


    References
