hpc_concepts
==== Message Passing Interface (MPI): The Concept ====
----
If you are simply looking for how to run an MPI application, you probably want to use a command line of the following form:

<code>
shell$ mpirun [ -np X ] [ --hostfile <filename> ] <program>
</code>

This will run X copies of <program> in your current run-time environment (if running under a supported resource manager, Open MPI's mpirun will usually automatically use the corresponding resource manager process starter, as opposed to, for example, rsh or ssh, which require the use of a hostfile, or will default to running all X copies on the localhost), scheduling (by default) in a round-robin fashion by CPU slot.
=== Installation ===
----
<code>
$ tar xfz openmpi-1.3.3.tar.gz
</code>
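Unpacking the release is only the first step; building then typically follows the standard GNU autotools sequence (the install prefix below is just an example — pick a path appropriate for your site):

<code bash>
$ cd openmpi-1.3.3
$ ./configure --prefix=/usr/local   # example prefix; adjust to taste
$ make all install
</code>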
HPC environments are often measured in terms of FLoating point OPerations per Second (FLOPS).
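As a rough illustration, a machine's //theoretical peak// FLOPS is the product of sockets, cores per socket, clock rate, and floating-point operations per cycle. All figures below are hypothetical examples, not measurements of any real system:

<code bash>
# Theoretical peak = sockets * cores/socket * clock (Hz) * FLOPs/cycle
sockets=2
cores_per_socket=8
clock_hz=2500000000       # 2.5 GHz (hypothetical)
flops_per_cycle=8         # e.g. 4-wide SIMD with fused multiply-add (hypothetical)
peak=$((sockets * cores_per_socket * clock_hz * flops_per_cycle))
echo "Peak: $peak FLOPS"  # 320000000000 = 320 GFLOPS for these figures
</code>

Sustained application performance is usually far below this peak, which is why benchmarks such as LINPACK report measured rather than theoretical FLOPS.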
==== Condor ====
----
http://
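Condor jobs are described in a submit description file rather than on the command line. A minimal sketch (file and program names here are invented for illustration):

<code>
# job.sub -- hypothetical Condor submit description file
executable = my_program
arguments  = input.dat
output     = job.out
error      = job.err
log        = job.log
queue
</code>

Submitted with ''condor_submit job.sub''; ''condor_q'' then shows the job in the queue.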
==== Sun Grid Engine (SGE) ====
----
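As a sketch, a simple SGE job script looks like the following; the job name is invented, and the parallel environment name is site-specific (ask your administrator which PEs exist):

<code bash>
#!/bin/bash
#$ -N hello_job        # job name (hypothetical)
#$ -cwd                # run from the current working directory
#$ -pe mpi 4           # request 4 slots in a PE named "mpi" (site-specific)
mpirun -np 4 ./hello
</code>

Submitted with ''qsub script.sh''; ''qstat'' shows its state in the queue.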
==== SLURM: A Highly Scalable Resource Manager ====
SLURM is an open-source resource manager designed for Linux clusters of all sizes. It provides three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (typically a parallel job) on a set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work. https://
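Those three functions are exercised through a batch script. A minimal sketch (job name, task count, and time limit are hypothetical values):

<code bash>
#!/bin/bash
#SBATCH --job-name=hello     # job name (hypothetical)
#SBATCH --ntasks=8           # number of tasks to allocate
#SBATCH --time=00:10:00      # wall-clock limit
srun ./hello                 # start the work on the allocated nodes
</code>

Submitted with ''sbatch script.sh''; ''squeue'' shows the queue of pending work that SLURM arbitrates.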
==== TORQUE Resource Manager ====
TORQUE is an open source resource manager providing control over batch jobs and distributed compute nodes. It is a community effort based on the original PBS project and, with more than 1,200 patches, has incorporated significant advances in the areas of scalability, fault tolerance, and feature extensions.
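TORQUE keeps the PBS-style job script format. A minimal sketch (job name, node counts, and time limit are hypothetical):

<code bash>
#!/bin/bash
#PBS -N hello                # job name (hypothetical)
#PBS -l nodes=2:ppn=4        # 2 nodes, 4 processors per node
#PBS -l walltime=00:10:00    # wall-clock limit
cd "$PBS_O_WORKDIR"          # jobs start in $HOME by default
mpirun ./hello
</code>

Submitted with ''qsub script.sh''; ''qstat'' lists queued and running jobs.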
==== Platform LSF ====
[[platform_lsf|LSF]] is implemented as a resource manager for the HPC together with SGE.
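As a sketch, an LSF job script uses ''#BSUB'' directives (the names and values below are invented for illustration):

<code bash>
#!/bin/bash
#BSUB -J hello               # job name (hypothetical)
#BSUB -n 4                   # number of job slots
#BSUB -o out.%J              # stdout file; %J expands to the job ID
#BSUB -e err.%J              # stderr file
mpirun ./hello
</code>

Submitted with ''bsub < script.sh'' (bsub reads the #BSUB directives from standard input); ''bjobs'' shows job status.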
hpc_concepts.1258456700.txt.gz · Last modified: (external edit)
