===== MPI =====

OpenMPI is one implementation of the Message Passing Interface (MPI) standard; MPICH2 is another. Various applications benefit from being parallelized, for example:

  * [[mpi:mpiblast|mpiBLAST]]
  * clustalw-mpi

See the homepage for more information:

===== The Concept =====

The MPI interface provides essential virtual topology, synchronization, and communication functionality between a set of processes (mapped to nodes, servers, or computer instances) in a language-independent way, with language-specific syntax (bindings), plus a few language-specific features. MPI programs always work with processes, though programmers commonly refer to the processes as processors. Typically, for maximum performance, each CPU (or each core in a multicore machine) is assigned a single process. This assignment happens at runtime through the agent that starts the MPI program, normally called **//mpirun//** or **//mpiexec//**.

HPC environments are often measured in terms of FLoating point OPerations per Second (FLOPS).
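The process model described above can be sketched with a minimal C program; this is an illustrative sketch, not code from this wiki, and it assumes an MPI toolchain (OpenMPI or MPICH2) is installed so that ''mpicc'' and ''mpirun'' are available:

```c
/* Minimal MPI sketch: each process reports its rank.
 * Compile with: mpicc hello_mpi.c -o hello_mpi
 * Launch with:  mpirun -np 4 ./hello_mpi
 * (requires an MPI implementation such as OpenMPI or MPICH2) */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut down the runtime cleanly */
    return 0;
}
```

With ''mpirun -np 4'', the launcher starts four copies of the program, each seeing a different rank; this rank is what a real application uses to divide up its work.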
- +
-**Condor** +
- +
----- +
-Machines sit idle for long periods of time, often while their users are busy doing other things**Condor takes this wasted computation time and puts it to good use**. The situation today matches that of yesterday, with the addition of clusters in the list of resources. These machines are often dedicated to tasks. Condor manages a cluster's effort efficiently, as well as handling other resources+
mpi.1274537972.txt.gz · Last modified: 2011/01/23 10:31 (external edit)