Typical HPC cluster configuration
The strategy behind High Performance Computing (HPC) is to "divide and conquer." By dividing a complex problem into smaller component tasks that can be worked on simultaneously, the problem can often be solved more quickly. This can help save time as well as monetary costs. A typical HPC system consists of one master node and multiple compute nodes connected via standard network interconnects. All of the nodes in a typical HPC cluster run an industry-standard operating system, which typically offers substantial cost savings over proprietary operating systems.
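To make the divide-and-conquer idea concrete, the sketch below splits a simple summation across processes: each process sums its own slice of the range, and the partial results are then combined into a final answer. It uses MPI, the de facto message-passing standard on clusters of this kind; MPI is not mentioned in the ILRI configuration itself, and all names in the sketch are illustrative.

```c
/* divide_sum.c -- minimal divide-and-conquer sketch using MPI.
 * Each rank sums its own slice of the range [0, N); the partial
 * sums are then combined on rank 0 with MPI_Reduce.
 * Build: mpicc divide_sum.c -o divide_sum
 * Run:   mpirun -np 4 ./divide_sum
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const long N = 1000000;              /* size of the overall problem */
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Divide: each rank takes a contiguous slice of the range;
     * the last rank also absorbs any remainder. */
    long chunk = N / size;
    long start = rank * chunk;
    long end   = (rank == size - 1) ? N : start + chunk;

    long partial = 0;
    for (long i = start; i < end; i++)
        partial += i;

    /* Conquer: combine the partial results on rank 0. */
    long total = 0;
    MPI_Reduce(&partial, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of 0..%ld = %ld\n", N - 1, total);

    MPI_Finalize();
    return 0;
}
```

When launched with, say, mpirun -np 4, the MPI runtime places the four processes on whatever cores or nodes are available, which is exactly the work distribution described above.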
The master node of the cluster acts as a server for the Network File System (NFS), handles job scheduling and security, and serves as a gateway for end users. The master node assigns each of the compute nodes one or more tasks to perform as the larger task is broken into subtasks. As a gateway, the master node allows users to gain access to the compute nodes.
The sole task of the compute nodes is to execute their assigned tasks in parallel. A compute node has no keyboard, mouse, video card, or monitor; access to the compute nodes is provided via remote connections through the master node.
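This division of roles is often mirrored in software as a master-worker pattern. The sketch below is again a generic MPI illustration rather than part of the ILRI setup: rank 0 plays the master, handing out task IDs on demand, while the remaining ranks act as workers that request work until a stop signal arrives.

```c
/* master_worker.c -- generic master-worker sketch using MPI.
 * Rank 0 acts as the master, distributing task IDs; the other
 * ranks act as workers, computing results until told to stop.
 * Build: mpicc master_worker.c -o master_worker
 * Run:   mpirun -np 4 ./master_worker
 */
#include <mpi.h>
#include <stdio.h>

#define NUM_TASKS 10
#define STOP      -1

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        int next = 0, result, active = 0;
        MPI_Status st;
        /* Seed each worker with one task (or a stop if none remain). */
        for (int w = 1; w < size; w++) {
            int msg = (next < NUM_TASKS) ? next++ : STOP;
            MPI_Send(&msg, 1, MPI_INT, w, 0, MPI_COMM_WORLD);
            if (msg != STOP) active++;
        }
        /* Collect results and hand out remaining tasks on demand. */
        while (active > 0) {
            MPI_Recv(&result, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, &st);
            printf("master: result %d from rank %d\n", result, st.MPI_SOURCE);
            int msg = (next < NUM_TASKS) ? next++ : STOP;
            MPI_Send(&msg, 1, MPI_INT, st.MPI_SOURCE, 0, MPI_COMM_WORLD);
            if (msg == STOP) active--;
        }
    } else {
        /* Worker: receive task IDs until the stop signal arrives. */
        int task;
        for (;;) {
            MPI_Recv(&task, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            if (task == STOP) break;
            int result = task * task;  /* stand-in for real work */
            MPI_Send(&result, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }
    }

    MPI_Finalize();
    return 0;
}
```

Handing out tasks on demand, rather than in fixed blocks, keeps faster workers busy, which is the same load-balancing role a real cluster scheduler performs across compute nodes.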
ILRI HPC Specifications
The ILRI HPC facility consists of a Dell PowerEdge R910 server with:
- 32 Intel Xeon X7560 processor cores (four 8-core CPUs)
- 128 GB of memory
- 8 TB of disk storage
The operating system is Rocks v5.4 (Maverick). The nodes are connected via Gigabit Ethernet to a 48-port GigE switch. Backups are handled by an Exabyte 221L tape library.
[Figure: ILRI HPC server]