Cluster Details

Cluster PI

  • Prof. Gretar Tryggvason
Description            Nodes   Cores/Node   Total Cores   Memory/Node
HP SL160z G6 servers   34      12           408           24GB

Processor Details

  • Dual Six-Core Intel Nehalem processors
  • Intel(R) Xeon(R) CPU X5650 @ 2.67GHz

Network Fabric

  • QDR InfiniBand - 50% oversubscription
  • 1Gb Ethernet

Grid Engine Details

  • Hosts: d6cneh084 - d6cneh117

Target Hostgroup

  • @tryggvason

Example Submission Script

#!/bin/bash
#$ -M netid@nd.edu         # Email address for job notification
#$ -m abe                  # Send mail when job begins, ends, or aborts
#$ -pe mpi-12 36           # Parallel environment and total core count (a multiple of 12)
#$ -q *@@tryggvason        # Specify queue
#$ -N jobname              # Specify job name

# Run application
mpiexec -n 36 ./app.exe
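Assuming the script above has been saved to a file (the name job.script below is only an example), it can be submitted and monitored with the standard Grid Engine commands:

```shell
# Submit the job script to Grid Engine (job.script is an example filename)
qsub job.script

# Check the status of your queued and running jobs (replace netid)
qstat -u netid
```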

Code Compilation and Performance

For optimum performance, the MVAPICH2 MPI implementation should be used when compiling your MPI codes to run over the InfiniBand network. Its compiler wrappers can be selected on CRC systems as follows:

module load mvapich2
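With the mvapich2 module loaded, codes are built with the standard MPI compiler wrappers. A minimal sketch, assuming a C source file app.c and a Fortran source file app.f90 (both names hypothetical):

```shell
# Compile an MPI C program with the MVAPICH2 C wrapper
mpicc -O2 -o app.exe app.c

# Fortran codes use the corresponding Fortran wrapper
mpif90 -O2 -o app.exe app.f90
```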


CRC also provides the MPICH2 and OpenMPI implementations for compiling MPI codes to run over the Ethernet network. Their compiler wrappers can be selected on CRC systems as follows (load one or the other):

module load mpich2
module load openmpi
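Since all three MPI stacks provide a wrapper named mpicc, it can be worth confirming which implementation the loaded module resolves to before compiling:

```shell
# Show the underlying compiler and flags the wrapper will use
mpicc -show      # MPICH2 and MVAPICH2
mpicc --showme   # OpenMPI

# Confirm which launcher is first on your PATH
which mpiexec
```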

Resource Idleness and Sharing

In the interest of maximizing available computing resources for CRC users, the CRC implements an overflow sharing policy for idle nodes on faculty-partnership clusters. If users are waiting in the general queue to run jobs while faculty cluster nodes sit idle, CRC staff will periodically allow the waiting jobs to migrate to those idle nodes to alleviate backlog and improve system utilization.

  • Migrated jobs can be cancelled at the request of the cluster PI via direct correspondence with CRC Support.
  • The cancellation procedure allows migrated jobs a maximum of four (4) hours to complete before they are killed by CRC staff.

Cluster Benchmarks

  • Not Currently Available