Dqcopt001-dqcopt064

Cluster Details

Cluster PI

  • Prof. Joannes Westerink
Description              Nodes   Cores/Node   Total Cores   Memory/Node
Sun Fire X2200 servers   64      8            512           8 GB

Processor Details

  • Dual Quad-core 2.2 GHz AMD Opteron processors

Network Fabric

  • DDR InfiniBand - 50% oversubscription

Grid Engine Details

Hostnames

  • dqcopt001 - dqcopt064

Target Hostgroup

  • @westerink_dqcopt

Example Submission Script

#!/bin/csh
 
#$ -M netid@nd.edu              # Email address for job notification
#$ -m abe                       # Send mail when job begins, ends and aborts
#$ -pe mpi-8 128                # Specify parallel environment and legal core size
#$ -q *@@westerink_dqcopt       # Specify queue
#$ -N jobname                   # Specify job name
 
# Run application
mpiexec -n 128 ./app.exe
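
To submit the script, save it to a file and pass it to qsub; the filename below is only a placeholder, and qstat is shown as one way to check job status:

qsub dqcopt.job
qstat -u netid              # list your queued and running jobs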

Code Compilation and Performance

InfiniBand

For optimum performance, MPI codes that will run over the InfiniBand network should be built with the MVAPICH2 MPI library and its compiler wrappers. It can be selected on CRC systems as follows:

module load mvapich2
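
Once the module is loaded, the standard MPI wrapper compilers (mpicc, mpicxx, mpif90) can be used to build your code. A minimal sketch, where the source and executable names are placeholders:

mpicc  -O2 -o app.exe app.c       # C source
mpif90 -O2 -o app.exe app.f90     # Fortran source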

Ethernet

CRC also provides the MPICH2 and Open MPI libraries for building MPI codes that will run over the Ethernet network. These can be selected on CRC systems as follows:

module load mpich2
module load openmpi
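
Each of these modules likewise provides the usual wrapper compilers; it is generally advisable to load only one MPI module at a time so that the wrappers and runtime come from the same installation. A minimal check and build, again with placeholder file names:

module load openmpi
which mpicc                       # confirm the wrapper comes from the loaded module
mpicc -O2 -o app.exe app.c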

Resource Idleness and Sharing

In the interest of maximizing available computing resources for CRC users, the CRC implements an overflow sharing policy for idle nodes on faculty-partnership clusters. If users are waiting in the general queue while faculty cluster nodes sit idle, CRC staff will periodically allow those waiting jobs to migrate to the idle nodes to alleviate the backlog and improve system utilization.

Note:

  • These migrated jobs can be cancelled at the request of the cluster PI via direct correspondence with CRC Support.
  • The cancellation procedure gives migrated jobs a maximum of four (4) hours to complete before they are killed by CRC staff.

Cluster Benchmarks

N/A