D8copt039-d8copt086

Cluster Details

Cluster PI

  • Prof. Ed Maginn
Description                     Nodes   Cores/Node   Total Cores   Memory/Node
HP ProLiant SL165z G7 servers   48      16           768           24 GB

Processor Details

  • Dual Eight-core 2.3 GHz AMD Opteron processors

Network Fabric

  • 1Gb Ethernet

Grid Engine Details

Hostnames

  • d8copt039 - d8copt086

Target Hostgroup

  • @maginn_d8copt

Example Submission Script

#!/bin/csh
 
#$ -M netid@nd.edu          # Email address for job notification
#$ -m abe                   # Send mail when job begins, ends and aborts
#$ -pe mpi-16 48            # Specify parallel environment and core count (multiple of 16)
#$ -q *@@maginn_d8copt      # Specify queue
#$ -N jobname               # Specify job name
 
# Run application
mpiexec -n 48 ./app.exe
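
With the submission directives above saved to a file (the name job.script is illustrative), the job can be submitted to Grid Engine with qsub and monitored with qstat:

qsub job.script       # submit the job script to the scheduler
qstat -u netid        # list your pending and running jobs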

Code Compilation and Performance

Ethernet

CRC provides the MPICH2 and Open MPI implementations of MPI, each with compiler wrappers (e.g., mpicc, mpif90), for building MPI codes that run over the Ethernet network. Load one of these environments on CRC systems as follows:

module load mpich2      # to build against MPICH2
module load openmpi     # or, alternatively, against Open MPI
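
As a sketch (the source and executable names are illustrative), a C MPI code can then be compiled with the wrapper compiler supplied by whichever module is loaded, and launched with mpiexec as in the submission script above:

mpicc -O2 -o app.exe app.c    # the mpicc wrapper adds the MPI include and link flags to the underlying C compiler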

Resource Idleness and Sharing

In the interest of maximizing available computing resources for CRC users, the CRC implements an overflow sharing policy for idle nodes on faculty-partnership clusters. If users are waiting in the general queue while faculty cluster nodes sit idle, CRC staff will periodically allow the waiting jobs to migrate to those idle nodes, alleviating the backlog and improving system utilization.

Note:

  • These migrated jobs can be cancelled at the request of the cluster PI via direct correspondence with CRC Support
  • The cancellation procedure gives migrated jobs a maximum of four (4) hours to complete before they are killed by CRC staff

Cluster Benchmarks

  • None Currently Available