
CRC Blog


This page provides CRC news releases to the Notre Dame research community. We welcome your comments and questions on the discussion tab.

CRC NEWS - OCT 2009 - Nov Equipment Upgrades and EOL for Old Environment

1) 2009-2010 EQUIPMENT UPGRADES

The CRC is undertaking a multi-part equipment upgrade during the 2009-2010 school year. These upgrades were discussed at multiple open CRC user information sessions this past summer and are summarized here:

- High Throughput Computing Upgrade: replace 360 X2100 servers (Fall 2009)
- High Performance Computing Upgrade: multiple partner clusters (Fall 2009/Spring 2010)
- Core Network Upgrades: upgrades to core routers and 10GigE evaluations (Fall 2009)
- SMP Machine Acquisition: Spring 2010
- High Performance Parallel File System: Spring 2010/Summer 2010

Over the past couple of months, CRC staff have been evaluating, testing, and pricing systems from Sun, HP, IBM, Dell, SGI, and Penguin Computing to serve in a High Throughput Computing capacity. The new systems will replace the four-year-old Sun X2100 Opteron nodes.

After extensive evaluations it was decided to replace the Sun X2100s with Hewlett Packard (HP) DL165 G6 nodes. Each node contains dual AMD Model 2431 Istanbul (six-core) Opteron processors, providing a total of 12 cores per node. These HP systems offer 1 GB of RAM per core for a total of 12 GB, and a 160 GB SATA disk, which provides roughly 100 GB of local scratch space. The total core count for the dcopt upgrade will therefore grow from 720 to 4,320 cores (360 nodes x 12 cores each), with 4.3 TB of aggregate RAM. In addition, these systems offer some of the best power efficiency available (watts per core) and will help lower the cost of ownership in the upcoming years.

The CRC has aggressively negotiated bulk pricing on these servers. Faculty interested in purchasing identical machines on our purchase order should contact the CRC.


2) MIGRATION TO THE "NEW" CRC ENVIRONMENT

The CRC is actively moving all production systems from the old to the new software environment. This new environment features an upgrade to the Red Hat 5.4 operating system, the SGE 6.2 job queuing system, and the crc.nd.edu AFS cell for home directories with a 100 GB default quota.
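For users new to SGE, a minimal batch script for the new environment might look like the sketch below (the job name and script contents are placeholders; queue selection and defaults are site-specific, so check the CRC wiki for the authoritative options):

    #!/bin/bash
    #$ -N hello_crc    # job name (placeholder)
    #$ -cwd            # run the job from the submission directory
    #$ -j y            # merge stdout and stderr into a single output file
    echo "Running on $(hostname)"

Submit the script with "qsub hello_crc.sh" and check its status with "qstat -u $USER".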

The new environment was initially made available a year ago to selected beta users and more recently to general CRC users; with your help we have made many changes and updates to numerous applications.

The "old" environment at this time generally consists of the single CPU dual core Sun x2100 (dcopt) nodes. These machines will be replaced in November (see below).

Following the November equipment upgrade, fewer than 10 compute nodes will remain in the old environment for legacy applications.


3) HP SERVER INSTALLATION & FINAL MOVE TOWARD THE NEW ENVIRONMENT

The CRC "old" environment at this time generally consists of the Sun x2100 (dcopt) nodes.These nodes will be replaced with the new HP servers and configured for the new software environment.

Installation of the HP nodes is expected to begin in mid-November and be completed prior to Thanksgiving. As part of the upgrade, one full-scale outage will likely be planned. Users will be notified of the exact dates and times as they become available.

A few machines (~10) will remain in the "old" environment for those using legacy applications that will not be ported to the new environment. The remaining machines in the old environment will be patched only for security purposes, and all user support for this environment will reach end of life at the end of this calendar year.


GIS Listserv Created to Support Growing ND GIS Community


Scheduled Outage - Sunday May 4th

The Center for Research Computing announces a scheduled outage and upgrades on Sunday, May 4th. Union Station electrical upgrades will be performed to replace aging UPS equipment. In conjunction with this scheduled power outage, CRC staff will be coordinating multiple server software and network upgrades. As the outage involves primary power maintenance, all systems will be shut down. Users are advised to schedule job submissions accordingly. Please contact the CRC staff at crcsupport@nd.edu for further information.


New Large Memory Machines Available

Contributed by P. Brenner

The CRC announces the availability of 8 large memory machines for the CRC user community. Each machine has 32 GB of RAM and 4 compute cores. Information on accessing these machines can be found on the CRC wiki at the page listed below.

Submitting Large Memory Jobs
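
As an illustrative sketch (the resource request below is an assumption; consult the wiki page above for the authoritative syntax), a large memory job might be requested through SGE like this:

    #!/bin/bash
    #$ -N bigmem_example      # job name (placeholder)
    #$ -cwd                   # run from the submission directory
    #$ -l mem_free=24G        # request a node with at least 24 GB free; resource name assumed
    ./my_large_memory_app     # placeholder for your executable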


New CRC User Support Resources

Contributed by P. Brenner

For the Fall 2007 semester, two new CRC user support resources have been implemented. First, all email support routed through crcsupport@nd.edu is now stored in a searchable listserv archive. Second, a CRC wiki has been established with the goal of collaboratively delivering the most current CRC documentation to the user community.

The crcsupport@nd.edu email archive will be open for search and viewing by the Notre Dame community. This listserv is set up for archival purposes only; it requires no user subscription and sends no additional email. The archive will grow into a 24x7 searchable knowledge base to assist users in understanding and resolving issues related to the use of CRC resources. A link to the archive search page will be available on both the CRC webpage (crc.nd.edu) and the CRC wiki (crcmedia.hpcc.nd.edu/wiki).

The CRC wiki has been established to promote current, collaborative documentation of evolving CRC software and hardware resources. CRC users wishing to contribute documentation in their areas of expertise are welcome to do so. The CRC wiki can be accessed at crcmedia.hpcc.nd.edu/wiki. Requests for new or updated documentation can be logged directly on the wiki or sent to crcsupport@nd.edu.


User Group Meeting

Contributed by E. Bensman

The CRC will host a User Group meeting on Monday, September 17, 2007 at McKenna Hall. It will start with presentations by the CRC staff on the State of the CRC at 10:30 a.m., followed at noon by lunch and a panel discussion on current and future computational needs at Notre Dame. For more information and to reserve a box lunch, visit http://crc.nd.edu/registration/usergroup.shtml


Hardware Update: Cluster A

Contributed by: R. Sudlow

Last July the CRC staff upgraded the compute facilities located at Union Station. The 128 nodes of the Sun V60x computer cluster formerly known as "Cluster A" were removed from service. These V60x systems, known as xeon001-xeon128 in the queue structure, were just over three years old.

One rack (32 nodes) of the Xeon architecture was replaced with 36 Dell SC1435 dual dual-core servers. This rack was funded by Dr. Edward Maginn in Chemical and Biomolecular Engineering. His research group will have priority access to this system, although unused cycles will be available to CRC users. Three racks (96 nodes) of the Xeon architecture were replaced with 108 Sun X2200 dual dual-core servers.

The Dell and Sun systems are 1U (one rack unit) in height and have 2.6 GHz AMD Model 2218 Opteron processors. In addition, the new systems have a total of 8 GB of RAM per system, or 2 GB of RAM per processor core. This doubles the amount of RAM per core available on the old Xeon systems and gives users an SMP (symmetric multiprocessing) system with twice the processor cores (4) and four times the memory (8 GB). The Dell servers have an 80 GB 7200 RPM SATA hard drive; the Sun X2200 servers have a 146 GB 15,000 RPM SAS drive.

These new systems are labeled ddcopt001-ddcopt144. The name ddcopt is a mnemonic for dual dual-core Opteron (two dual-core processors per node).


CRC Core Switch

Contributed by: R. Sudlow

This spring the CRC purchased an Extreme Networks BlackDiamond 8810 switch. This switch acts as the hub of the CRC network and allows us to connect multiple stacks together using 10 Gb fiber interfaces, providing significant flexibility and capacity to support high performance networking within the CRC environment at Union Station. The BlackDiamond switch also allows direct 10 Gb fiber connections to a single host; currently two Sun X4500 "Thumper" storage servers are connected using this interface. Notre Dame was also a beta tester for Extreme's new CX4 copper card in the BlackDiamond. In the future these CX4 interfaces will provide 10 Gb copper connections at a significantly lower cost.


Networking Upgrades

Contributed by: R. Sudlow

In addition to the systems being replaced in our compute facilities this summer, the networking infrastructure for the new systems was upgraded with Extreme Networks Summit 450e stackable switches. Each system has two separate 1 Gb connections - one connection is a private network connection used for administrative purposes and the other is used for general public user traffic. The switches for the public interface are connected together using "stacks". The four switches in the racks of the new systems form a "stack" connected using dual 10 Gb rings - thus providing very high bandwidth (20 Gb) between the racks. The stack then connects using a 10 Gb fiber to our new core switch.


Module List: Bi-Annual Maintenance

Contributed by: JC Ducom

Users may have noticed that the Module List has received a thorough renovation. Many of the major packages are now at the latest vendor version. Some of the programs that have been removed from the list may still be available in the /opt/und directory; for example, the mathematical libraries FFTW, BLAS, and ACML are under the unified directory /opt/und/mathlib.
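
As an illustration (the exact subdirectory layout under /opt/und/mathlib is an assumption; check the directory tree on the cluster before using these paths), a program using FFTW could be compiled directly against these libraries:

    # hypothetical install layout under /opt/und/mathlib
    gcc -o fft_test fft_test.c \
        -I/opt/und/mathlib/fftw/include \
        -L/opt/und/mathlib/fftw/lib -lfftw3 -lm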

This Module List will undergo maintenance twice a year, during the winter and summer breaks. Our goal is to keep only two versions of each program in the list: the default version, which is loaded automatically at login, and the previous version, which is available via the module load command. If a vendor releases an update between maintenance cycles, the new version will be made available as a module and users will be notified via the CRC-Users listserv.
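
In day-to-day use this looks like the following (fftw is used only as an example because it appears above, and the version string is hypothetical; run module avail to see what is actually installed):

    module avail              # list all available modules
    module list               # show the modules currently loaded
    module load fftw          # load the default (latest) version
    module load fftw/3.1      # explicitly load a previous version (version string hypothetical)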

Users are strongly encouraged to test new versions and report any problems before they become the default.


Fall 2007 Training

Contributed by: E. Bensman

Building on your positive feedback from the user training classes this past spring, the CRC will offer a wider variety of courses with more hands-on training. Courses offered this fall include: CRC Basics, CRC "Hands-on", Grid Computing Basics, Grid "Hands-on" Training, Introduction to Unix/Linux, Selecting a Programming Language, Building and Submitting Scripts, and Code Optimization & Debugging.

Detailed course descriptions and schedules with class dates, times and locations are provided at http://crc.nd.edu/information/training.shtml

These non-credit training classes are all offered free of charge. Course attendees will receive a certificate of completion at the end of the class. The hands-on courses build upon the earlier basics classes and should be taken in sequence. Prerequisites and minimum requirements for course completion are stated in the class descriptions at the link provided above.


CRC Blog Archives