NDCMS Storage Resources

Users have access to a variety of storage locations. The key is knowing the tradeoffs of each so you can pick the right resource for the task at hand. If you don't have a directory in any of the spaces listed below, ask for help at ndt3-list@nd.edu.

Home Area:

Each user has a home directory under /afs/crc.nd.edu/user with 100 GB of personal disk space that is backed up nightly. This is where you should keep software that you're developing, papers that you're writing, your thesis draft, etc. Basically, use this for anything that you would be very sad to have to recreate if it were accidentally deleted or lost to a hardware failure.
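
If you want to see how much of that 100 GB you are using, the AFS quota command described in the AFS Space section below works on any AFS path, including your home directory; a minimal example:

    # Show the quota and current usage for your AFS home area
    fs lq ~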

Scratch Space:

Users also get 500 GB of non-backed-up space in /scratch365/<username>, plus non-backed-up space for small files in /store/smallfiles. There is no quota on /store/smallfiles, but the total space is 80 TB and must be shared by all users. In general, this storage is useful for temporary files of intermediate size. If you need reasonable access performance for many jobs reading or writing the same files (e.g. you're going to run more than ~100 jobs in the batch system), don't use /store/smallfiles, as its performance degrades severely. In that case, use either /scratch365 or /hadoop/store/user (see below).
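
To keep an eye on your usage, ordinary POSIX tools work on both areas; a minimal sketch (with <username> as a placeholder for your login):

    # How much of your 500 GB scratch allocation you are using
    du -sh /scratch365/<username>
    # How full the shared small-files area is overall
    df -h /store/smallfiles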

Accessing the scratch space from grid jobs

Grid jobs running at ND (e.g. via CMS Connect) need to prepend /cms to the /scratch365 and /store/smallfiles paths, i.e. /cms/scratch365/<username> and /cms/store/smallfiles. This prefix is required by the CMS Global Pool conventions for singularity-enabled sites, which expect non-Hadoop storage spaces to be visible this way; both forms point to the same storage location.
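
For example, the same scratch directory is visible under both paths (a sketch, with <username> as a placeholder):

    # From an interactive login node:
    ls /scratch365/<username>
    # From inside a grid job running at ND:
    ls /cms/scratch365/<username>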

Hadoop Space:

Hadoop is mounted at /hadoop/store/user. The Hadoop file system (HDFS) is a different sort of file system than most. Hadoop breaks your data up into blocks of ~128 MB and scatters two copies of each block across multiple physical disks. It does this for two reasons: the replication makes the system more resilient against hardware failures, and it also provides better performance when many different jobs are reading or writing to the system. Like /store/smallfiles, Hadoop doesn't have per-user quotas, and there is a lot of space available (at the time of this writing, 644 TB of raw space, but remember that every TB you store takes up ~2 TB because of replication). Hadoop is also the file system that is accessible with CMS/grid tools like gfal and XRootD. You should use Hadoop whenever you have very large datasets, when you need to access your data using gfal or XRootD, or when you will be accessing your data with many parallel jobs (anything more than ~100). There are some caveats:

  • Hadoop doesn't handle very small files well. If you write large numbers of files with sizes on the order of MB, don't use Hadoop. For files in that size range, use /store/smallfiles or /scratch365 instead.
  • Hadoop doesn't provide POSIX access directly, which means that normally you can't use commands like ls or cp. We use something called FUSE to provide POSIX access to Hadoop, but FUSE can break if you try to read too much data too quickly. So, when running many batch jobs, it's better to access the data directly using HDFS commands, or to use a tool like XRootD or Lobster that does this for you. If you're running jobs on data in /hadoop and they are getting stuck or hitting Input/Output errors, you've probably crashed FUSE on some of the nodes. If this happens, ask for help at ndt3-list@nd.edu.
  • ROOT cannot write directly into /hadoop/store/user. If your job is producing ROOT output, write it first to local disk (every worker node has local disk for this purpose) and then, at the end of the job, copy the output to /hadoop/store/user (possibly using gfal to avoid problems with FUSE; see the sketch after this list). Again, if you have questions, ask on ndt3-list@nd.edu.
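
As a rough sketch of these FUSE-avoiding access patterns (the <username>, file names, and <...-door> hostnames below are placeholders, not the actual ND endpoints; ask on ndt3-list@nd.edu for the current gfal/XRootD doors):

    # List your files through HDFS directly, bypassing FUSE;
    # note that inside HDFS the path omits the leading /hadoop
    hadoop fs -ls /store/user/<username>

    # Read a file over XRootD instead of through the FUSE mount
    xrdcp root://<xrootd-door>//store/user/<username>/input.root .

    # At the end of a batch job, copy ROOT output from the worker's local
    # disk into Hadoop with gfal rather than cp through FUSE
    gfal-copy file://$PWD/output.root gsiftp://<gridftp-door>/hadoop/store/user/<username>/output.root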

Lobster Working Storage

Lobster doesn't do well working out of your AFS home directory. When you run Lobster jobs, you should tell Lobster to make your working area in /tmpscratch/users/<username>. Space is limited and there are no user quotas, so monitor carefully and clean up old files. We reserve the right to clean out this space if someone is using too much and not playing nice with others!
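
A minimal sketch of setting that up and keeping your usage in check (with <username> as a placeholder):

    # Create your Lobster working area on the shared volume
    mkdir -p /tmpscratch/users/<username>
    # Check how full the shared volume is, and how much of it is yours
    df -h /tmpscratch
    du -sh /tmpscratch/users/<username>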

AFS Space

This space is really for things that you have nowhere else to put and don't want to delete yet, but also don't plan to access frequently or with many jobs (e.g. old log files you want to keep for reference). There are 18 data volumes accessible by all members of the group. Each volume has 2 TB of AFS space and is backed up nightly. In principle, each member of the group can use all of the space; in practice, try to keep your files current and share with the other members of the group as needs arise. Note that this space is not accessible from grid jobs.

The volumes are:

    /afs/crc.nd.edu/group/NDCMS/data01
    /afs/crc.nd.edu/group/NDCMS/data02
    /afs/crc.nd.edu/group/NDCMS/data03
    /afs/crc.nd.edu/group/NDCMS/data04
    /afs/crc.nd.edu/group/NDCMS/data05
    /afs/crc.nd.edu/group/NDCMS/data06
    /afs/crc.nd.edu/group/NDCMS/data07
    /afs/crc.nd.edu/group/NDCMS/data08
    /afs/crc.nd.edu/group/NDCMS/data09
    /afs/crc.nd.edu/group/NDCMS/data10
    /afs/crc.nd.edu/group/NDCMS/data11
    /afs/crc.nd.edu/group/NDCMS/data12
    /afs/crc.nd.edu/group/NDCMS/data13
    /afs/crc.nd.edu/group/NDCMS/data14
    /afs/crc.nd.edu/group/NDCMS/data15
    /afs/crc.nd.edu/group/NDCMS/data16
    /afs/crc.nd.edu/group/NDCMS/data17
    /afs/crc.nd.edu/group/NDCMS/data18

To check the quota and current usage on a volume, use "fs lq" (short for "fs listquota").
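
For instance, to check one of the group data volumes (usage is reported in 1 KB blocks):

    fs lq /afs/crc.nd.edu/group/NDCMS/data01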


Storage space for PhEDEx

All CMS data are stored using the /store naming convention, so we only need to map:

    /+store/(.*)

Translation rules for PFN to LFN (Physical File Name to Logical File Name):

    /hadoop/store ==> /store
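
As a concrete example of this rule (the file name is hypothetical), a logical file name used by CMS tools maps to its physical location in Hadoop like this:

    LFN:  /store/user/<username>/myfile.root
    PFN:  /hadoop/store/user/<username>/myfile.root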

To see how much space is available in the Hadoop /store area, you can type the following from earth:

    hadoop fs -df -h /store