Disk Management

Storage Resources

Jobs that are configured to stage out to T3_US_NotreDame put their output files here:

 /hadoop/store
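
The /hadoop area is presented as an ordinary filesystem path, so you can browse the stage-out area with standard tools, for example:

 ls /hadoop/store
 ls /hadoop/store/user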

PhEDEx transfer jobs will create a directory (if necessary) corresponding to the name of the dataset. CRAB jobs will create a directory (if necessary) and write to the following path:

 /hadoop/store/user/{parent_dataset}/{publish_data_name}/{random_string}/

where "publish_data_name" was defined in your original crab.cfg file.

Deleting Obsolete Files

The directories created in the /store area are owned by "uscms01", and therefore cannot be directly modified by you. There is, however, a way to use your grid certificate to tell the system that you're allowed to edit the /store area. You will need to have run

 voms-proxy-init

beforehand and have a valid proxy certificate. Then you can run the following command on earth:

 globus-job-run earth.crc.nd.edu /bin/chmod -R o+w /hadoop/store/user/{username}

This will change the permissions on your user directory so that you can go in and delete any unwanted files or directories.
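
Putting the pieces together, a typical cleanup session might look like the following (the directory name OldDataset_v0 is only a placeholder; double-check any path before removing it):

 voms-proxy-init
 voms-proxy-info
 globus-job-run earth.crc.nd.edu /bin/chmod -R o+w /hadoop/store/user/{username}
 rm -rf /hadoop/store/user/{username}/OldDataset_v0

Running voms-proxy-info lets you confirm that the proxy was created and has not yet expired before you submit the globus-job-run request.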