Hernquist Group Cluster Resources

The cluster used by the Hernquist group is called Cannon. It is managed by FAS Research Computing (RC).

First, it is a good idea to familiarize yourself with the documentation provided by RC:

  • Quickstart Guide. Contains information about requesting an account, logging into the cluster, file transfer, and submitting simple jobs.
  • Running Jobs. Contains more detailed information about running jobs, e.g., batch jobs. It also has a section on the different partitions available; the partitions listed there are available to all cluster users. Members of the Hernquist group have access to additional partitions, outlined below.
  • Cluster Storage.
  • Transferring Files. More information on transferring files within the cluster and between the cluster and your local machine, including rsync and fpsync (see the example after this list).
  • VDI Apps. If you'd like to run, e.g., a Jupyter notebook on the cluster, you can do so very conveniently in a web browser. Other applications are also available.
  • Common Cluster Pitfalls.
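
As a concrete example of the login and file-transfer workflow covered in the Quickstart and Transferring Files guides, a session might look roughly like the following. This is only a sketch: abeane and the local/remote paths are placeholders, and you should check the Quickstart Guide for the current login hostname.

    # Log in to the cluster (replace abeane with your RC username).
    ssh abeane@login.rc.fas.harvard.edu

    # Copy a local directory to your lab scratch space with rsync.
    # -a preserves permissions and timestamps, -v and --progress show activity.
    rsync -av --progress ./my_project \
        abeane@login.rc.fas.harvard.edu:/n/holyscratch01/hernquist_lab/abeane/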

Group Partitions

In addition to the partitions available in the Running Jobs document, we have access to the following partitions.

Partition      Nodes  Cores/Node  CPU Core Type         Mem/Node (GB)  Time Limit  Max Jobs  Max Cores  MPI Suitable?  GPU Capable?
hernquist      10     48          Intel "Cascade Lake"  184            7 days      none      none       Yes            No
hernquist_ice  12     64          Intel "Ice Lake"      499            7 days      none      none       Yes            No
itc_cluster    24     48          Intel "Cascade Lake"  184            7 days      none      none       Yes            No
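
As a sketch of how these partitions are used, a minimal SLURM batch script targeting the hernquist partition might look like the following. The job name, resource requests, module names, and executable are all placeholders; adjust them to your own code.

    #!/bin/bash
    #SBATCH -J my_run                # job name (placeholder)
    #SBATCH -p hernquist             # one of the group partitions above
    #SBATCH -N 2                     # number of nodes
    #SBATCH --ntasks-per-node=48     # one MPI task per core on a Cascade Lake node
    #SBATCH --mem=180G               # per-node memory request, just under the 184 GB total
    #SBATCH -t 7-00:00               # wall time (the partition limit is 7 days)
    #SBATCH -o run_%j.out            # output file, %j expands to the job ID

    # Load whatever modules your code needs (names are placeholders), then run.
    module load gcc openmpi
    srun ./my_simulation param.txt

Submit the script with sbatch job.sh and monitor it with squeue -u $USER.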

Group Storage

There is a wide variety of storage options available. They are as follows (replace abeane with your username; see the snippet after this list for a convenient way to refer to these paths):

  • home: /n/homeN/abeane. N is a number from 01 to 15. Note that ~ is a shortcut for your home directory; you can find the exact path with echo $HOME (or by running pwd from within your home directory). Home directories have a hard limit of 100 GB and are backed up. They are suitable for scripts used to generate data, software installations, etc.
  • scratch: /n/holyscratch01/hernquist_lab/abeane. Scratch is the highest-performance network storage available, but it is not backed up, and files older than 90 days are deleted. The lab has a quota of 50 TB.
  • holystore01: /n/holystore01/LABS/hernquist_lab/Users/abeane. holystore01 is a larger file storage system. This system is suitable for long-term storage of large data products. It has a total capacity of 600 TB, but at the time of writing 587 TB is used. The total capacity of this storage is fixed and cannot be increased. Data is not backed up.
  • holylfs05: /n/holylfs05/LABS/hernquist_lab/Users/abeane. holylfs05 is mostly identical to holystore01. It has a total capacity of 300 TB, of which 222 TB is used at the time of writing. Unlike holystore01, our storage limit on holylfs05 can be increased in the future if needed. Data is not backed up.
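
Since these paths are long, one convenient (entirely optional) habit is to collect them in shell variables, e.g. in your ~/.bashrc. The variable names below are arbitrary and abeane is again a placeholder:

    # Convenience variables for the group storage areas.
    export SCRATCH=/n/holyscratch01/hernquist_lab/abeane
    export STORE=/n/holystore01/LABS/hernquist_lab/Users/abeane
    export LFS05=/n/holylfs05/LABS/hernquist_lab/Users/abeane

    # Then, for example:
    cd $SCRATCH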

More information can be found on the Cluster Storage page.

For holystore01 and holylfs05, if you do not have a directory in the Users directory, email RC (rchelp@fas.rc.edu) to have them create one for you. There are also Lab and Everyone directories on this storage. The idea is that files in Users are accessible only to you, files in Lab to members of the group, and files in Everyone to everyone on the cluster. I find this annoying, so I just make my Users directory group accessible.
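
If you want to do the same, something along the following lines opens a Users directory to the group. This is just a sketch; double-check the resulting permissions with ls -l.

    # Give the group read access plus execute (traverse) on directories.
    # The capital X adds execute only to directories and already-executable files.
    chmod -R g+rX /n/holystore01/LABS/hernquist_lab/Users/abeane
    chmod -R g+rX /n/holylfs05/LABS/hernquist_lab/Users/abeane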

There is also a local scratch directory of size 200-300 GB on each node, located at /scratch. It is the highest-performance storage available, but data on it does not persist beyond the job. It can be used for extremely I/O-intensive jobs, but any data generated there must be copied elsewhere before the job ends. It is also good practice to delete any files you create in /scratch before your job completes.
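
A typical pattern inside a batch job is to stage input to /scratch, run the I/O-heavy step there, copy results back to network storage, and clean up. Roughly (program and file names are placeholders, and $SCRATCH is the lab scratch variable from the snippet above):

    # Create a job-specific working directory on fast local scratch.
    WORKDIR=/scratch/$USER/$SLURM_JOB_ID
    mkdir -p $WORKDIR
    cp $SCRATCH/input_data.hdf5 $WORKDIR/

    # Run the I/O-intensive step against local scratch.
    ./my_analysis $WORKDIR/input_data.hdf5 $WORKDIR/output.hdf5

    # Copy results back to network storage and tidy up before the job ends.
    cp $WORKDIR/output.hdf5 $SCRATCH/
    rm -rf $WORKDIR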

Tape storage is in principle available from RC. This is suitable for data that needs to be kept but does not need to be used in the near future. Our group has not used tape storage yet. If you have a large amount of data (>10 TB) that would be a good candidate for tape storage, please reach out.

Your personal usage on each storage system can be checked with standard df commands. To check group usage on holystore01 and holylfs05, you can use the command lfs quota -hg hernquist_lab /n/holylfs05.
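
For concreteness, the usage checks look like this (the same lfs quota command works for both lab filesystems):

    # Filesystem-level usage of your home directory and lab scratch.
    df -h ~
    df -h /n/holyscratch01/hernquist_lab

    # Group usage and quota on the lab filesystems.
    lfs quota -hg hernquist_lab /n/holystore01
    lfs quota -hg hernquist_lab /n/holylfs05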