MARCC has two main file systems: a high-performance file system using Lustre and a ZFS file system. As much as possible, users should conduct all compute-intensive work on the Lustre file system. The ZFS file system should be used mainly to store large amounts of data as needed.

1 – Lustre. This is a parallel distributed file system used for data-intensive cluster computing. Lustre is scalable and is usually composed of many servers (metadata servers (MDS) and object storage servers (OSS)) serving possibly thousands of clients (compute nodes). Our Lustre file system has 2.1 petabytes of storage and very fast throughput.

Each PI receives a default allocation of 1 terabyte, which should be enough for most researchers. However, we do realize many researchers will need more space: an increase to 10 terabytes is available upon request from the PI, and allocations beyond 10 TB are also possible upon request, but on a temporary basis. There are two main directories in each Lustre allocation. “work” (a symlink to /scratch/group/PI-name) is a group directory where members of a research group can easily share files. The second directory is “scratch” (a symlink to /scratch/users/userid), a per-user directory; all data in it is private to the user. Please note that these Lustre allocations are for temporary data and are NOT backed up.
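
To see where these links point and roughly how much of the default allocation is in use, a short Python sketch is shown below. It is only a sketch under the assumptions in this section (the ~/scratch symlink layout and the 1 TB default quota); authoritative usage and quota figures come from Lustre's own accounting tools, and walking a large tree this way can be slow on a busy file system.

    import os
    from pathlib import Path

    DEFAULT_ALLOCATION_TB = 1  # default per-PI Lustre allocation (per this section)

    def tree_size_bytes(root: Path) -> int:
        """Sum apparent file sizes under root, skipping unreadable entries."""
        total = 0
        for dirpath, _dirnames, filenames in os.walk(root, onerror=lambda e: None):
            for name in filenames:
                try:
                    total += (Path(dirpath) / name).lstat().st_size
                except OSError:
                    pass  # file vanished or is unreadable; skip it
        return total

    scratch = Path.home() / "scratch"  # symlink to /scratch/users/userid
    used_tb = tree_size_bytes(scratch) / 1024**4
    print(f"{scratch} -> {scratch.resolve()}")
    print(f"approx. {used_tb:.3f} TB used of the {DEFAULT_ALLOCATION_TB} TB default allocation")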

2 – ZFS. This file system is considered low-performance. ZFS combines a file system with a logical volume manager and was originally designed by Sun Microsystems. It is usually mounted on the compute nodes via NFSv4.
2.1 $HOME directory. It has a quota of 20 GB and is meant for critical files such as software applications. It is backed up to a remote location, and HOME directories are private to each user.
2.2 “data” (a symlink to /data/PI-name). This is a group allocation with a default quota of 1 terabyte that group members can use to share data. The quota should be enough for most research groups, but it can be increased to up to 10 TB per request from the PI. This “data” file system is backed up to a remote location, which also serves as a disaster recovery option.
2.3 We do realize many groups may need to handle larger amounts of data. The PI can request a “work-zfs” allocation of up to 100 terabytes. This is a single copy of the data, with no automatic backups. If backups are required, there is a fee of $40.00 per terabyte per year (a brief cost sketch follows this list).
2.4 For PIs who need more than 100 terabytes, there are two options: the PI may purchase the additional space (beyond 100 TB) at $40.00 per TB per year, or they may request that the allocation be granted by the respective school.
2.5 Large allocations are available on a temporary basis and with approval from the school.
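
To make the pricing above concrete, here is a minimal Python sketch of the annual fees, assuming the $40.00 per TB per year rate and the 100 TB work-zfs cap quoted in items 2.3 and 2.4; the 150 TB figure is a hypothetical example, not a real allocation.

    RATE_PER_TB_YEAR = 40.00  # USD; backup fee and overage rate quoted above
    INCLUDED_TB = 100         # maximum work-zfs allocation without purchase

    def overage_fee(allocation_tb: float) -> float:
        """Annual fee for space purchased beyond the 100 TB work-zfs cap."""
        return max(0, allocation_tb - INCLUDED_TB) * RATE_PER_TB_YEAR

    def backup_fee(backed_up_tb: float) -> float:
        """Annual fee to back up a work-zfs allocation (no automatic backups)."""
        return backed_up_tb * RATE_PER_TB_YEAR

    # Hypothetical example: a 150 TB allocation, fully backed up.
    print(overage_fee(150))  # 50 TB beyond the cap -> 2000.0 USD/year
    print(backup_fee(150))   # backing up all 150 TB -> 6000.0 USD/year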

Every HOME directory has soft links to “data”, “scratch”, “work” and, if available, to “work-zfs”.
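
On a login node, one quick way to confirm this layout is to resolve each link. The minimal Python sketch below assumes the link names listed above; “work-zfs” will be absent unless the PI has requested that allocation.

    import os
    from pathlib import Path

    home = Path.home()
    for name in ("data", "scratch", "work", "work-zfs"):
        link = home / name
        if link.is_symlink():
            print(f"{name} -> {os.readlink(link)}")  # show the link's target
        else:
            print(f"{name}: not present")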

Note: These file systems are designed for “active data”, that is, data that is being analyzed. They are NOT archival systems. If PIs need to keep data at rest for long periods of time, MARCC recommends contacting the Library for options.