Frequently Asked Questions (FAQs)
Lustre: 1 TByte per group (/scratch/users/userid and /scratch/groups/groupid)
ssh login.marcc.jhu.edu -l userid
A two-factor authentication code and a robust password are needed.
“This research project (or part of this research project) was conducted using computational resources (and/or scientific computing services) at the Maryland Advanced Research Computing Center (MARCC).”
Please feel free to edit it and/or include more details.
The Deans of the schools will make decisions on how to allocate resources.
sbalance. This command gives information about utilization by all members of the group.
Use “interact -usage” for options.
Users who need to run jobs immediately may be able to find out if resources are available. Use the command:
A/I/O/T = Allocated/Idle/Other/Total
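One way to see these counts is SLURM's sinfo utility (an assumption about which command this FAQ intends; sinfo's %C format specifier is how SLURM prints CPU counts in A/I/O/T form). The sketch below shows the command and how one output line could be parsed; the partition name and numbers are illustrative:

```shell
# On the cluster (assumes SLURM's sinfo is in your PATH):
#   sinfo -o "%P %C"
# prints one "CPUS(A/I/O/T)" column per partition. A sample line can be
# parsed to pull out, e.g., the idle count:
sample="parallel 480/1120/32/1632"          # hypothetical sinfo output line
idle=$(echo "$sample" | awk '{print $2}' | cut -d/ -f2)
echo "idle CPUs: $idle"                     # prints "idle CPUs: 1120"
```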
The parallel and gpuk80 partitions (queues) have three different types of Intel processors. The original Haswell nodes have 120GB of RAM and 24 cores. The Broadwell nodes have 120GB of RAM and 28 cores. The newest nodes have Skylake processors with 90GB of RAM and 24 cores. The SLURM batch utility is configured so that parallel jobs stay within a single architecture. This is done using the --constraint keyword. The default is to use an exclusive span of Haswell, Broadwell, or Skylake processors. If you want to use the Skylake processors, add this keyword to your script:
#SBATCH -C skylake
Also make sure the total memory requested is no higher than 90GB when using the Skylake nodes. The keywords are haswell, broadwell, and skylake.
The executable will run on Haswell, Broadwell, Ivy-bridge, and Skylake processors. Note that the performance may not be the same as compiling for a particular processor.
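Putting the points above together, a minimal job script for the Skylake nodes might look like the sketch below. Only the -C constraint line and the 90GB memory limit come from this FAQ; the partition name, task count, and executable name are illustrative assumptions:

```shell
#!/bin/bash
#SBATCH --partition=parallel      # queue discussed above (assumed name)
#SBATCH -C skylake                # pin the job to Skylake nodes
#SBATCH --ntasks=24               # Skylake nodes have 24 cores
#SBATCH --mem=90G                 # do not exceed the 90GB Skylake limit

srun ./my_program                 # hypothetical executable
```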
DTNs are a set of dedicated nodes for file transfer. These servers are GlobusConnect end points and should be used to transfer large amounts of data.
ssh dtn2.marcc.jhu.edu (with your username and password)
scp largefile.ext userid@your-destination
Note that the speed is limited by the connectivity at your destination.
- From your machine to MARCC:
scp largefile.ext email@example.com:~/
- Use the Globus connect end point
- Request a GlobusConnect account
- Login into your globus connect account
- Select the end points (MARCC)
- Authenticate to your end points
- Select the file(s) to transfer
- Start the file transfer
- If you need to transfer many (thousands of) small files:
- Compress the files into a single tar file of at least 100GB in size. This gives better performance and will not ‘break’ the data transfer node. For example, “tar -zcvf junk.tgz JUNK” compresses all the files in directory JUNK into the compressed file junk.tgz
- Follow the same process as above
- Please note that if you have terabytes of data to move, the DTN will give better performance if you split them into several chunks instead of one big file
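The chunking advice above can be sketched with tar and split. The demo below uses a tiny file and 1KB chunks so it runs anywhere; for real transfers you would use a chunk size like 100G. File names are illustrative:

```shell
# Create a small sample file standing in for a large tar archive
head -c 5000 /dev/urandom > data.tgz
# Split it into fixed-size chunks (use e.g. "-b 100G" for real data)
split -b 1k data.tgz data.tgz.part-
# At the destination, reassemble (glob order matches split order) and verify
cat data.tgz.part-* > rejoined.tgz
cmp -s data.tgz rejoined.tgz && echo "chunks reassemble cleanly"
```

Each chunk can then be transferred through the DTN as a separate file.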
ssh dtn2.marcc.jhu.edu (with your username, two-factor code, and password) [make sure you connect to dtn2.marcc.jhu.edu]
- module load aspera
ascp-marcc is an alias for:
ascp -T -l8G -i /software/apps/aspera/126.96.36.1994/etc/asperaweb_id_dsa.openssh
Here -T means do not encrypt and -l8G caps the bandwidth at 8000MB. You can change these parameters, but use the ascp command.
- To download a file from ncbi:
ascp-marcc firstname.lastname@example.org:gene/DATA /scratch/users/userid
- Download FileZilla (web search)
- Install FileZilla
- Launch FileZilla. Your local machine’s files and folders should be visible on the left side
- Click on the top left “icon” or click File-> Site Manager. A new window pops up
- Click on New site and name it “MARCC”
- Click on “General”
- Host: login.marcc.jhu.edu Port 22 (Type)
- Protocol: SFTP – SSH File Transfer Protocol (select)
- Logon Type: Interactive (select)
- User: Your MARCC userid (email@example.com) (Type)
- Password: Leave blank (recommended)
- Click on “Transfer Settings”
- Select “Limit number of simultaneous connections” and set it to “1”
- Click on “Connect”
- You should be connected. MARCC files and folders should be visible on the right side
- Click and Drag files/folders
That is it.
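If you prefer a terminal to FileZilla, the same SFTP transfer can be done with the stock sftp client. This is an alternative not covered by the steps above, and the remote path is illustrative:

```shell
# Interactive session (prompts for your password and two-factor code):
#   sftp userid@login.marcc.jhu.edu
# Or non-interactively, with a batch file of sftp commands:
cat > fetch.batch <<'EOF'
get /scratch/users/userid/largefile.ext
EOF
# sftp -b fetch.batch userid@login.marcc.jhu.edu   # run from your machine
```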