Instructions for getting started with Gaussian on FAU’s High-Performance Computing Cluster
- Request an account to access the HPC facility.
– Go to https://helpdesk.fau.edu/TDClient/Requests/ServiceCatalog?CategoryID=1480
– Request HPC user account.
- Download software to enable remote access to KoKo from the desktop.
– E.g., Download X2Go from http://wiki.x2go.org/doku.php
– Then follow these instructions: https://hpc.fau.edu/x2go/
- Download file transfer software to your desktop (optional).
– There are several options: e.g., download FileZilla from https://filezilla-project.org/download.php?type=client
– Follow these instructions: https://hpc.fau.edu/resources-2/transferring-files/.
– In FileZilla, specify the host as sftp://koko.hpc.fau.edu.
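If you prefer the command line to FileZilla, the same SFTP endpoint works with scp. The sketch below only assembles and prints the command; the user name and destination path are placeholder assumptions, not a real account.

```shell
# Build the scp command for copying a Gaussian input file to KoKo.
# 'username' and the home-directory destination are placeholder assumptions.
host="koko.hpc.fau.edu"
transfer_cmd="scp filename.com username@${host}:~/"
# Print the command; run it from your desktop terminal to do the transfer.
echo "$transfer_cmd"
```

scp speaks the same SFTP protocol as FileZilla's sftp:// host setting, so no additional server-side setup is needed.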
- Log into KoKo.
- From the Linux desktop open a terminal session.
- Change the shell to csh.
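Typing the shell's name switches the current session. The snippet below assumes csh is installed (it is the interpreter named in the batch script later in these instructions) and guards the call so it also runs cleanly on machines without csh.

```shell
# Switch to the C shell by typing its name; 'exit' returns to the previous shell.
# Guarded so the snippet is safe to run on systems without csh installed.
if command -v csh >/dev/null 2>&1; then
    csh -c 'echo "C shell at: $shell"'   # plain 'csh' starts an interactive session
    status="csh available"
else
    status="csh not found here (assumed to be installed on KoKo)"
fi
echo "$status"
```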
- Launch GaussView from the terminal.
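On many clusters, GaussView starts with the gv command once the Gaussian environment module is loaded; both the module name gaussian and the gv launcher are assumptions here — confirm them with module avail on KoKo. The snippet prints the commands rather than running them, since GaussView needs the graphical X2Go session.

```shell
# Commands typically used to start GaussView on a cluster login node.
# 'module load gaussian' and 'gv' are assumptions; verify with 'module avail'.
launch_cmd="module load gaussian && gv"
echo "$launch_cmd"   # run this inside your X2Go session on KoKo
```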
- Create and save the desired Gaussian input file (e.g., filename.com).
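As an illustrative sketch (not a prescribed recipe), a minimal filename.com for a water geometry optimization could look like the following; the method, basis set, memory, and coordinates are example values, and %nprocshared=20 matches the -n 20 setting used in the batch script example below. Note that Gaussian requires the blank line at the end of the file.

```
%nprocshared=20
%mem=16GB
%chk=filename.chk
#p opt b3lyp/6-31g(d)

water geometry optimization (title line)

0 1
O    0.000000    0.000000    0.117300
H    0.000000    0.757200   -0.469200
H    0.000000   -0.757200   -0.469200

```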
- Gaussian calculations should NOT be run interactively by submitting the job through GaussView, since this takes up resources on the host (login) node. Instead, submit your job through the SLURM job scheduler to one of the worker nodes.
– The following is a simple example of a batch script file. You may use any text editor to create it (e.g., filename.sh):
#!/bin/csh                          #(this line is necessary)
#SBATCH -N 1                        #(set the number of nodes to be employed, in this case 1 – see note below**)
#SBATCH -n 20                       #(specify number of CPUs to be employed – should equal the number in the Gaussian input file: %nprocshared=20)
#SBATCH -A username                 #(specify account)
#SBATCH -p shortq                   #(specify queue: shortq or longq)
#SBATCH -J filename                 #(specify job name)
#SBATCH -o filename.out             #(SLURM output filename)
#SBATCH --error=filename.err        #(SLURM error filename)
module load gaussian                #(example module name – confirm with 'module avail' on KoKo)
g09 < filename.com > filename.log   #(run Gaussian on your input; the executable may be g09 or g16 depending on the installed version)
- Execute your batch file by submitting it to SLURM from the terminal.
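On SLURM systems the submission command is sbatch. The snippet prints the command, assuming the script was saved as filename.sh as in the example above.

```shell
# Submit the batch script to SLURM. 'filename.sh' is the example script name;
# sbatch replies with the assigned job ID ("Submitted batch job NNNNN").
submit_cmd="sbatch filename.sh"
echo "$submit_cmd"   # run this on the KoKo login node
```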
- Some useful commands
– sinfo, squeue, and sacct monitor resource availability and job status; scancel terminates a job.
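A quick reference for those commands; the user name and job ID below are placeholders, so substitute your own values on KoKo.

```shell
# Cheat sheet for the SLURM monitoring commands named above.
# 'username' and job ID 12345 are placeholders, not real values.
monitor_help=$(cat <<'EOF'
sinfo                  # partition and node availability
squeue -u username     # your pending and running jobs
sacct -j 12345         # accounting record for job 12345
scancel 12345          # terminate job 12345
EOF
)
echo "$monitor_help"
```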
- **Linda is a Gaussian utility that enables parallel computing across multiple nodes. It is especially useful for very large computations that require substantial computing resources. Most calculations should NOT require Linda; in most cases, full use of a single node provides sufficient computational power. If you feel that multiple nodes (and therefore Linda) are warranted for your job, please contact Eric Borenstein (High Performance Computing) and Dr. Andrew Terentis (Department of Chemistry) for prior approval and special instructions on how to employ Linda.
- Each worker node is equipped with 2 x 10-core processors. Gaussian cannot utilize hyper-threading, so the maximum number of processes per node is 20 (i.e., %nprocshared=20 is the maximum for a single-node job).
- 128 GB RAM and 1 GB SATA scratch (disk space) are available per node.
- Maximum run times for jobs in shortq and longq are 2 hours and 7 days, respectively.
- Bear in mind that assigning more CPUs and more RAM to a job will NOT necessarily lead to shorter computing times. Typically, there is an optimum set of parameters that must be found through some trial and error. Optimum hardware parameters will depend on the size and type of calculation to be performed.
*Written by Andrew Terentis