Blog Archives

Python Virtual Environments

Instructions for creating Python virtual environments for versions 3.6 and 2.7. Load the Python environment:
– For Python 2.7: $ module load anaconda2/4.4.0
– For Python 3.6:
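As a locally runnable sketch of the virtual-environment step (the directory name below is an example, not from the post), Python 3's built-in venv module can be used once the appropriate module is loaded on the cluster:

```shell
# On Koko you would first load a Python module, e.g.:
#   module load anaconda2/4.4.0      (Python 2.7, per the post)
# Create and activate a virtual environment; the path here is an example.
VENV_DIR="$(mktemp -d)/myenv"
python3 -m venv "$VENV_DIR"
. "$VENV_DIR/bin/activate"
python --version          # now resolves to the venv's interpreter
deactivate
```

Once activated, packages installed with pip go into the environment rather than the system Python.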

Posted in How-To-Guides, HPC

OpenFOAM Tutorials

To use the built-in tutorials for OpenFOAM:
1. Load the OpenFOAM/4.1 module with the command “module load OpenFOAM/4.1”
2. Create a folder named “OpenFOAM” in your HOME directory using the command “mkdir $HOME/OpenFOAM”
3. Create the environment variable $FOAM_RUN with the command
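The steps above can be sketched as follows. The `run` subdirectory used for $FOAM_RUN is an assumption (the post is cut off before defining it), and the `module load` line only works on the cluster, so it is shown as a comment here:

```shell
# On Koko, first load the module (cluster-only, as the post describes):
# module load OpenFOAM/4.1

# Create the OpenFOAM folder in your HOME directory:
mkdir -p "$HOME/OpenFOAM"

# Hypothetical $FOAM_RUN location; the post truncates before giving the real path:
export FOAM_RUN="$HOME/OpenFOAM/run"
mkdir -p "$FOAM_RUN"
```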

Posted in How-To-Guides, HPC


OpenFOAM 4.1 is available on Koko. To load the software, please use the following steps:
1. Connect to Koko via SSH with X11 forwarding or with X2go. Details found here.
2. Open a terminal session and load the module using the command

Posted in How-To-Guides, HPC

How to run Fluent interactively

Before we begin, log in to Koko via your favorite method with either a GUI or X11 forwarding; please click here for details.
1. Open a terminal session.
2. Load the ANSYS module with “module load ansys”.
3. Check how many licenses are available with
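Putting the steps together, an interactive Fluent session might be requested like this. These are cluster-only commands, and the srun flags are typical examples rather than values confirmed by the post:

```shell
# Load the ANSYS module, as the post describes:
module load ansys
# Request an interactive node with X11 forwarding and start Fluent.
# The flags below are illustrative; adjust task counts to your job.
srun --x11 --ntasks=1 --pty fluent
```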

Posted in How-To-Guides, HPC

How to allocate and run interactive and GUI based jobs

GUI Jobs
Users interested in executing a program on a node with X11 may use the “--x11” flag with srun. Example code:

module load slurm matlab
srun --x11 matlab

module load slurm
srun --x11 xterm

Not all jobs can

Posted in How-To-Guides, HPC, Resources


We are happy to report that we are working on deploying Perfsonar nodes to aid in the measurement of network metrics. The following test nodes are currently online and being developed.

Posted in How-To-Guides, HPC


LAMMPS may be run by loading the lmp_openmpi module and the openmpi/gcc module:

module load openmpi/gcc/64/1.10.1 gcc/5.2.0 lammps/30Mar18
# change 40 to the number of threads. We recommend multiples of 20
salloc -n 40
# where is your input script,
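A complete batch-script version of the above might look like the sketch below. The executable name lmp_openmpi follows the module name mentioned in the post, and in.lj is a placeholder input file, not something the post specifies:

```shell
#!/bin/sh
#SBATCH --ntasks=40      # multiples of 20 recommended, per the post
module load openmpi/gcc/64/1.10.1 gcc/5.2.0 lammps/30Mar18
# lmp_openmpi and in.lj are assumptions; substitute your own input script.
srun lmp_openmpi -in in.lj
```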

Posted in How-To-Guides, HPC

FAU Slurm Queues

We provide several queues for submitting HPC jobs. All compute nodes have been updated to Scientific Linux 7, and the partition/queue names have been changed to reflect this.

shortq7
– Minimum of 1 process
– Maximum of 30 nodes
– Maximum run time: 2 hours
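For example, a job targeting shortq7 within its stated limits could be submitted like this (the script name and resource numbers are illustrative, not from the post):

```shell
# Request the shortq7 partition; myjob.sh is a placeholder script name.
# --time must stay within the partition's 2-hour maximum.
sbatch --partition=shortq7 --ntasks=1 --time=02:00:00 myjob.sh
```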

Posted in HPC

Submitting Jobs on Koko using Slurm (salloc, srun, sbatch, sinfo, squeue)

1. Upload your code and data to your KoKo home directory using Globus, FileZilla, or SCP.
2. Choose the type of job you would like to run (salloc, srun, or sbatch): salloc requests and holds an allocation on the cluster so you can
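The three submission modes can be sketched as follows. These are cluster-only commands, and the flags and script name are illustrative:

```shell
# salloc: request and hold an allocation, then run commands inside it
salloc --ntasks=1

# srun: run a single command directly on compute nodes
srun --ntasks=1 hostname

# sbatch: submit a batch script for unattended execution
sbatch myjob.sh     # myjob.sh is a placeholder script name
```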

Posted in How-To-Guides, HPC

Executing Java Jobs

Upload your job to your KoKo home directory using Globus, FileZilla, or SCP. Create a script named {JOBNAME}.sh to start your job containing the following:

#!/bin/sh
#SBATCH --partition=defq
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=1024
#SBATCH --ntasks-per-node=1
# Load modules, if needed, run staging tasks,
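A hedged sketch of how such a script might continue — the Java module name and jar file below are assumptions, since the post is truncated before that point:

```shell
#!/bin/sh
#SBATCH --partition=defq
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=1024
#SBATCH --ntasks-per-node=1
# Load modules, if needed, then start the job.
module load java          # module name is an assumption; check "module avail"
java -jar MyJob.jar       # MyJob.jar is a placeholder for your application
```

Submit the script with sbatch, e.g. “sbatch {JOBNAME}.sh”.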

Posted in How-To-Guides, HPC