Blog Archives

OpenFOAM

OpenFOAM 4.1 is available on Koko. To load the software, please use the following steps: connect to Koko via SSH with X11 forwarding or with X2Go (details found here), open a terminal session, and load the module using the command …
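The excerpt is cut off at the module command; a minimal sketch of the sequence, assuming the module is published as openfoam/4.1 (the login hostname and exact module name are assumptions and may differ on Koko):

```shell
# Connect with X11 forwarding, then locate and load the OpenFOAM module
ssh -Y username@koko-login.hpc.fau.edu   # hostname is an assumption
module avail openfoam                    # list the OpenFOAM modules actually installed
module load openfoam/4.1                 # exact module name may differ
```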

Posted in How-To-Guides, HPC

How to run Fluent interactively

Before we begin, log into Koko via your favorite method, with either a GUI or X11 forwarding (please click here for details). Open a terminal session and load the ANSYS module with “module load ansys”. Check how many licenses are available with …
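The excerpt stops at the license check; a sketch of the likely sequence, assuming the FlexLM tools shipped with ANSYS are on the path and the license server variable is set by the module (both are assumptions):

```shell
module load ansys
# License availability is commonly queried with FlexLM's lmstat;
# the variable name and feature name here are assumptions.
lmutil lmstat -a -c "$ANSYSLMD_LICENSE_FILE" | grep -i fluent
fluent &   # start Fluent with its GUI over the forwarded X11 display
```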

Posted in How-To-Guides, HPC

How to allocate and run interactive and GUI based jobs

GUI Jobs: Users interested in executing a program on a node with X11 may use the “--x11” flag with srun. Example code: “module load slurm matlab” followed by “srun --x11 matlab”, or “module load slurm” followed by “srun --x11 xterm”. Not all jobs can …
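The commands from the excerpt, laid out one per line:

```shell
# Interactive MATLAB session on a compute node with X11 forwarding
module load slurm matlab
srun --x11 matlab

# Or request a plain xterm on a compute node
module load slurm
srun --x11 xterm
```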

Posted in How-To-Guides, HPC, Resources

Perfsonar

We are happy to report that we are working on deploying perfSONAR nodes to aid in the measurement of network metrics. The following test nodes are currently online and being developed: https://perfsonar-research.hpc.fau.edu (131.91.171.216), https://perfsonar-bhric.hpc.fau.edu (131.91.171.215), https://perfsonar-eng.hpc.fau.edu …

Posted in How-To-Guides, HPC

LAMMPS

LAMMPS may be run by loading the lmp_openmpi module and the openmpi/gcc module: “module load lmp_openmpi”, “module load openmpi/gcc”, then “lmp_openmpi”. Note: we have seen that this prints a warning message when starting. We are debugging, but are interested to …
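The load-and-run sequence from the excerpt, one command per line:

```shell
module load lmp_openmpi
module load openmpi/gcc
lmp_openmpi   # may print a warning at startup; see the note above
```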

Posted in How-To-Guides, HPC

FAU Slurm Queues

We provide several queues for submitting HPC jobs. All compute nodes have been updated to Scientific Linux 7, and the partition/queue names have been changed to reflect this. shortq7: minimum of 1 process, maximum of 30 nodes, maximum run time 2 hours …
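A sketch of submitting to the shortq7 partition described above (the script name is illustrative; the node and time limits come from the excerpt):

```shell
# Submit a one-task job to shortq7, staying within its 2-hour limit
sbatch --partition=shortq7 --ntasks=1 --time=02:00:00 myjob.sh
```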

Posted in HPC

Submitting Jobs

Upload your job to your KoKo home directory using Globus, FileZilla, or SCP. Create a script named {JOBNAME}.sh to start your job containing the following: #!/bin/sh #SBATCH --partition=shortq #SBATCH --ntasks=1 #SBATCH --mem-per-cpu=1024 #SBATCH --…
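A minimal complete script along the lines the excerpt begins (the excerpt itself is truncated at the last #SBATCH directive; the job name and the final program invocation below are illustrative assumptions):

```shell
#!/bin/sh
#SBATCH --partition=shortq
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=1024
# The directives below and the program run are assumptions for illustration:
#SBATCH --job-name=myjob
srun ./my_program
```

The script would then be submitted with “sbatch {JOBNAME}.sh”.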

Posted in How-To-Guides, HPC

Executing Java Jobs

Upload your job to your KoKo home directory using Globus, FileZilla, or SCP. Create a script named {JOBNAME}.sh to start your job containing the following: #!/bin/sh #SBATCH --partition=defq #SBATCH --ntasks=1 #SBATCH --mem-per-cpu=1024 #SBATCH --ntasks-per-node=1 # Load modules, if needed, run staging tasks, …
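A sketch of the Java job script the excerpt begins, with the directives laid out one per line (the module name and class name at the end are illustrative assumptions; the excerpt is truncated before them):

```shell
#!/bin/sh
#SBATCH --partition=defq
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=1024
#SBATCH --ntasks-per-node=1
# Load modules, if needed, run staging tasks, then start the JVM.
# The module name and class below are assumptions for illustration:
module load java
java -Xmx1g MyJavaProgram
```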

Posted in How-To-Guides, HPC

Transferring files

Uploading files to compute resources can be accomplished using command-line tools such as SCP, Secure File Transfer tools such as FileZilla, Web Transfer tools such as OwnCloud and Globus Online, or Windows File Explorer. Koko File Transfer in Windows: …
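As a command-line example of the SCP option mentioned above (the login hostname is an assumption):

```shell
# Copy a local archive into your Koko home directory over SCP
scp myjob.tar.gz username@koko-login.hpc.fau.edu:~/
```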

Posted in How-To-Guides, HPC