Ansys is installed on KoKo and may be accessed by loading the latest ansys module. The module detects group membership and points users at the appropriate license server.

At this time only College of Engineering and Computer Science students may run Ansys. Other users will need to install a license in their home directory or point Ansys at their own license server. Assistance with installing licenses is available.

module load ansys

Many Ansys tools, such as Fluent, require a GUI to run, so we recommend using X11 forwarding when accessing them. Users should review the Ansys manual for details.

Running Ansys Workbench requires an X11 session. We recommend users take advantage of X2Go, since plain X11 forwarding tends to be slower and less reliable. Please note that you must be a member of the College of Engineering group to use Ansys applications; please request to be added to the group via the Help Desk.

  • Once connected via X2Go, start a new X terminal and execute the following:
    - module load ansys
    - run the command "ansyscheck" to verify the number of available Ansys licenses
    - srun -n <number> -p <queue> --x11 --pty bash

    • -n represents the number of CPUs needed. Please see the SLURM queues page for the maximum number of CPUs available per queue.
    • -p represents the queue needed for the job. Please see the SLURM queues page for more information.
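Put together, a typical interactive session might look like the following sketch. The CPU count, the queue name shortq, and the trailing bash shell are illustrative assumptions, not values from this guide; these commands only work on the cluster itself:

```shell
# Illustrative interactive session (assumes KoKo's module system,
# the ansyscheck helper, and Slurm's X11 plugin are available).
module load ansys                      # make Ansys tools available
ansyscheck                             # check free Ansys licenses
srun -n 4 -p shortq --x11 --pty bash   # 4 CPUs, interactive shell with X11
# ...then launch the desired Ansys tool from the shell that srun opens
```
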

How to Run Ansys Fluent Jobs in Slurm

Fluent Usage

Fluent jobs can be:

  • serial (single task)
  • parallel (several tasks across one or more nodes)
Journal file

You should first prepare your journal file. Here is a simple example:
; Fluent Example Input File
; Read case file
/file/read-case LIRJ.cas
; Run 3000 solver iterations
/solve/iterate 3000
; Save Case & Data files
/file/write-data LIRJ.dat
/exit yes

A relevant section on journal files can be found in the CFD Online FAQ for Fluent.
Serial Job

An example submission file for a serial job could be as follows:
#!/bin/bash
#SBATCH -n 1 # only allocate 1 task
#SBATCH -t 08:00:00 # upper limit of 8 hours to complete the job
#SBATCH -A <accountName> # your project name - contact Ops if unsure what this is
#SBATCH -J fluent1 # sensible name for the job
#SBATCH -p longq # selected queue; if you need more than 2 hours, use "longq"
module add ansys

export FLUENT_GUI=off

time fluent 2ddp -g -i <journalFile> > fluent1.out 2> fluent1.err

The example above will run one task, with standard output going to the fluent1.out file, and error output going to the fluent1.err file.
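The "> file 2> file" redirection used above can be sketched with a stand-in command. Here fluent is replaced by echo so the snippet runs anywhere, and demo.out/demo.err are placeholder filenames:

```shell
# Stand-in for "fluent ... > fluent1.out 2> fluent1.err":
# stdout and stderr are captured in separate files.
{ echo "normal output"; echo "error output" >&2; } > demo.out 2> demo.err

cat demo.out   # contains: normal output
cat demo.err   # contains: error output
```
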
Parallel Job
To run several tasks in parallel on one or more nodes, the submission file could be as follows (adjust the node count -N and task count -n to match the number of cores per node on your queue):
#!/bin/bash
#SBATCH -N 5 # allocate 5 nodes for the job
#SBATCH -n 96 # 96 tasks total
#SBATCH --exclusive # no other jobs on the nodes while job is running
#SBATCH -t 04:00:00 # upper time limit of 4 hours for the job
#SBATCH -A <accountName> # the account this job will be submitted under
#SBATCH -J fluentP1 # sensible name for the job
#SBATCH -p longq # selected queue; if you need more than 2 hours, use "longq"
module add ansys

export FLUENT_GUI=off

if [ -z "$SLURM_NPROCS" ]; then
  N=$(( $(echo $SLURM_TASKS_PER_NODE | sed -r 's/([0-9]+)\(x([0-9]+)\)/\1 * \2/') ))
else
  N=$SLURM_NPROCS
fi

echo -e "N: $N\n"

# run fluent in batch on the allocated node(s)
time fluent 2ddp -g -slurm -t$N -mpi=pcmpi -pib -i <journalFile> > fluentP1.out 2> fluentP1.err
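The sed line in the script converts Slurm's compact "tasks(xnodes)" notation into an arithmetic expression. A quick check with a made-up sample value (48(x2) is not from this guide) shows what it does:

```shell
# "48(x2)" is an invented sample value for SLURM_TASKS_PER_NODE,
# meaning 48 tasks on each of 2 nodes.
SLURM_TASKS_PER_NODE="48(x2)"

# Rewrite "48(x2)" as the arithmetic expression "48 * 2".
EXPR=$(echo "$SLURM_TASKS_PER_NODE" | sed -r 's/([0-9]+)\(x([0-9]+)\)/\1 * \2/')
echo "$EXPR"    # 48 * 2

# Evaluate it to get the total task count.
N=$(( $EXPR ))
echo "$N"       # 96
```
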
Sourced and Adapted from: [10012014]

Posted in How-To-Guides