Batch Jobs

Batch Jobs: Long-Running Computations

If a program has to run for a few hours or more, it should be prepared as a batch job and submitted to a cluster queue.  This is the only feasible and efficient way for a relatively large number of users on campus to share a large computing resource like the HPC cluster.

Here is the gist of it:

The user prepares the long-running program (say, a script written in R, Mplus, Stata, or SAS) and a "submission script".  The submission script is the program we use to ask the cluster scheduler to find available compute nodes and send the job to those nodes.  The simplest chore is launching a single long-running job.  Some jobs, however, are more interesting because they are parallel, meaning they divide their work among several compute nodes and collect the results when they finish.  Parallel computing is the essence of high performance computing; we use it to run computer simulations and other massively parallel computations.

The big jobs we have been running fall into two groups.

  1. Many separate component jobs can be dispatched across many compute nodes by separate scripts. A simulation exercise may require thousands of repetitions, but each repetition is independent of the others. We may write a shell script that generates hundreds or thousands of separate programs and submission scripts (see the sketch just after this list). A job that can be split into many completely independent parts is said to be embarrassingly parallel. It's embarrassing because it is so easy.
  2. A job is truly parallel (that is, not embarrassing) if a main program farms out computations to several "threads." It assigns separate calculations to many compute nodes or cores, and those threads need to communicate with each other in some way. This kind of program is more difficult to prepare, because one has to make sure the different nodes know what they ought to do, but it is also the most rewarding kind. If a master program initiates all of the separate pieces, the results may be more believable to some computer scientists.
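
For the embarrassingly parallel case, a short shell script can stamp out one submission script per repetition and submit each one with msub. Here is a minimal sketch, not a tested production script: the file names sim-1.R through sim-100.R and the generated script names sub-1.sh through sub-100.sh are hypothetical, and the #MSUB settings simply mirror the single-job example shown later on this page.

#!/bin/sh
# Sketch: generate and submit 100 independent one-core jobs.
# Assumes hypothetical R scripts sim-1.R ... sim-100.R already exist here.
for i in $(seq 1 100); do
    # Write one submission script per repetition; the backslash keeps
    # $PBS_O_WORKDIR from being expanded while the file is generated.
    cat > sub-$i.sh <<EOF
#!/bin/sh
#MSUB -N sim-$i
#MSUB -q sixhour
#MSUB -l nodes=1:ppn=1:ib
#MSUB -l walltime=01:00:00
cd \$PBS_O_WORKDIR
R --vanilla -f sim-$i.R
EOF
    msub sub-$i.sh
done

Each generated file is an ordinary one-job submission script of the kind described in the next section.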

Two Vital Elements

  1. A submission script
  2. A program to be submitted by the submission script.

Here is an example submission script. This one submits a single long-running R program.

Lines beginning with "#MSUB" are directives that the scheduler reads. The -N argument sets the job's name, "RParallelHelloWorld", which is how we can spot the job while it runs.  This is a one-core job that uses only one processor, so we request exactly that amount (nodes=1:ppn=1).  The -M argument is your email address, and -m "bea" asks the scheduler to email you when the job (b) begins, (e) ends successfully, or (a) aborts because of a failure. The -q argument specifies the queue to which the job is submitted. Specifying sixhour allows a small job that finishes in six hours or less to be dispatched outside of the CRMDA nodes, while supplying crmda restricts the job to CRMDA nodes. If you are unsure which queues you have permission to submit to, run "$ mystats" to see the queues you can use and which one is the default.

 

#!/bin/sh
#
#MSUB -M your-name_here@ku.edu
#MSUB -N RParallelHelloWorld
#MSUB -q sixhour
#MSUB -l nodes=1:ppn=1:ib
#MSUB -l walltime=00:05:00
#MSUB -m bea

cd $PBS_O_WORKDIR
mpiexec -n 1 R --vanilla -f parallel-hello.R

As one can see, this script has a "boilerplate-ish" feeling; about the only thing the user needs to worry about is the walltime allowed.  If we choose a number that is too small, the scheduler will cancel the job before it is done.  If we ask for a lot of time, the scheduler may make us wait until the cluster is not full of other jobs.

There is a separate file, "parallel-hello.R", in the same directory as the submission script.

By default, each node provides 2gb of memory.  For jobs that demand more, the user can specify the total memory requirement for the job.  In the example below, the line "#MSUB -l nodes=11:ppn=1:ib,mem=44gb" sets the total working memory for the job at 44gb, twice the 22gb that eleven nodes would receive by default. Specifying more memory than your job requires is a waste of resources, and it can cause your job to wait longer in the queue.
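
Here is a minimal sketch of the resource-request lines for such a job. Only the nodes/memory line is the one discussed above; the job name, queue, and walltime are placeholders you would adjust for your own work.

#MSUB -N bigMemoryJob
#MSUB -q crmda
#MSUB -l nodes=11:ppn=1:ib,mem=44gb
#MSUB -l walltime=24:00:00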

msub: Submit A Job

To submit the batch job, run this command:

$ msub sub-serial.sh

The submission number for your job will be displayed in the console:

7499366

The job runs in the "background": while it runs, we can log off of the HPC cluster entirely and it will keep going.

When the job finishes, it creates two files:

1. Output file: RParallelHelloWorld.o7499366
2. Error file: RParallelHelloWorld.e7499366

If everything went well, the error file might be empty, or it might have a harmless comment or warning. Of course, as is usually the case with R, we might have asked the program to create some graphics or data files, and they should be available as well.
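
Both files are plain text, so they can be inspected from the login node with the usual commands; the number in the file names is whatever job number msub reported at submission time.

$ cat RParallelHelloWorld.o7499366
$ cat RParallelHelloWorld.e7499366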

showq, canceljob: Check and Delete Batch Jobs

Did the job run yet?  Is somebody else running too many jobs and clogging up the queue?

Check cluster status with showq

To check the status of the job, we run the command "showq" (similar to the old "qstat" command). This produces three tables; the first is a list of the active jobs. For example:

$ showq
Job ID   USERNAME  STATE    PROCS  REMAINING    STARTTIME
7499370  pauljohn  Running  1      60:00:00:00  Fri Sep 8 07:00:00

See https://researchcomputing.ku.edu/hpc/how-to#Commands for more information on the showq command and its arguments.
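
One argument that comes in handy is -u, which restricts the listing to a single user's jobs. For example, using the user name from the listing above:

$ showq -u pauljohn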

Remove requests with canceljob

If you decide you need to kill a job, run "canceljob" with the job number. 

$ canceljob 7499370

job '7499370' cancelled

To delete several jobs, you can use just one command, such as:

$ canceljob 710 711 712

job '710' cancelled
job '711' cancelled
job '712' cancelled

That can become tedious if you need to remove hundreds of jobs you piled onto the queue by mistake.

We asked whether there is a way to speed up the removal of a lot of jobs, and the ITTC support staff offered a helpful answer:

for i in $(seq 1 1000); do canceljob $i; done

That cancels the jobs numbered 1 to 1000.
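
Real job numbers on the cluster are large, so in practice you would substitute the actual range of IDs you want to remove. The numbers below are hypothetical placeholders:

for i in $(seq 7499400 7499500); do canceljob $i; done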


Other Ways to Check the Status of the Cluster

Viewpoint: A pleasant Web view of the situation

There is an overview of the cluster that can be accessed here:

https://view.crc.ku.edu

Instructions for using Viewpoint's features can be found on the KU HPC webpage:

https://researchcomputing.ku.edu/hpc/how-to#Viewpoint

