title: Exercise 05 - Batch computing - running a script on LOTUS
author: Ag Stephens

Exercise 05: Batch computing - running a script on LOTUS

Scenario

Having established (in exercise 04) that I can extract the total cloud cover (TCC) variable from a single ERA-Interim file, I now wish to extract that data for an entire month. I will write some simple scripts to batch up separate processes that run CDO to extract the TCC variable from a series of ERA-Interim files. Each run of the script will loop through the 4 x 6-hourly files for one day. I will run it 30 times, once for each day in September 2018. Each run will be submitted as a job to the LOTUS cluster.

Objectives

After completing this exercise I will be able to:

  • write scripts to batch up tasks
  • submit scripts to the LOTUS cluster

JASMIN resources

  • JASMIN account with SSH public key uploaded and jasmin-login privilege
  • login servers: login2.jasmin.ac.uk
  • sci servers: sci[1-8].jasmin.ac.uk
  • LOTUS batch processing cluster
  • common software: CDO (Climate Data Operators) tool
  • GWS (read/write): /gws/pw/j07/workshop
  • CEDA Archive (read-only): requires a CEDA account
  • help documentation at https://help.jasmin.ac.uk

Local resources

  • SSH client (to login to JASMIN)

Videos

You can follow this exercise by watching the videos below, by following the text of this article, or by a combination of both.

  • Task
  • Solutions & Discussion

Your task

This is the outline of what you need to do. The recommended way of doing each step is covered in the "Cheat Sheet" but you may wish to try solving it for yourself first.

  1. Your starting point is on a JASMIN login server (see exercise 01)
  2. SSH to a scientific analysis server
  3. Write an "extract-era-data.sh" wrapper script that calls the CDO extraction command
  4. Write a script, called "submit-all.sh", to loop over dates from 01/09/2018 to 02/09/2018 and submit the "extract-era-data.sh" script to LOTUS for each day
  5. Run the "submit-all.sh" script
  6. Examine which jobs are in the queue
  7. Examine the standard output and standard error files
  8. Modify "submit-all.sh" so that it will run for all 30 days in September 2018
  9. Re-run the "submit-all.sh" script
  10. Examine which jobs are in the queue
  11. Kill one of the jobs - just to see how it is done

Questions to test yourself

All too easy? Here are some questions to test your knowledge and understanding. You might find the answers by exploring the JASMIN Documentation.

  1. You have learnt about some basic commands for interacting with the SLURM scheduler (such as sbatch and squeue), which manages the submission and execution of jobs via the LOTUS queues. Which other commands might be useful when interacting with the scheduler?
  2. Which queues are available on LOTUS? What is the difference between them? Why would you choose one over another?
  3. How can you instruct SLURM to allocate CPUs and memory to specific jobs when you run them? Can you change the allocations when the job is queuing?
  4. How can you cancel all your jobs in the SLURM queue?

Review / alternative approaches / best practice

This exercise demonstrates how to:

  1. Create a script that takes an argument to process a single component (day) of an overall task.
  2. Create a wrapper script that loops through all the components that need to be processed.
  3. Submit each component as a LOTUS job using the sbatch command.
  4. Define the command-line arguments for the sbatch command.
  5. Use other SLURM commands, such as squeue (to monitor progress) and scancel (to cancel jobs).

Alternative approaches could include:

  1. Write the output to a scratch directory

    1. There are two main scenarios in which you might write the output to a scratch directory:
      1. You only need to store the output file for temporary use (such as intermediate files in your workflow).
      2. You want to write outputs to scratch before moving them to a GWS.
    2. The Help page (https://help.jasmin.ac.uk/article/176-storage#diskmount) tells us that there are two types of scratch space:
      1. /work/scratch-pw2 – supports parallel writes
      2. /work/scratch-nopw2 – does NOT support parallel writes
    3. Since we do not need parallel write capability, we can use the "nopw" version.
    4. You need to set up a directory under "/work/scratch-nopw2" named after your username:
     MYSCRATCH=/work/scratch-nopw2/$USER
     mkdir -p $MYSCRATCH

    5. Then you would write output files/directories under your scratch space, e.g.:
     OUTPUT_FILE=$MYSCRATCH/output.nc
     ...some_process... > $OUTPUT_FILE

    6. When you have finished with the file, tidy up (good practice):
     rm $OUTPUT_FILE

    7. Do not leave data on the "scratch" areas when you have finished your workflow:
      1. Please remove any temporary files/directories that you have created.
      2. You cannot rely on the data persisting in the "scratch" areas.
  2. Specify the memory requirements of your job:

    1. If your job has a significant memory footprint:
      1. Run a single iteration on LOTUS and review the standard output file to examine the memory usage.
      2. You can then reserve a memory allocation when you submit your subsequent jobs.
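    2. For example, a minimal sketch of reserving memory at submission time (the 4 GiB value is an illustrative assumption - base yours on the usage you observed):

     # Reserve 4 GiB of memory for the job; other options as used elsewhere in this exercise
     sbatch --mem=4G -p short-serial -t 00:05:00 extract-era-data.sh 20180901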

This demonstrates best practice:

  1. Build up in stages before running your full workflow on LOTUS:

    1. Check your code - is it really doing what you think it is doing?
    2. Run locally (on a sci server) for one iteration.
    3. Run for one or two iterations on LOTUS.
    4. Check everything ran correctly on LOTUS.
    5. Submit your full batch of jobs to LOTUS.
  2. Have any files been accidentally left on the system? (E.g. in /tmp/):

    1. It is important to clean up any temporary files that you no longer need.
    2. Please check whether the tools you use have left any files in "/tmp/".

Cheat Sheet

  1. Your starting point is on a JASMIN login server (see exercise 01)

  2. SSH to a scientific analysis server

     ssh sci5 # Could use any of sci[1-8]
    
  3. Write an "extract-era-data.sh" wrapper script that calls the CDO extraction command and that:

    1. Takes a date string ("YYYYMMDD") as a command-line argument

    2. Locates the 4 x 6-hourly input file paths for the date provided

    3. Activates an environment containing the CDO tool

    4. For each 6-hourly file:

      1. Defines the output file path
      2. Runs the CDO tool to extract the "TCC" variable from the input file to the output file
    5. If you are stuck, you can use the script located at:

      /gws/pw/j07/workshop/exercises/ex05/code/extract-era-data.sh

      [ Source: https://github.com/cedadev/jasmin-workshop/blob/master/exercises/ex05/code/extract-era-data.sh ]
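
    6. For reference, a minimal sketch of such a script is shown below. The input path pattern, the output directory and the "module load jaspy" line are assumptions made for illustration - check the reference script above for the exact details:

      #!/bin/bash
      # Usage: ./extract-era-data.sh YYYYMMDD
      DATE=$1

      # Activate an environment that provides the CDO tool
      # (assumption: the jaspy module supplies CDO on JASMIN)
      module load jaspy

      # Output directory in the workshop GWS (path chosen for illustration)
      OUTPUT_DIR=/gws/pw/j07/workshop/users/$USER/ex05
      mkdir -p $OUTPUT_DIR

      # Loop over the 4 x 6-hourly files for the given day
      for HOUR in 00 06 12 18; do
          # Input path pattern is an assumption - adapt it to the real archive layout
          INPUT_FILE=/badc/ecmwf-era-interim/data/gg/as/${DATE:0:4}/${DATE:4:2}/${DATE:6:2}/ggas${DATE}${HOUR}00.nc
          OUTPUT_FILE=$OUTPUT_DIR/ggas${DATE}${HOUR}00-tcc.nc

          # Extract the total cloud cover (TCC) variable into a new file
          cdo selname,TCC $INPUT_FILE $OUTPUT_FILE
      done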

  4. Write a script, called "submit-all.sh", to loop over dates from 01/09/2018 to 02/09/2018 and submit the "extract-era-data.sh" script to LOTUS for each day:

    1. You should define the following LOTUS directives:

      1. Standard output file - please ensure this is unique to each job by including the "%j" variable in the file name.
      2. Standard error file - please ensure this is unique to each job by including the "%j" variable in the file name.
    2. Queue name:

      1. We will use the main queue for quick serial jobs: short-serial
      2. NOTE: if working with a training account, you might need: --account=workshop --partition=workshop in your arguments.
    3. Job duration - to allocate a maximum run-time to the job, e.g.: "00:05:00" (5 mins)

    4. Estimated duration - to hint at the actual run-time of the job, e.g.: "00:01:00" (1 min)

      1. Setting a low estimate will increase the likelihood of the job being scheduled to run quickly.
    5. The Help page on submitting LOTUS jobs is here: https://help.jasmin.ac.uk/article/4890-how-to-submit-a-job-to-slurm

    6. And use the "sbatch" command to submit each job.

    7. If you need some advice you can use the script at:

      /gws/pw/j07/workshop/exercises/ex05/code/submit-all.sh

      [ Source: https://github.com/cedadev/jasmin-workshop/blob/master/exercises/ex05/code/submit-all.sh ]
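
    8. For reference, a minimal sketch of such a script is shown below. The directive values (partition, time limit, file names) are illustrative - check the reference script above for the exact details:

      #!/bin/bash
      # Submit one LOTUS job per day (initially just 1-2 September 2018)
      # Training accounts may need: --account=workshop --partition=workshop
      QUEUE=short-serial

      for DAY in 01 02; do
          # %j in the output/error file names is replaced by the SLURM job ID,
          # which keeps the files unique to each job
          sbatch -p $QUEUE -t 00:05:00 \
              -o extract-%j.out -e extract-%j.err \
              ./extract-era-data.sh 201809${DAY}
      done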

  5. Run the "submit-all.sh" script

  6. Examine which jobs are in the queue

    1. Type "squeue" to review your queued and running jobs.
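    2. To list only your own jobs, filter by username:

       squeue -u $USER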
  7. Examine the standard output and standard error files.

  8. If you are happy that the job is doing the right thing, now modify "submit-all.sh" so that it will run for all 30 days in September 2018.

  9. Re-run the "submit-all.sh" script.

  10. Examine which jobs are in the queue

  11. Kill one of the jobs whilst it is still running - just to see how it is done:

    1. Use the "scancel" command:

       scancel <job_id>
      

Answers to questions

  1. You have learnt about some basic commands for interacting with the SLURM scheduler (such as sbatch and squeue), which manages the submission and execution of jobs via the LOTUS queues. Which other commands might be useful when interacting with the scheduler?

Table 3 of this help page shows other SLURM commands, such as scancel and scontrol. You can find out more by typing man <command> at the command-line, e.g.: man scancel.
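
For example, scontrol can show the full details of a job, and sacct reports accounting information for jobs that have run (the job ID below is hypothetical):

scontrol show job 123456   # full details of a pending or running job
sacct -j 123456            # accounting record once the job has finished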

  2. Which queues are available on LOTUS? What is the difference between them? Why would you choose one over another?

There is a LOTUS queues help page which explains the capabilities of each SLURM queue.

  3. How can you instruct SLURM to allocate CPUs and memory to specific jobs when you run them? Can you change the allocations when the job is queuing?

Table 2 of this help page lists common command-line parameters that can be used to instruct SLURM how to allocate CPUs, memory and hosts to certain jobs.
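
In addition, a job that is still pending can generally be modified with scontrol update. For example (a hedged sketch - the resource values and job ID below are illustrative):

# request 2 CPUs and 8 GiB of memory at submission time
sbatch --ntasks=1 --cpus-per-task=2 --mem=8G extract-era-data.sh 20180901

# while the job is still queuing, adjust its time limit
scontrol update JobId=123456 TimeLimit=00:10:00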

  4. How can you cancel all your jobs in the SLURM queue?

The following command will do it:

scancel -u $USER