...
```sh
$ cat job-script.sh
#!/bin/sh
# run in LSF queue atlas-t3 and run up to 120 minutes (wall time)
#BSUB -q atlas-t3
#BSUB -W 120
#BSUB -R "select[rhel60 && cvmfs && inet] rusage[scratch=5.0, mem=1000:decay=0]"

# create a unique working directory on the batch node's /scratch space
myworkdir=/scratch/`id -un`.$$
mkdir $myworkdir
cd $myworkdir

# run payload
task1 < input_of_task1 > output_of_task1 2>&1 &
task2 < input_of_task2 > output_of_task2 2>&1 &
wait   # wait for the tasks to finish

# save the output to storage: use "cp" to copy to NFS spaces,
# or "xrdcp" to copy to the xrootd spaces
cp myoutput_file /nfs/slac/g/atlas/u02/myoutput_file
xrdcp myoutput_file root://atlprf01:11094//atlas/local/myoutput_file

# clean up
cd ..
rm -rf $myworkdir

$ bsub < job-script.sh   # submit the job
```
In the above script, the first two #BSUB directives tell LSF that the batch queue is "atlas-t3" and that the wall time limit is 120 minutes. Always specify a wall time: otherwise your job will be killed once it exceeds the default wall time limit of 30 minutes. The third #BSUB directive is optional. It tells LSF that the job wants to run on the RHEL6 platform ("rhel60") with CVMFS ("cvmfs") and an outbound internet connection ("inet"), and that the job needs up to 5 GB of space under /scratch and 1000 MB of RAM (these are hints to the LSF scheduler, not caps or limits).
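Note that #BSUB directives are ordinary shell comments: the script still runs under /bin/sh on its own, and only `bsub` interprets the directives when the script is fed to it on stdin. A minimal sketch illustrating this (the file name `demo-job.sh` and its payload are hypothetical, chosen just for the demonstration):

```sh
#!/bin/sh
# Create a minimal job script (hypothetical name demo-job.sh) and show
# that its #BSUB lines are plain comments: the shell ignores them, while
# "bsub < demo-job.sh" would read them as submission options.
cat > demo-job.sh <<'EOF'
#!/bin/sh
#BSUB -q atlas-t3
#BSUB -W 120
echo "payload ran"
EOF

sh demo-job.sh              # runs the payload; #BSUB lines are ignored
grep '^#BSUB' demo-job.sh   # the lines bsub would interpret
```

Equivalently, the same options can be given on the command line (e.g. `bsub -q atlas-t3 -W 120 < job-script.sh`); directives embedded in the script keep the submission self-documenting.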
With the "&" at the end of the two task lines (task1 and task2), the two tasks run simultaneously. If you want them to run sequentially, remove the two "&".
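The background-and-wait pattern can be sketched with placeholder commands (the `sleep`/`echo` tasks and the `out1`/`out2` file names below are stand-ins for real payloads):

```sh
#!/bin/sh
# Each trailing "&" puts a task in the background, so both run
# concurrently; "wait" blocks until every background task has exited.
( sleep 1; echo "task1 done" > out1 ) &
( sleep 1; echo "task2 done" > out2 ) &
wait   # returns only after both tasks finish

cat out1 out2
```

Because both tasks sleep in parallel, the whole script takes about one second rather than two; dropping the "&" characters would make it take two.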
...