Overview
This is an example of running jobs, and in particular SLIC, the Simulator for the Linear Collider, on FermiGrid, which is part of the Open Science Grid. SLIC is a Geant4-based simulation package that uses an XML geometry input format called LCDD to describe the geometry, sensitive detectors and readout geometry. In this example SLIC is tarred up and placed on web-accessible disk space. The grid job wgets the tar file, unpacks it and runs SLIC on the stdhep file that is provided with the tar package.
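For reference, packing up and publishing the package takes only ordinary shell commands; the directory name, account and web path below are placeholders, not the actual locations used here:

# pack up the SLIC distribution directory (directory name is an assumption)
tar czf SimDist.tgz SimDist/
# copy it to a web-accessible area (account and path are placeholders)
scp SimDist.tgz <username>@kyoto.fnal.gov:/path/to/web/area/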
This is only one way to do it. Other options include:
- sending the tar file with the job submission (see the sketch after this list)
- installing SLIC on nodes that are visible to the grid worker nodes
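For the first alternative, a minimal sketch of the extra lines in the Condor submit file, assuming SimDist.tgz sits in the directory you submit from:

# let Condor ship the tarball to the worker node along with the job
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
transfer_input_files = SimDist.tgz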
Prerequisites for sending jobs to the GRID
- get a DOE grid certificate from http://security.fnal.gov/pki/Get-Personal-DOEGrids-Cert.html
This page also explains how to export the certificate from the browser and how to deal with directory permissions and the like; a sketch of the openssl conversion follows this list.
- register with the ILC VO (Virtual Organization) at http://cd-amr.fnal.gov/ilc/ilcsim/ilcvo-registration.shtml which will guide you to https://voms.fnal.gov:8443/vomrs/ilc/vomrs
- Everything is set up on ILCSIM, so to try things out it is recommended to get an account on ILCSIM using the following form: http://cd-amr.fnal.gov/ilc/ilcsim/ilcsim.shtml
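As a concrete illustration of the certificate-export step in the first item, the PKCS#12 file saved from the browser is typically converted with openssl; the input file name mycert.p12 below is a placeholder:

mkdir -p ~/.globus
# extract the certificate and the private key from the browser export
openssl pkcs12 -in mycert.p12 -clcerts -nokeys -out ~/.globus/usercert.pem
openssl pkcs12 -in mycert.p12 -nocerts -out ~/.globus/userkey.pem
# grid tools require restrictive permissions on the private key
chmod 644 ~/.globus/usercert.pem
chmod 400 ~/.globus/userkey.pem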
Setting up your own gateway to the grid is beyond the scope of this write-up. It involves installing the Virtual Data Toolkit (VDT), obtaining a host certificate for the gateway machine, and so on. For an administration guide see the Fermi Grid web page.
Examples
The SLIC test job below actually tries to store the output in mass storage using the grid srmcp file transfer tool. It is probably easier, though, to transfer the output via Condor; an example will be provided once it has been tested. First set up the grid environment and obtain a VOMS proxy:
source /fnal/ups/grid/vdt/setup.sh
voms-proxy-init -voms ilc:/ilc/detector # give passwd etc.
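To verify that the proxy carries the expected VO attributes before submitting, the standard VOMS client query can be used:

voms-proxy-info -all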
To submit the job, do:
condor_submit mytestslicjob.run
where the job description file mytestslicjob.run looks like:
universe = grid
type = gt2
globusscheduler = fngp-osg.fnal.gov/jobmanager-condor
executable = /home2/ilc/wenzel/grid/test_slic.sh
transfer_output = true
transfer_error = true
transfer_executable = true
log = myjob.log.$(Cluster).$(Process)
notification = NEVER
output = myjob.out.$(Cluster).$(Process)
error = myjob.err.$(Cluster).$(Process)
stream_output = false
stream_error = false
globusrsl = (jobtype=single)(maxwalltime=999)
queue
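Until the tested Condor output-transfer example mentioned above is available, an untested sketch would add lines like these to the submit file (the output path matches the script below):

# ask Condor to ship the named output file back to the submit host
when_to_transfer_output = ON_EXIT
transfer_output_files = SimDist/ffHAA_2k.slcio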
The executable named in the submit file triggers the following script:
#!/bin/sh -f
wget http://kyoto.fnal.gov/wenzel/SimDist.tgz
tar xzf SimDist.tgz
cd SimDist
printenv
scripts/slic.sh -r 5 -g sidaug05.lcdd -i ffHAA_2k.stdhep -o ffHAA_2k
ls ffHAA_2k.slcio
# This sets up the environment for OSG
. $OSG_GRID/setup.sh
. $VDT_LOCATION/setup.sh
srmcp "file:///${PWD}/ffHAA_2k.slcio" "srm://cmssrm.fnal.gov:8443/srm/managerv1?SFN=/2/wenzel/slic/ffHAA_2k.slcio"
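Since srmcp simply takes a source URL and a destination URL, the same tool can later pull the file back out of mass storage, for example from a machine with the VDT environment set up:

srmcp "srm://cmssrm.fnal.gov:8443/srm/managerv1?SFN=/2/wenzel/slic/ffHAA_2k.slcio" "file:///${PWD}/ffHAA_2k.slcio"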
To run some commands directly on the grid head nodes, use a syntax like this:
globus-job-run fngp-osg.fnal.gov/jobmanager-condor /bin/ls /grid/app
globus-job-run fngp-osg.fnal.gov/jobmanager-condor /usr/bin/printenv
globus-job-run fngp-osg.fnal.gov/jobmanager-condor /bin/df
The examples above show how to check what grid applications are installed, what the runtime environment of a job looks like, and what file systems are mounted. To check for available SLIC/SimDist distributions, type:
globus-job-run fngp-osg.fnal.gov/jobmanager-condor /bin/ls /grid/app/ilc/detector/SimDist/
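The same mechanism is handy for checking which local account your grid certificate is mapped to on the gateway; /usr/bin/id is a standard binary, so this should work anywhere:

globus-job-run fngp-osg.fnal.gov/jobmanager-condor /usr/bin/id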
Checking and killing your jobs
You can see the status of all jobs using the following command:
condor_q
Or to check the jobs submitted by user <username>:
condor_q -submitter <username>
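Since these are grid-universe jobs, Condor-G can also report the Globus-side job state, assuming your Condor version provides the option:

condor_q -globus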
You can view a summary of all submitters and their queued jobs with the following command:
condor_status -submitters
To cancel a job, type condor_rm followed by the job number:
condor_rm <job number>
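condor_rm also accepts a user name, which removes all of that user's jobs at once:

condor_rm <username>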