
Overview

This is an example of running SLIC, the Simulator for the Linear Collider, on FermiGrid, which is part of the Open Science Grid. SLIC is a Geant4-based simulation package that uses an XML geometry input format called LCDD to describe the geometry, sensitive detectors, and readout geometry. In this example SLIC is tarred up and put on web-accessible disk space. The grid job wgets the tar file, unpacks it, and runs SLIC on the stdhep file that is provided with the tar package.

This is only one way to do it. Other options include:

  • sending the tar file with the job submission
  • installing SLIC on nodes that are visible to the grid worker nodes
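
For the first alternative, a minimal submit-file sketch (assumption: a Condor version with the standard file-transfer commands; the tarball name matches the one used in the script below):

```
# Hedged sketch: ship the tarball with the job instead of wget-ing it.
# The wrapper script would then unpack ./SimDist.tgz from the job's
# scratch directory.
transfer_input_files = SimDist.tgz
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
```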

Prerequisites for sending jobs to the GRID

  1. get a DOE grid certificate from http://security.fnal.gov/pki/Get-Personal-DOEGrids-Cert.html
    This page also explains how to export the certificate from the browser and how to deal with directory permissions and such.
  2. register with the ILC VO (Virtual Organization) at http://cd-amr.fnal.gov/ilc/ilcsim/ilcvo-registration.shtml which will guide you to:
    https://voms.fnal.gov:8443/vomrs/ilc/vomrs
  3. Everything is set up on ILCSIM, so to try things out it is recommended to get an account on ILCSIM using the following form:
    http://cd-amr.fnal.gov/ilc/ilcsim/ilcsim.shtml

If you don't use ILCSIM as the gateway, the Virtual Data Toolkit (VDT) needs to be installed on the machine from which you want to submit jobs. And, as always, if you plan on running any grid services you'll need a host certificate for your machine.

Examples

The SLIC test job below stores its output in mass storage using the grid srmcp file-transfer tool, but it is probably easier to transfer the output back via Condor; a tested example of that will be added later.
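
As a hedged, untested sketch, the Condor route would use the standard file-transfer commands in the submit file:

```
# Hedged sketch: let Condor bring the SLIC output back instead of srmcp.
# ffHAA_2k.slcio is the output name used in the example script; the job
# must leave it at the top level of its scratch directory for this to work.
should_transfer_files = YES
when_to_transfer_output = ON_EXIT
transfer_output_files = ffHAA_2k.slcio
```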

source /fnal/ups/grid/vdt/setup.sh
voms-proxy-init -voms ilc:/ilc/detector
# enter your grid pass phrase when prompted

To submit the job, do:

condor_submit mytestslicjob.run

where the job description file mytestslicjob.run looks like:

universe = grid
grid_type = gt2
globusscheduler = fngp-osg.fnal.gov/jobmanager-condor
executable = /home2/ilc/wenzel/grid/test_slic.sh
transfer_output = true
transfer_error = true
transfer_executable = true
log = myjob.log.$(Cluster).$(Process)
notification = NEVER
output = myjob.out.$(Cluster).$(Process)
error = myjob.err.$(Cluster).$(Process)
stream_output = false
stream_error = false
globusrsl = (jobtype=single)(maxwalltime=999)
queue

which triggers the following script:

#!/bin/sh -f
wget http://kyoto.fnal.gov/wenzel/SimDist.tgz
tar xzf SimDist.tgz
cd SimDist
printenv
scripts/slic.sh -r 5 -g sidaug05.lcdd -i ffHAA_2k.stdhep -o ffHAA_2k
ls ffHAA_2k.slcio
# This sets up the environment for OSG
. $OSG_GRID/setup.sh
. $VDT_LOCATION/setup.sh
srmcp "file:///${PWD}/ffHAA_2k.slcio" "srm://cmssrm.fnal.gov:8443/srm/managerv1?SFN=/2/wenzel/slic/ffHAA_2k.slcio"
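
The script above has no error handling: if the download or the unpack fails, SLIC still runs against missing input and srmcp ships nothing. A hedged sketch of the fetch-and-unpack step with explicit checks follows; the demo builds a local tarball in place of the wget so it can run anywhere, and on a worker node that part would be the wget line from the script above.

```shell
#!/bin/sh
# Hedged sketch: abort the job early if any step fails, rather than
# letting later steps run against missing input.
set -e                                  # any unchecked failure aborts

die() { echo "ERROR: $1" >&2; exit 1; }

# Demo stand-in for the download step so this sketch runs without network.
# On a worker node this would be:
#   wget http://kyoto.fnal.gov/wenzel/SimDist.tgz || die "download failed"
mkdir -p SimDist && echo demo > SimDist/README
tar czf SimDist.tgz SimDist && rm -rf SimDist

tar xzf SimDist.tgz || die "unpack failed"
[ -d SimDist ] || die "SimDist directory missing after unpack"
echo "unpack OK"
```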

To run some commands directly on the grid head nodes use a syntax like this:

globus-job-run fngp-osg.fnal.gov/jobmanager-condor /bin/ls /grid/app
globus-job-run fngp-osg.fnal.gov/jobmanager-condor /usr/bin/printenv
globus-job-run fngp-osg.fnal.gov/jobmanager-condor /bin/df

The examples above show how to check which grid applications are installed, what the runtime environment of a job looks like, and which file systems are mounted.
