Table of Contents

ILC VO

Firstly, you need to obtain membership in the ilc Virtual Organization (VO), which is sponsored by DESY. This is a VO on the LCG grid. Alternately, you can become a member of the calice VO, but this will only work if your institution is actually a member of that collaboration.

Follow the workflow at https://grid-voms.desy.de:8443/voms/ilc/register/start.action, which will require several rounds of confirmations and emails.

Somewhat confusingly, there is another ILC VO run by Fermilab located at https://voms.fnal.gov:8443/vomrs/ilc/vomrs.

Setup

Info

The rest of the tutorial assumes that you are running from an lxplus node at CERN and using the bash shell.

This command can be used to set up the LCG grid environment using tools on the DESY AFS file system.

No Format
source /afs/desy.de/group/grid/UI/GLITE-pro/etc/profile.d/grid_env.sh

There also exists a setup script maintained at CERN.

No Format
source /afs/cern.ch/project/gd/LCG-share/current/etc/profile.d/grid_env.sh

The DESY AFS script seems to work best. (I am having trouble running EDG commands with the CERN setup script.)

VOMS Server

At the beginning of each session, a proxy certificate needs to be obtained from the VOMS server.

No Format
voms-proxy-init -verify -debug -voms ilc

This command should be executed at the start of every session in which jobs are going to be submitted.

If you get the message VOMS Server for ilc not known! when trying to run this command, or if similar error messages occur when running other grid commands, then the information about the ILC VOMS server is missing and needs to be installed at your site.

The ILC VO information can be found on the VOMS at DESY page.

The ilc VOMS information can be downloaded from http://grid.desy.de/etc/vomses/ilc-grid-voms.desy.de. The contents of this file should be added to the list of VOMS configuration data kept in ~/.glite/vomses. Alternately, your system administrator can install the ilc VO information into a central config file.

The following commands will download the VOMS file and add the information to the user's configuration in the home directory.

No Format
mkdir -p ~/.glite/vomses # create the vomses directory if it does not already exist
cd ~/.glite/vomses
wget http://grid.desy.de/etc/vomses/ilc-grid-voms.desy.de # fetch the ILC VOMS file
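
For reference, a vomses entry is a single line of five quoted fields: a nickname, the VOMS server host, the port, the server's host certificate DN, and the VO name. The line below only illustrates the format; the actual port and DN come from the downloaded ilc-grid-voms.desy.de file.

No Format

"ilc" "grid-voms.desy.de" "<port>" "<host certificate DN>" "ilc"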

Now the voms-proxy-init command should work correctly.

No Format

voms-proxy-init -verify -debug -voms ilc
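
To double-check that the proxy was created and carries the ilc attributes, it can be inspected with voms-proxy-info (a standard command in the same tool set; the exact output depends on the middleware version).

No Format

voms-proxy-info -all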

LCG File System

In order to use the LCG file system, the LFC_HOST environment variable must be set.

No Format

export LFC_HOST=`lcg-infosites --vo ilc lfc`
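
As an optional check, lcg-infosites can also list the storage elements that accept the ilc VO.

No Format

lcg-infosites --vo ilc se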

List files on the mass storage system at DESY.

No Format

lfc-ls -l /grid

Put a file into DESY mass storage.

No Format

echo "test" > /tmp/test_file
lcg-cr -v --vo ilc -l lfn:/grid/ilc/test/test_file file:/tmp/test_file -d srm-dcache.desy.de
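
To confirm that the upload worked, list the new catalog entry (this simply repeats the lfc-ls command from above on the file that was just registered).

No Format

lfc-ls -l /grid/ilc/test/test_file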

Copy a file from mass storage to local disk.

No Format

lcg-cp -v --vo ilc lfn:/grid/ilc/test/test_file file:/tmp/test_file
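
When you are done with the test file, it can be removed again. This is a minimal sketch using the standard lcg-del command; the -a option removes all replicas together with the catalog entry.

No Format

lcg-del -a --vo ilc lfn:/grid/ilc/test/test_file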

Copying one of the Standard Model background files from a local NFS disk to the DESY dCache system and registering it in the grid catalog.

No Format

> . /afs/desy.de/group/grid/UI/GLITE/etc/profile.d/grid-env.sh
> export LFC_HOST=grid-lfc.desy.de
> voms-proxy-init -debug -verify -voms ilc
> lcg-cr -v --vo ilc -n 10 file:/nfs/slac/g/lcd/ilc_data/ILC500/StandardModel/250fb-1_-80e-_+30e+_polarization-003.stdhep \
  -l /grid/ilc/mc/ILC500/SM250fb-1/generated/250fb-1_-80e-_+30e+_polarization-003.stdhep \
  -d srm://srm-dcache.desy.de/pnfs/desy.de/ilc/mc/ILC500/SM250fb-1/generated/250fb-1_-80e-_+30e+_polarization-003.stdhep

Listing the available SM files

No Format

lfc-ls -l /grid/ilc/mc/ILC500/SM250fb-1/generated/
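
To see where a particular file is physically stored, its replicas can be listed with lcg-lr (a standard LCG command; the LFN below is one of the files registered above).

No Format

lcg-lr --vo ilc lfn:/grid/ilc/mc/ILC500/SM250fb-1/generated/250fb-1_-80e-_+30e+_polarization-003.stdhep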

Fetching a Grid LCIO File to SLAC from DESY mass storage

Here is a more advanced script showing how to fetch a file to SLAC, assuming that you have an lxplus account at CERN.

No Format

# ssh to cern
ssh lxplus

# setup grid tools
source /afs/cern.ch/project/gd/LCG-share/current/etc/profile.d/grid_env.sh

# start a grid session
voms-proxy-init -verify -debug -voms ilc

# copy the file to a temp location on lxplus (it's fast)
lcg-cp -v --vo ilc lfn:/grid/ilc/mc-2008/simulated/LDC01_05Sc/ZPole/M-6-5p2_ZPoleuds_LDC01_05Sc_LCP_Test_ZPole_0001.slcio file:/tmp/M-6-5p2_ZPoleuds_LDC01_05Sc_LCP_Test_ZPole_0001.slcio

# copy the file to slac (slow...go get some coffee)
scp /tmp/M-6-5p2_ZPoleuds_LDC01_05Sc_LCP_Test_ZPole_0001.slcio jeremym@iris01.slac.stanford.edu:/afs/slac/g/lcd/public_data/ILC/test/mokka/

The user name jeremym should be replaced above with your real SLAC account name.

Test Commands

The gLite UI will be used to run test jobs. It uses commands starting with "edg-" for job submission, monitoring, etc.

If not running from DESY, the edg commands will require a config file for the ilc VO, which can be copied from a DESY AFS location.

In another shell on lxplus, execute the following commands.

...

Now follow the instructions for submitting a test job at http://grid.desy.de/ilc/ under "Submitting jobs to the Grid".
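
The DESY page provides a test.jdl job description. If you want a self-contained file to experiment with first, a minimal JDL along the following lines should work; the /bin/hostname executable is only an illustration, not the DESY example itself.

No Format

cat > test.jdl <<EOF
Executable    = "/bin/hostname";
StdOutput     = "out";
StdError      = "err";
OutputSandbox = {"out","err"};
EOF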

Look for resources to run a job.

No Format
edg-job-list-match --config-vo ./edg_wl_ui.conf test.jdl

...

No Format
edg-job-submit --nogui --config-vo ./edg_wl_ui.conf -o test.jid test.jdl

Check the job status.

No Format
edg-job-status -i test.jid

...

No Format
edg-job-get-output -i test.jid


Running SLIC

Simple Shell Script

Create a shell script slic.sh.

No Format

#!/bin/sh
wget http://www.lcsim.org/dist/slic/slic_v2r3p0_geant4_v9r0_i686_linux-gnu.tar.gz
tar xzf ./slic_v2r3p0_geant4_v9r0_i686_linux-gnu.tar.gz
wget http://www.lcsim.org/detectors/acme0605/acme0605.lcdd
printenv
./SimDist/scripts/slic.sh -g ./acme0605.lcdd -r 1
ls *.slcio

The above script does the following.

  1. Downloads a tarball with the slic binary and untars it.
  2. Downloads a detector XML file.
  3. Prints the environment.
  4. Runs one event (single muon) on the detector file.
  5. Prints a list of LCIO files created.
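
Before submitting to the grid, the script can be tried out interactively (for example on an lxplus node) to verify that the tarball and detector file download and that slic runs; this assumes the 32-bit Linux binary works on the node you are logged into.

No Format

chmod +x slic.sh
./slic.sh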

Shell Script Using DESY Mass Storage

Here is another version of the script that uses grid commands to fetch a stdhep input file and upload the output LCIO file to DESY mass storage. (This example will only work if you have been granted access to DESY mass storage.)

No Format

#!/bin/sh
printenv
wget http://www.lcsim.org/dist/slic/slic_v2r3p0_geant4_v9r0_i686_linux-gnu.tar.gz
tar xzf ./slic_v2r3p0_geant4_v9r0_i686_linux-gnu.tar.gz
wget http://www.lcsim.org/detectors/acme0605/acme0605.lcdd
lcg-cp -v --vo ilc lfn:/grid/ilc/test/test.stdhep file:`pwd`/test.stdhep
./SimDist/scripts/slic.sh -g ./acme0605.lcdd -i ./test.stdhep -r 1
ls -la
lcg-cr -v --vo ilc -l lfn:/grid/ilc/test/outfile.slcio file:`pwd`/outfile.slcio -d srm-dcache.desy.de

After the job has run, the output file registered by the script can be checked in the catalog.

No Format

export LFC_HOST=`lcg-infosites --vo ilc lfc`
lfc-ls -l /grid/ilc/test/outfile.slcio

JDL File

Now make a file slic.jdl.

No Format

Executable         = "slic.sh";
StdOutput          = "out";
StdError           = "err";
InputSandbox       = {"slic.sh"};
OutputSandbox      = {"out","err","outfile.slcio"};

The OutputSandbox contains a list of files that will be cached for retrieval later, including the output LCIO file.

Job Submission, Monitoring, and Output Retrieval

Submit the SLIC test job.

No Format

edg-job-submit --nogui --config-vo ./edg_wl_ui.conf -o slic.jid slic.jdl

Check the job status.

No Format

edg-job-status -i slic.jid

Retrieve the output. By default, the job output will go into a directory in /tmp, so we specify the current directory instead.

No Format

edg-job-get-output --dir `pwd` -i slic.jid

gLite User Guide
Virtual Data Toolkit (VDT)
CERN AFS UI Setup - setup instructions on lxplus@cern