Using a release
The xrootd tools used by FGST are in the release directory of the glastdat user. Sourcing the wk_env script sets PATH to the release bin directory and PYTHONPATH for the Python packages.
Code Block
% cd /u/gl/glastdat/releases/admin/current
% . bin/wk_env
Running commands remotely on servers
The ScaRunOnHosts script will take a list of FGST xrootd servers and a command. It will ssh to all of the servers in the list and execute the command directly on the server. The basic usage is:
Code Block
% ScaRunOnHosts -s wain018,wain034 -r -e -- <command>
% ScaRunOnHosts -f glast -r -e -- <command>
It should be run from a release as described above. The available options are:
-p : ssh to all hosts in parallel
-w : wait for all hosts to be done; only useful with the -p option
-e : set up the release environment on the remote host. Without this option the
     command to be run has to be found in the default PATH of the glastdat user
-s srv1,srv2,..,srvn : comma-separated list of servers to run the command on
-f <flist> : read the server list from a file. The file is looked up in /afs/slac.stanford.edu/g/glast/applications/xrootd/config/
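Under the hood the script is essentially a loop over the host list that runs the command via ssh (backgrounded when -p is given, then collected with -w). A minimal sketch, with a dry-run echo in place of the real ssh call so it runs anywhere; the host names, command, and variable names are illustrative, not the actual implementation:

```shell
# Sketch of the ScaRunOnHosts host loop (illustrative only).
HOSTS="wain018 wain034"        # with -f, this list would come from a file
CMD="hostname"
RAN=""
for h in $HOSTS; do
    # The real tool would run: ssh "$h" "$CMD" (in the background with -p,
    # followed by a wait for all jobs when -w is given).
    echo "would run on $h: $CMD"
    RAN="$RAN $h"
done
```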
Remove old recon files
The script reads from a file the list of recon files that can be purged from an xrootd server. Purging a file includes the following steps:
#) Check if the file is on disk.
#) Check if the file is marked as migrated. On Solaris this means comparing the file's mtime with that of the corresponding .lock file; on Linux xattrs are used instead.
#) If the file is marked as migrated, check in HPSS whether the file exists and compare file sizes.
#) Remove the file (and its .lock file) if the HPSS check succeeded.
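The Solaris-style decision in the steps above can be sketched locally; the temporary directory, file names, and the hpss_copy file standing in for the HPSS copy are all illustrative, not part of the real script:

```shell
# Local sketch of the purge decision (Solaris-style .lock mtime check).
dir=$(mktemp -d)
f="$dir/r0001_recon.root"
echo data > "$f"
touch -t 202001010000 "$f"      # backdate the data file
touch "$f.lock"                 # .lock newer than file => marked migrated
cp "$f" "$dir/hpss_copy"        # stand-in for the copy in HPSS

purge=no
if [ -f "$f" ] && [ "$f.lock" -nt "$f" ]; then        # steps 1 and 2
    s1=$(wc -c < "$f")
    s2=$(wc -c < "$dir/hpss_copy")                    # step 3: compare sizes
    [ "$s1" -eq "$s2" ] && purge=yes
fi
[ "$purge" = yes ] && rm -f "$f" "$f.lock"            # step 4
echo "purge=$purge"
```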
Code Block: Usage SrvPurgeFiles.sh
SrvPurgeFiles.sh [options] pass
options:
 -l write output to log file
 -n number of files to purge
 -T test, print only the command and exit
pass: P202 | P300
Example:
Code Block
% cd /u/gl/glastdat/releases/admin/current
% . bin/wk_env
% ScaRunOnHosts -f glast -r -e -- SrvPurgeFiles.sh -l P202
% ScaRunOnHosts -f glastsol -p -w -r -e -- SrvPurgeFiles.sh -l P300
The above commands will ssh to all xrootd servers and do the purge. Log files are written to /var/adm/mps/logs (Solaris) or /var/adm/frm/logs (Linux), and the logfile name is purge_file.YYYYMMDDTHHMMSS. Since all output from SrvPurgeFiles.sh is written to a log file, the commands above produce no output on the terminal. The first (second) example purges the recon files that are superseded by the P202 (P300) reprocessing.
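Because the timestamp format sorts lexicographically, the newest log can be picked up with a plain sort. A self-contained illustration, using a temporary directory in place of /var/adm/frm/logs and made-up log names:

```shell
# Find the newest purge log (timestamped names sort chronologically).
logdir=$(mktemp -d)    # stands in for /var/adm/frm/logs (Linux)
touch "$logdir/purge_file.20240101T000000" "$logdir/purge_file.20240102T000000"
latest=$(ls "$logdir"/purge_file.* | sort | tail -1)
echo "latest log: $latest"
```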
Migration to HPSS
Migrate files using the daily file lists
Code Block
ScaRunOnHosts -f glast -e -r -- SrvMigrateFiles.sh -b [-r] [-e]
The -r option migrates the recon files; -e uses the extra-backup file lists.
Re-migrate a file
- Create a file, <file-list>, with the filenames that need to be re-migrated (/glast/....)
- Mark the files on the server as migratable (Linux):
Code Block
frm_admin mark -m ...
- Run the migration:
Code Block
SrvMigrateFiles.sh <file-list>
Migration commands
Show the current migration status
ssh to all data servers, tail the last few lines of the most recent migrate log file, and count the number of migrated files.
Code Block
ScaRunOnHosts -f glast -e -r -- SrvMigrStat.sh
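The per-server status check boils down to tailing the newest migrate log and counting the migrated files. A hedged local sketch; the log content and the line format counted here are assumptions, not the real SrvMigrStat.sh output:

```shell
# Count migrated files in a migrate log (log format here is made up).
log=$(mktemp)          # stands in for the newest log in /var/adm/frm/logs
cat > "$log" <<'EOF'
migrated /glast/mc/file1.root
migrated /glast/mc/file2.root
skipped  /glast/mc/file3.root
EOF
tail -n 5 "$log"                  # show the last few lines
n=$(grep -c '^migrated' "$log")   # count the migrated files
echo "migrated files: $n"
```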
Stop migration using semaphore file
The migration tools check for a semaphore file (/var/adm/frm/STOP_MIGR or /var/adm/mps/STOP_MIGR) and will exit if it exists. To set, remove, or list the semaphore files on all servers run:
Code Block
ScaRunOnHosts -f glast -e -r -- SrvMigrateFlags.sh [list | rm | set]
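The semaphore logic itself is a plain existence check. A standalone sketch, with a temporary directory standing in for /var/adm/frm and the exit replaced by a flag so the snippet runs anywhere:

```shell
# Semaphore check as performed by the migration tools (sketch).
stopdir=$(mktemp -d)            # stands in for /var/adm/frm (or /var/adm/mps)
touch "$stopdir/STOP_MIGR"      # 'set' the semaphore
if [ -e "$stopdir/STOP_MIGR" ]; then
    echo "STOP_MIGR present - migration would exit"
    stopped=yes
else
    stopped=no
fi
```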