...
The settings table contains only name/value pairs. The currently supported pairs are:
- master – The value field contains the host that is considered the master; cron jobs execute only on the master (see the sketch below).
- runTime – Controls how long the cron script remains running before it shuts down.
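A minimal sketch of the master check a cron-driven script might perform, assuming a MySQL database. The database name, credentials, and connection details here are hypothetical; the settings table and its name/value columns follow the description above:

use strict;
use warnings;
use DBI;
use Sys::Hostname;

# Hypothetical DSN and credentials for the batch submission database.
my $dbh = DBI->connect('DBI:mysql:database=BatchSub;host=glastDB.slac.stanford.edu',
                       'reader', 'secret', { RaiseError => 1 });

# Look up which host is the master.
my ($master) = $dbh->selectrow_array(
    q{SELECT value FROM settings WHERE name = 'master'});

# Cron jobs only execute on the master host.
exit 0 unless hostname() eq $master;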
...
The fields in the table are used as follows:
- jobID – A unique ID created automatically when a new job is registered with the batch submission system.
- lsfId – Initially 0; filled in with the ID returned by LSF once the job is submitted.
- command – The full path of the executable to be submitted to LSF.
- args – The arguments to be passed to the executable.
- queue – The queue the job is submitted to.
- batchOpts – A string containing the options to pass to bsub at submission.
- status – The current status of the job. One of:
  - waiting – The job is waiting to be submitted.
  - submitting – The bsub command is in the process of executing.
  - pending – The job has been submitted to LSF but has not started executing yet.
  - running – The job has started executing.
- tries – A counter of how many times the job has attempted to execute the bsub command and failed.
- onSuccess* – A string indicating what to do when the job is submitted, starts executing, or finishes. The format is either script:/path/to/script or email:valid@email.address (see the sketch after this list).
- workingDir – Path to the working directory to use when running the job.
- outputLocation – Full path to the file where the job's output should be saved.
- user – The user to run the job as. Currently only the glast user accounts (glastrm, glast, etc.) are supported.
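Because the onSuccess* strings follow a simple tag:argument convention, a state-change handler only needs to split on the first colon. A hypothetical Perl sketch; the handler name, the sendmail path, and passing the job ID to the script are assumptions:

use strict;
use warnings;

# Dispatch one onSuccess* value of the form
# "script:/path/to/script" or "email:valid@email.address".
sub dispatch_on_success {
    my ($action, $jobID) = @_;
    return unless defined $action && length $action;

    if ($action =~ /^script:(.+)$/) {
        system($1, $jobID);    # run the named script (job ID argument is an assumption)
    } elsif ($action =~ /^email:(.+)$/) {
        open my $mail, '|-', '/usr/sbin/sendmail', '-t' or die "sendmail: $!";
        print $mail "To: $1\nSubject: job $jobID changed state\n\n";
        close $mail;
    }
}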
...
- /u/gl/glast/infraCron/batchSub.pl – Executed by the cron system to submit jobs that are in the waiting state (sketched below).
- /u/gl/glast/infraBin/bsubChange.pl – Executed every time a job changes state (pending to running to finished).
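As a rough illustration of what a submission pass like batchSub.pl does for each waiting job, here is a hedged sketch. The job table name jobs, the database name, and the credentials are assumptions; the fields and status transitions follow the list above, and the "Job <id> is submitted" line is standard LSF bsub output:

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('DBI:mysql:database=BatchSub;host=glastDB.slac.stanford.edu',
                       'batch', 'secret', { RaiseError => 1 });

my $jobs = $dbh->selectall_arrayref(
    q{SELECT jobID, command, args, queue, batchOpts, outputLocation
      FROM jobs WHERE status = 'waiting'}, { Slice => {} });

for my $job (@$jobs) {
    # Mark the job as submitting and count the attempt.
    $dbh->do(q{UPDATE jobs SET status = 'submitting', tries = tries + 1
               WHERE jobID = ?}, undef, $job->{jobID});

    my $out = `bsub -q $job->{queue} -o $job->{outputLocation} $job->{batchOpts} $job->{command} $job->{args}`;

    if ($out =~ /Job <(\d+)> is submitted/) {
        # Record the LSF ID; the job is pending until LSF starts it.
        $dbh->do(q{UPDATE jobs SET lsfId = ?, status = 'pending'
                   WHERE jobID = ?}, undef, $1, $job->{jobID});
    } else {
        # Put the job back to waiting so the next pass retries it.
        $dbh->do(q{UPDATE jobs SET status = 'waiting' WHERE jobID = ?},
                 undef, $job->{jobID});
    }
}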
...
The database is stored on glastDB.slac.stanford.edu under the database name Workflow. It contains the following tables:
- action – The rules that determine which script to execute next, based on the rule set and the script that just finished.
- forceTrigger – Entries for manually started workflow runs that were not triggered by the trigger script.
- forceTriggerSettings – Options to pass to the workflow when forcing a run via the forceTrigger table.
- main – The main table containing all known workflows and the scripts for each workflow.
- running – Instances of registered workflows that are currently running (see the query sketch after this list).
- settings – Settings for the workflow system.
- trigger – Trigger scripts for the defined workflows.
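When troubleshooting, it can help to look at the running table directly. A minimal read-only sketch; the credentials are placeholders, and only the database name, host, and table name come from the list above:

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('DBI:mysql:database=Workflow;host=glastDB.slac.stanford.edu',
                       'reader', 'secret', { RaiseError => 1 });

# Fetch every currently running workflow instance.
my $rows = $dbh->selectall_arrayref(q{SELECT * FROM running}, { Slice => {} });
printf "%d workflow instance(s) currently running\n", scalar @$rows;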
...
The webpage for the Workflow system is https://www.slac.stanford.edu/www-glast-dev/cgi/Workflow. The scripts to display this webpage are located in /afs/slac.stanford.edu/g/www/cgi-wrap-bin/glast/ground/cgi/Workflow. As mentioned earlier, these pages currently do not work due to graphviz installation problems.
...
The Release Manager is controlled by a set of scripts located in /u/gl/glastrm/ReleaseManager/, /u/gl/glast/perl-modules, /u/gl/glast/ReleaseManager, and /u/gl/glast/infraCron. The scripts that require explanation are:
- trigger.pl – Executed by the Workflow system to determine when a new job should be started.
- rmTodo.pl – Executed to perform user-initiated functions such as erasing builds, triggering builds, etc.
- All other scripts are fairly self-explanatory and take arguments of the form package-version-tag (e.g., GlastRelease-v20r0p1-rhel4_gcc34).
On Windows, trigger.pl and rmTodo.pl do not exist; all other scripts have Windows equivalents in V:\Glast_Software\Toaster\tools\ReleaseManager.
Many of the scripts in /u/gl/glastrm/ReleaseManager/src/linux can be run by hand when there are problems:
compile.pl GlastRelease-v20r0p1-rhel4_gcc34
test.pl GlastRelease-v20r0p1-rhel4_gcc34
createUserRelease.pl GlastRelease-v20r0p1-rhel4_gcc34
Web interface
The web page for the Release Manager is https://www.slac.stanford.edu/www-glast-dev/cgi/ReleaseManager. It is controlled by the SLAC web server. The information is displayed by Perl scripts located in /afs/slac/g/www/cgi-wrap-bin/glast/ground/cgi/ReleaseManager.
Release Manager paths
...
- /afs/slac/g/glast/ground/GLAST_EXT/tag – Location of the external libraries used while compiling.
- /nfs/farm/g/glast/u30/builds/ – Location of the builds performed by the Release Manager.
- /nfs/farm/g/glast/u05/extlib/ – Location of the external libraries, tarred up for the installer to use.
- /nfs/farm/g/glast/u09/binDist – Location of the installer files for Release Manager builds.
- /nfs/farm/g/glast/u09/builds – Old location where the Release Manager used to store builds.
- /nfs/farm/g/glast/u09/documentation – Location where the doxygen documentation is stored.
- /nfs/farm/g/glast/u09/html – Location where the output from checkout, build, and unit tests is stored.
...
- V:\Glast_Software\Toaster\GLAST_EXT – Location of the external libraries.
- V:\Glast_Software\Toaster\tools\builds – Location of the builds.
- The other paths are identical but are accessed via the Windows path \\slaccfs\...
...
The database contains these tables:
- checkout – Checkout information about the packages built for a particular build.
- checkoutPackage – Information about a particular build.
- compile – Compile information about the packages built for a particular build.
- exclude – Builds that should not be built.
- settings – Settings for the Release Manager.
- status – The status of builds currently in progress.
- test – Unit test information about the packages built for a particular build.
- todo – Pending tasks for the Release Manager to perform.
...
The database for the Archiver is stored on glastDB.slac.stanford.edu under the database name Archive. It contains the following tables:
- archiveContent – The files archived.
- pending – Pending archive entries.
- tarFiles – A list of tar files that belong to a single archive job and their locations in mstore/gstore/astore.
- task – The main table containing the archive jobs.
...
The scripts for the Archiver are stored in /u/gl/glast/infraBin/Archiver:
- archive.pl – Invoked by the workflow when archiving.
- archiver.pl – The controller script with which users can archive, restore, and delete jobs.
- delete.pl – Invoked by the workflow when deleting archived files.
- determineTask.pl – Invoked to determine whether a job is for archiving, deleting, restoring, etc.
- finish.pl – Invoked as the last script by the workflow to clean up the database and mstore/gstore/astore.
- restore.pl – Invoked for restore operations.
- verify.pl – Invoked for verify operations.
All of these scripts are mostly frontends to the mstore/gstore/astore applications. They use the Expect Perl module to programmatically control tar, which expects interactive input when creating and splitting tar files.
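The interactive case is tar's multi-volume mode, which prompts for each new volume. Here is a hedged sketch of the Expect pattern involved; the file names, volume size, and prompt handling are illustrative only (a real script would answer the prompt with the next volume's file name):

use strict;
use warnings;
use Expect;

# Drive tar's multi-volume mode, which prompts interactively
# (e.g. "Prepare volume #2 for 'archive.tar' and hit return:").
my $exp = Expect->spawn('tar', '-c', '-M', '-L', '1024000',
                        '-f', 'archive.tar', 'CLHEP-1.9.2.2.tar.gz')
    or die "cannot spawn tar: $!";

while ($exp->expect(3600, '-re', 'Prepare volume #\d+.*:')) {
    # Illustrative only: a real script would supply a new volume name here.
    $exp->send("\n");
}
$exp->soft_close();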
Here is an example of how to archive the file CLHEP-1.9.2.2.tar.gz from the directory /nfs/farm/g/glast/u05/extlib/tiger_gcc33:
/afs/slac.stanford.edu/u/gl/glast/infraBin/Archiver/archiver.pl --module "Manual" --callback "user:kiml" --path "/nfs/farm/g/glast/u05/extlib/tiger_gcc33" --user glast --name "u05.extlib.tiger_gcc33.CLHEP-1.9.2.2.tar.gz" --file CLHEP-1.9.2.2.tar.gz --method mstore archive
And to restore it to /nfs/farm/g/glast/u30/tmp:
/afs/slac.stanford.edu/u/gl/glast/infraBin/Archiver/archiver.pl --module "Manual" --callback "user:kiml" --path "/nfs/farm/g/glast/u30/tmp" --user glast --name "u05.extlib.tiger_gcc33.CLHEP-1.9.2.2.tar.gz" --file CLHEP-1.9.2.2.tar.gz --method mstore restore
Notification
The RM scripts notify users of problems encountered during checkout, compile, or unit tests. This notification is done by the finish.pl script. It checks the database for every package that belonged to a build to determine whether the package had any errors during checkout, build, or unit test compiles. If there were errors, the script determines the author responsible for the failure and notifies the author. Additionally, all failures are accumulated and sent to a general failure mailing list as specified by the settings page.
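A hedged sketch of the accumulate-and-notify step; the failure data, the address construction, and the mailing-list address are placeholders, and the sendmail path is an assumption:

use strict;
use warnings;

# %failures maps author -> list of failure descriptions collected
# from the checkout/compile/test tables (contents are made up here).
my %failures = (
    kiml => ['GlastRelease-v20r0p1-rhel4_gcc34: compile error in package foo'],
);

# Notify each author of their own failures.
for my $author (keys %failures) {
    send_mail("$author\@slac.stanford.edu", join("\n", @{ $failures{$author} }));
}

# Send the accumulated failures to the general failure mailing list.
send_mail('failure-list@example.org', join("\n", map { @$_ } values %failures));

sub send_mail {
    my ($to, $body) = @_;
    open my $mail, '|-', '/usr/sbin/sendmail', '-t' or die "sendmail: $!";
    print $mail "To: $to\nSubject: Release Manager failures\n\n$body\n";
    close $mail;
}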
Webpages
The web page for the Archiver is https://www.slac.stanford.edu/www-glast-dev/cgi/DiskArchive. It is controlled by the SLAC web server. The information is displayed by Perl scripts located in /afs/slac/g/www/cgi-wrap-bin/glast/ground/cgi/archive.
...
The database is located on glastDB.slac.stanford.edu and is called dependency. It consists of several tables:
- externals – Lists the external libraries, their versions, and the path for a particular package.
- packages – Lists packages and their dependencies. Top-level packages have the mainPkg entry set to true (1).
- pkgdep – Creates the dependency list between packages in the packages table.
- tags – A list of supported tags that can be downloaded.
...
The frontend script is what others know as the Installer. It is installed in ~glast/infraBin/installer.pl. The script reads the contents of the database using the glastreader account and displays download choices for the user. It works on both Windows and Linux, provided that unzip and Perl are available on Windows.
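As an illustration of how the Installer might walk the dependency tables, here is a hedged sketch using the read-only glastreader account; the password, the pkgdep column names (parent, child), and the top-level package name are assumptions, while the database name, host, account, and table names come from the text above:

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('DBI:mysql:database=dependency;host=glastDB.slac.stanford.edu',
                       'glastreader', 'secret', { RaiseError => 1 });

# Recursively collect every package reachable through pkgdep.
sub all_deps {
    my ($pkg, $seen) = @_;
    return if $seen->{$pkg}++;
    my $children = $dbh->selectcol_arrayref(
        q{SELECT child FROM pkgdep WHERE parent = ?}, undef, $pkg);
    all_deps($_, $seen) for @$children;
}

my %seen;
all_deps('GlastRelease', \%seen);    # hypothetical top-level package
print "$_\n" for sort keys %seen;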