The GLAST "pipeline" is a software mechanism for organizing and executing massively parallel computing projects. Internally, the pipeline consists of a server application, web applications, Unix commands, and Oracle tables. Externally, the pipeline offers a general framework within which to organize and execute the desired data processing.
See also the Workbook version of the Pipeline II user's guide.
Main organizational concepts of the pipeline include:
Task services offered by the pipeline include:
Operator services offered by the pipeline include:
Basic steps for using the pipeline:
From the GLAST Ground Software Portal, click on Pipeline II. The Task Summary page will be displayed:
From the Task Summary, you can:
Note: When you click on the Clear button, the default list (All tasks) will be displayed.
If a task has failed, you can drill down from the task name, checking the status column until you find the stream that failed; then check the log file for that stream (accessible from the links located in the right-most column).
If a task is running, a Task Dependency Flow Chart (see below) will be displayed when you click on the task Name (e.g., TestSubTask1):
Note: If you have a SLAC Unix or Windows userid/password, you can also log in to Pipeline Admin.
From the Pipeline Admin GUI, you can upload a pipeline task, create a stream, restart the server, and delete a task:
As an alternative to using the web interface to control the pipeline, you can use command-line tools to achieve the same goals.
To get details on using the Pipeline II command-line tools, enter:
/afs/slac/g/glast/ground/bin/pipeline -h
This currently gives:
Usage: pipeline [-options] <command>
parameters:
    <command>             Command to execute, one of: info load restart createStream shutdown ping
options:
    --help                Show this help page; or if <command> specified, show Command-Specific help
    --mode <mode=PROD>    Specify Data Source {PROD, DEV}
Get command-specific help for the createStream command:
/afs/slac/g/glast/ground/bin/pipeline -h createStream
which will display:
Command-specific help for command createStream
Usage: pipeline createStream [-options] <task> [file1 file2...]
parameters:
    <task>                     Task name (and optional version) for which to create the new stream.
    [file1 file2...]           Space separated list of filenames to make available to the stream.
options:
    --stream <Stream ID=-1>    Integer stream identifier. Auto assigned if option not specified.
    --define <name=value>      Define a variable. Syntax is "name=value"
Create a stream:
/afs/slac/g/glast/ground/bin/pipeline -m PROD createStream -S 2 -D "downlinkID=060630001,numChunks=10,productVer=0" -D "fastCopy=0" CHS-level1
This creates a stream with StreamID=2 for the task CHS-level1 and defines the variables "downlinkID=060630001,numChunks=10,productVer=0" and "fastCopy=0" (you can give multiple -D variable-definition options). The stream is created in the PROD server, as specified by the -m (data source mode) flag.
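If you need to create streams from your own scripts, one option is simply to wrap the command line shown above. The Python sketch below is illustrative only; it reuses the exact invocation from the example, and the create_stream helper is a name introduced here, not part of the pipeline tools.

```python
import subprocess

PIPELINE = "/afs/slac/g/glast/ground/bin/pipeline"

def create_stream(task, stream_id, defines, mode="PROD"):
    """Run 'pipeline createStream', passing one -D option per definition string."""
    cmd = [PIPELINE, "-m", mode, "createStream", "-S", str(stream_id)]
    for definition in defines:       # multiple -D options are allowed
        cmd += ["-D", definition]
    cmd.append(task)
    subprocess.check_call(cmd)

# Recreates the example above: StreamID=2 for task CHS-level1.
create_stream("CHS-level1", 2,
              ["downlinkID=060630001,numChunks=10,productVer=0", "fastCopy=0"])
```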
The pipeline also has a Java API, callable from other Java programs, which is used by the GLAST data server; it is packaged as part of the org-glast-pipeline-client package.
See the JavaDocs for the PipelineClient class for more details.
When editing an XML file for the pipeline, you are encouraged to use an editor that can validate XML files against an XML schema, since this will save you a lot of time. EMACS users may be interested in this guide to using XML with EMACS.
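If you prefer to validate outside the editor, the same check can be scripted. Below is a minimal sketch using Python's lxml package; the schema and task file names are placeholders for your own files.

```python
from lxml import etree

# Placeholder file names: substitute your task definition and the
# pipeline task XML schema.
schema = etree.XMLSchema(etree.parse("pipeline.xsd"))
doc = etree.parse("myTask.xml")

# Raises DocumentInvalid, with the offending line number, if the task
# definition does not conform to the schema.
schema.assertValid(doc)
```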
Warning
Everything beyond this point is a big mess and probably wrong.
Batch jobs will always have the following environment variables set:
| Variable | Usage |
|---|---|
| PIPELINE_PROCESSINSTANCE | The internal database id of the process instance |
| PIPELINE_STREAM | The stream number |
| PIPELINE_STREAMPATH | The stream path. For a top-level task this will be the same as the stream number; for sub-tasks it will be of the form i.j.k |
| PIPELINE_TASK | The task name |
| PIPELINE_PROCESS | The process name |
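For example, a job can read these directly from its environment. The sketch below is Python, assuming the batch job runs a Python script; the directory layout it derives is purely hypothetical.

```python
import os

# All five variables are set by the pipeline for every batch job (see table above).
task        = os.environ["PIPELINE_TASK"]
process     = os.environ["PIPELINE_PROCESS"]
stream      = os.environ["PIPELINE_STREAM"]
stream_path = os.environ["PIPELINE_STREAMPATH"]   # e.g. "2", or "2.0.1" for a sub-task

# Hypothetical use: derive a per-stream output directory from the stream path.
output_dir = "/scratch/%s/%s" % (task, stream_path)
print("process %s of task %s, stream %s -> %s" % (process, task, stream, output_dir))
```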
The pipeline object provides an entry point for communicating with the pipeline server in script processes. Below is a summary of the functionality currently available.
Please see the JavaDoc page for the pipeline Java interface.
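As a concrete illustration, a Jython scriptlet might use the pipeline object along the lines below. This is a minimal sketch: the method names (setVariable, createSubstream), their signatures, and the processChunk sub-task are assumptions for illustration, not confirmed API, so check them against the JavaDoc.

```python
# Minimal Jython scriptlet sketch. The 'pipeline' object is supplied by the
# server; the method names used here are ASSUMED for illustration only.
nChunks = 10

# Record a value for use by later processes in the stream (assumed method).
pipeline.setVariable("numChunks", nChunks)

# Fan out one substream of a hypothetical sub-task per chunk (assumed method).
for i in range(nChunks):
    pipeline.createSubstream("processChunk", i, "chunkId=%d" % i)
```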
The datacatalog object provides an entry point for communicating with the datacatalog service in script processes. Below is a summary of the functionality currently available.
Registers a new Dataset entry with the Data Catalog.
dataType is a character string specifying the type of data contained in the file. Examples include MC, DIGI, RECON, MERIT. This is an enumerated field, and must be pre-registered in the database. A Pipeline-II developer can add additional values upon request.
Note: Maximum length is 20 characters.
logicalPath is a character string representing the location of the dataset in the virtual directory structure of the Data Catalog. This parameter contains three fields: the "folder", the (optional) "group", and the dataset "name". The encoding is "/path/to/folder/group:name"; if the optional group specification is omitted, the encoding is "/path/to/folder/name".
Example: /ServiceChallenge/Background/1Week/MC:000001 represents a dataset named "000001" stored in a group named "MC" within the folder "/ServiceChallenge/Background/1Week/".
Example: /ServiceChallenge/Background/1Week/000001 represents a dataset named "000001" stored directly within the folder "/ServiceChallenge/Background/1Week/".
Note: Maximum length is 50 characters each for a subdirectory name, the group name, and the dataset name.
filePath is a character string representing the physical location of the file. This parameter contains two fields: the "file path on disk" and the (optional) "site" of the disk cluster. The encoding is "/path/to/file@SITE". The default site is "SLAC".
Example: /nfs/farm/g/glast/u34/ServiceChallenge/Background/1Week/Simulation/1Week/AllGamma/000001.MC@SLAC
Note: Maximum file-path length is 256 characters, maximum site length is 20 characters.
attributes [optional] is a colon-delimited character string specifying additional attributes with which to tag the file. The encoding is "a=1:b=apple:c=23.6". All attribute values are stored in the database as ASCII text. No expression evaluation is performed.
Example: mcTreeVer=v7r3p2:meanEnergy=850MeV
Note: Maximum length is 20 characters for attribute name and 256 characters for attribute value.
By default, attribute values are stored in the database as Strings. You can force storage of the value as a Number or Timestamp data type by applying the following naming convention to the "name" part of the "name=value" attribute definition string:
Number: "^n[A-Z].*"
ex: nRun, nEvents
Timestamp: "^t[A-Z].*"
ex: tStart, tStop
String: Everything else
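Putting the parameter encodings together, a scriptlet call might look like the sketch below. The method name registerDataset and the argument order are assumptions inferred from the descriptions above, so check the actual interface; the values reuse the documented examples.

```python
# Hypothetical scriptlet call; the method name and argument order are
# ASSUMED from the parameter descriptions above.
datacatalog.registerDataset(
    "MC",                                            # dataType (pre-registered enum)
    "/ServiceChallenge/Background/1Week/MC:000001",  # logicalPath: folder/group:name
    "/nfs/farm/g/glast/u34/ServiceChallenge/Background/1Week/Simulation/1Week/AllGamma/000001.MC@SLAC",
    "nEvents=5000:mcTreeVer=v7r3p2")                 # attributes: nEvents -> Number, mcTreeVer -> String
```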