
Introduction

Terminology

Task

A top-level definition of work to be performed by the pipeline. A task may consist of one or more processes, and zero or more nested sub-tasks. Tasks are defined by an XML file.

Stream

A stream represents a single request to run the processes within a task. Streams always have an associated stream number, which must be unique within each task. The stream number is always set at create stream time, either explicitly by the user or implicitly.

Sub-Task

A task contained within a parent task. 

Sub-Stream 

A stream corresponding to a sub-task. 

Process

A single step within a task or subtask. Each process must be either a script or a job.

Job

A process which results in a batch job being run.

Script

A process which results in a script being run inside the pipeline server itself. These small scripts are typically used to perform simple calculations, set variables, create subtasks, or make entries in the data catalog. Scripts can call functions provided by the pipeline itself, as well as additional functions for adding entries to the data catalog.

Variables 

Pipeline variables can be defined in a pipeline XML file, either at the task level or at the level of individual processes. They can also be defined at create stream time. Processes inherit variables from:

  1. The task which contains them
  2. Any parent task of the task which contains them
  3. Any variables defined at create stream time
  4. Variables set by any process instance on which they depend (recursively)

Variables from other processes or tasks can also be accessed using the pipeline object.
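The inheritance order above can be modeled as a scope merge. The sketch below is purely illustrative (it is not the Pipeline-II API), and it assumes, for illustration, that later entries in the list above take precedence over earlier ones; the source does not state the precedence explicitly.

```python
# Illustrative model of pipeline variable inheritance (NOT the real API).
# Assumption: sources later in the documented list override earlier ones,
# so ascending precedence is: parent task < containing task < create-stream
# definitions < variables set by upstream process instances.
def resolve_variables(task_vars, parent_vars, stream_vars, upstream_vars):
    merged = {}
    for scope in (parent_vars, task_vars, stream_vars, upstream_vars):
        merged.update(scope)  # later scopes override earlier ones
    return merged
```

With this model, a variable defined at create stream time shadows a default of the same name defined in the task XML.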

Web Interface

XML Reference

Batch Jobs

Batch jobs will always have the following environment variables set:

  Variable                  Usage
  PIPELINE_PROCESSINSTANCE  The internal database id of the process instance
  PIPELINE_STREAM           The stream number
  PIPELINE_STREAMPATH       The stream path. For a top-level task this is the
                            same as the stream number; for sub-tasks it is of
                            the form i.j.k
  PIPELINE_TASK             The task name
  PIPELINE_PROCESS          The process name
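Since these are ordinary environment variables, a batch job can read them directly. A minimal sketch in Python (the variable names come from the table above; the helper function itself is hypothetical):

```python
import os

# Collect the pipeline-provided environment variables into one place.
# PIPELINE_STREAMPATH is "i.j.k" for sub-tasks, so splitting on "."
# recovers the chain of stream numbers from the top-level task down.
def pipeline_context(environ=os.environ):
    path = environ.get("PIPELINE_STREAMPATH", "")
    return {
        "task": environ.get("PIPELINE_TASK"),
        "process": environ.get("PIPELINE_PROCESS"),
        "stream": environ.get("PIPELINE_STREAM"),
        "stream_path": [int(p) for p in path.split(".") if p],
    }
```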

Command Line Tools

To get details on using the Pipeline-II client, try:

~glast/pipeline-II/pipeline help

which currently gives:

Syntax:

   pipeline <command> <args>

where command is one of:

   createStream <task> <stream> <env>

      where <task>   is the name of the task to create (including optional
                     version number)
            <stream> is the stream number to create.
            <env>    are environment variables to pass in, of the form
                     var=value{,var=value...}

Example

~glast/pipeline-II/pipeline createStream CHS-level1 2 "downlinkID=060630001,numChunks=10,taskBase=/nfs/farm/g/glast/u23/ISOC-devel/Pipelines/CHS-level1,productVer=0,fastCopy=0"
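When streams are created from another script or a cron job, the same invocation can be assembled programmatically. A minimal sketch (the helper function is hypothetical; the argument order and the comma-separated var=value environment string follow the syntax shown above):

```python
# Build the argument list for a createStream invocation as documented
# above: pipeline createStream <task> <stream> <env>, where <env> is a
# comma-separated list of var=value pairs.
def create_stream_command(task, stream, env):
    env_str = ",".join("%s=%s" % (k, v) for k, v in env.items())
    return ["~glast/pipeline-II/pipeline", "createStream",
            task, str(stream), env_str]
```

The resulting list can be handed to a process launcher such as Python's subprocess module.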

Pipeline Objects

The "pipeline" Object

The pipeline object provides an entrypoint for communicating with the pipeline server in script processes.  Below is a summary of the functionality currently available.

pipeline API 

The "datacatalog" Object

The datacatalog object provides an entrypoint for communicating with the datacatalog service in script processes.  Below is a summary of the functionality currently available.

datacatalog API

registerDataset(String dataType, String logicalPath, String filePath) 

Registers a new Dataset entry with the Data Catalog.

dataType is a character string specifying the type of data contained in the file.  Examples include MC, DIGI, RECON, MERIT, etc.  This is an enumerated field, and must be pre-registered in the database.  A Pipeline-II developer can add additional values upon request.

logicalPath is a character string representing the location of the dataset in the virtual directory structure of the Data Catalog. 
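A call to registerDataset from a script process might look like the sketch below. The three-argument signature comes from the documentation above; the data type value, paths, and the stand-in datacatalog object are examples only (in a real script process the pipeline server supplies the datacatalog object).

```python
# Stand-in for the server-provided datacatalog object, used here only so
# the example call shape can be exercised outside the pipeline server.
class _FakeDataCatalog:
    def __init__(self):
        self.datasets = []

    def registerDataset(self, dataType, logicalPath, filePath):
        self.datasets.append((dataType, logicalPath, filePath))

datacatalog = _FakeDataCatalog()

# Example call matching the documented signature; all values hypothetical.
datacatalog.registerDataset(
    "MERIT",                               # dataType: pre-registered enum value
    "/Data/CHS-level1/merit",              # logicalPath in the Data Catalog
    "/nfs/farm/g/glast/u23/example.root",  # filePath of the file on disk
)
```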
