...

1/11/2012 Under Construction

TASK FUNCTIONS

setupRun.py
  • read & parse 'runList' file
  • identify run# and input files for this stream
  • calculate #clumps (substreams in subtask)
  • create env-vars for subtask
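The setupRun.py steps above can be sketched as follows. This is an illustrative outline only: the runList file format, the variable names, and the clump size are assumptions, not the real code.

```python
# Hypothetical sketch of setupRun.py: parse 'runList', pick out this
# run's input files, compute the clump count, and export env-vars.
import os

CLUMP_SIZE = 25  # assumed number of input files per clump


def setup_run(run_list_path, run_number):
    """Parse the runList file, find this run's inputs, export env-vars."""
    infiles = []
    with open(run_list_path) as f:
        for line in f:
            if not line.strip():
                continue
            run, path = line.split()
            if int(run) == run_number:
                infiles.append(path)
    # Number of clumps (substreams) needed to cover all input files
    nclumps = (len(infiles) + CLUMP_SIZE - 1) // CLUMP_SIZE
    # Publish results to the subtask via environment variables
    os.environ["RUNNUMBER"] = str(run_number)
    os.environ["NCLUMPS"] = str(nclumps)
    os.environ["INFILES"] = " ".join(infiles)
    return nclumps
```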
createClumps.jy
  • read env-vars
  • create task-level pipeline-vars for subTask
  • create subStreams (with pipeline-var list)
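A minimal sketch of the createClumps.jy logic, with the pipeline scriptlet's substream-creation call passed in as a stand-in; the real scriptlet API, the variable names, and the per-clump pipeline-var set are assumptions.

```python
# Sketch of createClumps.jy: read the env-vars left by setupRun and
# spawn one substream (with its pipeline-var list) per clump.
import os


def create_clumps(create_substream):
    """Create one substream per clump, handing each its pipeline-vars."""
    nclumps = int(os.environ["NCLUMPS"])
    for i in range(nclumps):
        # Per-clump pipeline variables handed to the substream
        pipeline_vars = {
            "CLUMPID": str(i),
            "RUNNUMBER": os.environ["RUNNUMBER"],
        }
        create_substream("processClump", i, pipeline_vars)
```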
[subtask] processClump.py (all parallel analysis processing)
  • unpack pipeline-vars (as env-vars)
  • define skipEvents/lastEvent for Gleam
  • limit
  • infile staging disabled
  • infile slicing (skimmer) disabled
  • infile env-vars for Gleam
  • construct output filenames
  • output file staging
  • set output file env-vars for Gleam
  • prepare .rootrc
  • select and stage-in FT2 file
    • (Gleam setup)
  • select jobOptions file and customize
  • Run Gleam
    • (SVAC setup)
  • Run svac [disabled]
  • Run makeFT1 [disabled]
  • Finalize staging
  • create new subTask-level pipeline-vars with clump output info
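Two of the processClump.py steps above (defining skipEvents/lastEvent for Gleam, and constructing versioned output filenames) can be illustrated as below; the events-per-clump figure and the filename pattern are assumptions for illustration only.

```python
# Illustrative sketch of two processClump.py steps: derive the Gleam
# event window for this clump, and build a versioned output filename.

EVENTS_PER_CLUMP = 100000  # assumed clump size in events


def clump_event_window(clump_id):
    """skipEvents/lastEvent pair bounding this clump for Gleam."""
    skip = clump_id * EVENTS_PER_CLUMP
    last = skip + EVENTS_PER_CLUMP - 1
    return skip, last


def output_name(run, clump_id, ftype, version):
    """Per-clump output filename with an explicit version field."""
    return "r%010d_c%03d_v%03d_%s.root" % (run, clump_id, version, ftype)
```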
[subtask] clumpDone.jy
  • (nothing!)
setupMerge.jy
  • unpack task-level pipeline variables
  • create new task-level pipeline vars from subTask vars
  • create two pipeline files: pipelineVarList.txt and clumpFileList.txt
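The final setupMerge.jy step, writing the two files consumed by mergeClumps.py, might look like this; the file formats shown ("KEY=value" lines and one path per line) are assumptions.

```python
# Sketch of setupMerge.jy's output step: flatten the subTask pipeline
# variables into pipelineVarList.txt and clumpFileList.txt.
import os


def write_merge_inputs(var_dict, clump_files, outdir="."):
    """Write the two pipeline files read later by mergeClumps.py."""
    with open(os.path.join(outdir, "pipelineVarList.txt"), "w") as f:
        for key, val in sorted(var_dict.items()):
            f.write("%s=%s\n" % (key, val))
    with open(os.path.join(outdir, "clumpFileList.txt"), "w") as f:
        for path in clump_files:
            f.write(path + "\n")
```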
mergeClumps.py
  • open and read the two pipeline files, store in dicts
  • stage in FT2 file
    • (merge files)
  • create tool-specific lists of files to be merged
  • loop over all file types to be merged
  • construct new output file name (with proper version)
  • if # files to be merged == 1, just use 'cp'
    • skimmer merge
  • stage-out output file
  • run skimmer
    • FT1 merge
  • stage-out output file
  • run fmerge
    • HADD merge (histograms)
  • stage-out output file
  • run hadd
    • (post-merge data product generation)
  • stage-in input MERIT file, if necessary
    • FT1
  • generate new output file name (with proper version)
  • stage-out output file
  • runMakeFT1()
    • ELECTRONFT1
  • generate new output file name (with proper version)
  • stage-out output file
  • runMakeFT1()
    • LS1
  • generate new output file name (with proper version)
  • stage-out output file
  • runMakeLS1()
    • FILTEREDMERIT
  • generate new output file name (with proper version)
  • stage-out output file
  • run skimmer
    • ELECTRONMERIT
  • generate new output file name (with proper version)
  • stage-out output file
  • run skimmer
  • Finalize staging
  • Produce list of files for dataCat registration
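The per-file-type merge dispatch in mergeClumps.py, including the single-file 'cp' shortcut noted above, can be sketched as follows; the file-type-to-tool mapping and function names are assumptions, not the real code.

```python
# Hedged sketch of the mergeClumps.py dispatch: choose one merge tool
# per file type, with a plain copy when there is only one input file.
import shutil
import subprocess

MERGE_TOOL = {  # assumed file-type -> merge command mapping
    "FT1": "fmerge",
    "MERIT": "skimmer",
    "HIST": "hadd",
}


def merge_files(ftype, inputs, outfile):
    """Merge the clump outputs of one type into a run-level file."""
    if len(inputs) == 1:
        # Degenerate case from the list above: just use 'cp'
        shutil.copy(inputs[0], outfile)
        return
    tool = MERGE_TOOL[ftype]
    subprocess.check_call([tool, outfile] + list(inputs))
```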
runDone.jy
  • Unpack task-level pipeline vars
  • Register (merged) output file in dataCat
  • Make entry in HISTORYRUNS DB table
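The runDone.jy bookkeeping can be outlined as below, with the dataCat client and the HISTORYRUNS insert passed in as stand-ins; both interfaces and the env-var names are assumptions, not the real APIs.

```python
# Sketch of runDone.jy: unpack task-level pipeline vars, register the
# merged outputs in dataCat, and record the run in HISTORYRUNS.
import os


def run_done(register_file, insert_history):
    """Final bookkeeping once all merges for the run have finished."""
    run = int(os.environ["RUNNUMBER"])
    merged = os.environ.get("MERGEDFILES", "").split()
    for path in merged:
        register_file(path, run)      # dataCat registration
    insert_history(run, len(merged))  # HISTORYRUNS DB entry
```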

CODE

Directories

/nfs/farm/g/glast/u38/Reprocess-tasks

...

Task preparation

taskConfig.xml
  • task definition
genRunFile.csh*
  • generate list of input files for reprocessing

Pipeline code

envSetup.sh*
  • set up environment to run GR/ST/FT/etc. (called by pipeline)
config.py
  • task configuration (imported by all .py scripts)
setupRun.py*
  • setup for reprocessing a single run
createClumps.jy
  • create substreams for processing a "clump" (part of a run)
processClump.py*
  • process a clump of data
clumpDone.jy
  • cleanup after clump processing
setupMerge.jy
  • setup for merging clumps
mergeClumps.py*
  • merge all clumps for a single run
runFT1skim.sh*
  • skim FT1 events
runDone.jy
  • final bookkeeping after a run is reprocessed (dataCat and runHistory)
commonTools@
  • link to commonTools

Input data to pipeline code

doRecon.txt
  • Gleam job options
fullList.txt
  • list of reprocessing input data files
removeMeritColumns.txt
  • list of columns to remove from MERIT files
runFile.txt@
  • symlink to fullList.txt

Pipeline control code

trickleStream.py*
  • task-specific config for trickle.py

...