...
- (This page): Defines the ingredients of the reprocessing (processing code/configuration changes)
- Processing History database: http://glast-ground.slac.stanford.edu/HistoryProcessing/HProcessingRuns.jsp?processingname=P115-LEO
- List of all reprocessings
- List of all data runs reprocessed
- Pointers to all input data files (-> dataCatalog)
- Pointers to associated task processes (-> Pipeline II status)
- Data Catalog database: http://glast-ground.slac.stanford.edu/DataCatalog/folder.jsp
- Lists of and pointers to all output data files
- Metadata associated with each output data product
...
Task Location | /nfs/farm/g/glast/u38/Reprocess-tasks/P115-LEO-MERIT |
Task Status | http://glast-ground.slac.stanford.edu/Pipeline-II/index.jsp |
GlastRelease | v17r35p1gr02 |
Skimmer | 07-07-00 |
Input Data Selection | Set of 200 runs selected by Anders Borgland |
Input Run List | ftp://ftp-glast.slac.stanford.edu/glast.u38/Reprocess-tasks/P115-LEO-MERIT/config/runFile.txt |
photonFilter | evtClassDefs v0r6p1 CTBParticleType==0 && CTBClassLevel>0 |
electronFilter | CTBParticleType==1 |
jobOpts | ftp://ftp-glast.slac.stanford.edu/glast.u38/Reprocess-tasks/P115-LEO-MERIT/config/reClassify.txt |
Output Data Products |
...
Task Location | /nfs/farm/g/glast/u38/Reprocess-tasks/P115-LEO-FT1 |
Task Status | http://glast-ground.slac.stanford.edu/Pipeline-II/index.jsp |
Input Data Selection | MERIT (from P106-LEO-MERIT), FT2 (from P110-FT2) |
Input Run List | ftp://ftp-glast.slac.stanford.edu/glast.u38/Reprocess-tasks/P115-LEO-FT1/config/runFile.txt |
evtClassDefs | 00-16-00 |
meritFilter | pass7_FSW_cuts |
eventClassifier | Pass7_Classifier.py |
ScienceTools | 09-17-00 (SCons build) |
Code Variant | redhat4-i686-32bit-gcc34 or redhat5-i686-32bit-gcc41 |
Diffuse Model | /afs/slac.stanford.edu/g/glast/ground/releases/analysisFiles/diffuse/v2/source_model_v02.xml [Ref|https://confluence.slac.stanford.edu/display/SCIGRPS/Diffuse+Model+for+Analysis+of+LAT+Data] |
Diffuse Response IRFs | P7_v2_diff, P7_v2_extrad, P7_v2_datac |
IRFs | implemented as 'custom IRFs', files in /afs/slac.stanford.edu/g/glast/ground/PipelineConfig/IRFS/Pass7.2 |
Output Data Products |
...
Note on diffuse response calculation: gtdiffrsp is called three times in succession: first with IRF P7_v2_diff and evclsmin==8, then with IRF P7_v2_extrad and evclsmin==9, and finally with IRF P7_v2_datac and evclsmin==10. The resulting FT1 file has six diffuse-response columns, two (galactic and extragalactic response) for each of the three IRFs. This makes the FT1 file non-standard by FSSC conventions, which expect only five diffuse-response columns.
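The three successive gtdiffrsp passes described above can be sketched as follows. The IRF names, evclsmin values, and diffuse-model path come from this page; the input file names (ft1.fits, ft2.fits) and the exact parameter spelling on the command line (evfile, scfile, srcmdl, irfs, evclsmin) are assumptions about the ScienceTools invocation used in the task, not a copy of the pipeline script.

```shell
# Sketch only: file names and parameter spellings are assumptions.
# Diffuse model path is taken from the Diffuse Model table entry above.
SRCMDL=/afs/slac.stanford.edu/g/glast/ground/releases/analysisFiles/diffuse/v2/source_model_v02.xml

gtdiffrsp evfile=ft1.fits scfile=ft2.fits srcmdl=$SRCMDL irfs=P7_v2_diff   evclsmin=8
gtdiffrsp evfile=ft1.fits scfile=ft2.fits srcmdl=$SRCMDL irfs=P7_v2_extrad evclsmin=9
gtdiffrsp evfile=ft1.fits scfile=ft2.fits srcmdl=$SRCMDL irfs=P7_v2_datac  evclsmin=10
```

Each pass appends its pair of diffuse-response columns to the same FT1 file, which is how the six columns noted above accumulate.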
Timing and Scaling
...
Timing is dominated by the gtdiffrsp steps in the mergeClumps job step. Processing times vary widely because of different classes of batch machines, different numbers of events to process, batch jobs being temporarily placed in the system-suspend (SSUSP) state by pre-emptive queues, and possibly processing dependencies in the data. Elapsed processing time for a single run ranges from 4 to more than 12 hours (assuming no shortage of batch machines). The average CPU time per clump is 360 +/- 160 min.
Thirteen of the 200 jobs required the xxl batch queue and took more than 30 hours to complete.
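As a quick sanity check, the figures above combine as follows; all numbers are taken directly from this section, and the script only restates them.

```python
# Restate the timing figures quoted above (no new data).
n_runs = 200
n_xxl = 13                     # jobs that needed the xxl batch queue
cpu_mean_min = 360             # average CPU time per clump, minutes
cpu_spread_min = 160           # quoted +/- spread, minutes

xxl_fraction = n_xxl / n_runs  # fraction of runs needing xxl
cpu_band_hr = ((cpu_mean_min - cpu_spread_min) / 60,
               (cpu_mean_min + cpu_spread_min) / 60)

print(f"xxl jobs: {xxl_fraction:.1%}")                       # 6.5%
print(f"CPU/clump: {cpu_band_hr[0]:.1f}-{cpu_band_hr[1]:.1f} h")  # 3.3-8.7 h
```

So about 6.5% of runs needed the largest queue, and the one-sigma CPU band per clump (roughly 3.3 to 8.7 hours) is consistent with the 4 to 12+ hour elapsed-time range once clumps per run and queue waits are accounted for.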