P106 Reprocessing

status: Complete
last update: 30 June 2010

This page is a record of the configuration for the P106 reprocessing project. It targets 200 runs of L&EO data for processing with a new alignment calibration and other improvements to investigate issues with the Pass7 classification.

...

  • P106-LEO-MERIT - this task reads DIGI, runs the full reconstruction code in Gleam and produces reprocessed RECON + MERIT + CAL + GCR
  • [possible future] P106-LEO-FITS - this task will read MERIT and produce FT1 (photons)

Datafile names, versions and locations

...

  • 29 June 2010 - task continues with little impact on the xroot/NFS servers (at least as far as Ganglia is concerned). But wait: the processClump step ran without undue stress on xroot, but mergeClumps was not so lucky. The two newest wains (60, 61), both 12-CPU machines, became overloaded and lost contact with the world, as shown in the Ganglia plot.
    (Ganglia plot of the overloaded wains)
    By 21:00 the task was complete.
  • 28 June 2010 14:15 - begin full task...slowly
  • 27 June 2010 - update task with new calibration flavor, add back in 200th run, and fire off test stream (4)
  • 13 May 2010 - four test streams submitted...formally successful. Anders gave a cursory glance at the first few events of the first stream (recon/merit only) and at the log file, and says all looks okay. However, the alignment problem thought to affect the L&EO data has turned out to be a non-issue, so this task is ON HOLD.
  • 11 May 2010 - Prepare task

...

ROOT version

v5.20.00-gl5

Skimmer version

v7r3p3-gl2

...

Time

processClump (~23k evts) - ~45 CPU min on hequ, ~65 CPU min on fell

mergeClumps (full run) - ~62 CPU min on hequ, mostly spent in gtdiffrsp
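These per-clump timings can be turned into a back-of-envelope CPU budget for the whole task. This is a rough estimate only: it uses the hequ timings and the MERIT event count quoted on this page, and assumes events are spread evenly over the 200 runs with ~23k events per clump.

```python
# Back-of-envelope CPU budget from numbers quoted on this page.
# Assumptions: hequ timings, even event spread, ~23k events/clump.
TOTAL_EVENTS = 488_288_751   # MERIT events across all 200 runs
RUNS = 200
CLUMP_SIZE = 23_000          # ~23k events per processClump job
PROCESS_CPU_MIN = 45         # processClump CPU minutes on hequ
MERGE_CPU_MIN = 62           # mergeClumps CPU minutes per run on hequ

clumps_per_run = TOTAL_EVENTS / RUNS / CLUMP_SIZE
cpu_min_per_run = clumps_per_run * PROCESS_CPU_MIN + MERGE_CPU_MIN
total_cpu_hours = RUNS * cpu_min_per_run / 60

print(f"clumps per run    ~ {clumps_per_run:.0f}")
print(f"CPU hours per run ~ {cpu_min_per_run / 60:.1f}")
print(f"total CPU hours   ~ {total_cpu_hours:.0f}")
```

Roughly 16k CPU-hours in total, i.e. on the order of 500 CPUs kept busy over a 30-hour elapsed window.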

P106-LEO-FT1

Status chronology

Configuration

...

All 200 runs were fully reprocessed within 30 hours of elapsed time. A longer task will not necessarily scale from these numbers, since this elapsed time covers every phase of the task: ramp-up, steady state, xroot troubles, ramp-down, and clean-up. In addition, for the first part of this reprocessing the heavy-lifting jobs ran in the long batch queue, which at the time had a per-user limit of 1000 jobs (subsequently changed to no limit). The latter part of the task ran in the xlong queue, which had no per-user limit but a global limit of 3000 jobs.
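The queue limits translate into a rough cap on how many runs' worth of processClump jobs can be in flight at once. A minimal sketch, assuming ~23k events per clump and the MERIT event count quoted on this page:

```python
# How many runs of clump jobs fit under each batch-queue job limit at once.
# Assumptions: ~23k events/clump, MERIT event count from this page.
TOTAL_EVENTS = 488_288_751
RUNS = 200
CLUMP_SIZE = 23_000

clumps_per_run = TOTAL_EVENTS / RUNS / CLUMP_SIZE  # ~106 clump jobs per run
for queue, limit in [("long (1000-job per-user limit)", 1000),
                     ("xlong (3000-job global limit)", 3000)]:
    print(f"{queue}: ~{limit / clumps_per_run:.0f} runs of clump jobs in flight")
```

So even the larger xlong limit allows only a few dozen runs to be clump-processing concurrently, which is part of why elapsed time does not simply scale with run count.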

Space

(excerpted from the dataCatalog)

Name  | Files | Events      | Size     | Created (UTC)
CAL   | 200   | 467,566,271 | 1.5 TB   | 13-May-2010 18:29:41
FT1   | 200   | 23,467,183  | 2.0 GB   | 13-May-2010 18:29:40
GCR   | 200   | 488,288,751 | 10.1 GB  | 13-May-2010 18:29:42
MERIT | 200   | 488,288,751 | 368.6 GB | 13-May-2010 18:29:42
RECON | 200   | 488,268,806 | 6.4 TB   | 13-May-2010 18:29:41

Total xroot disk space (exclusive of /glast/Scratch) occupied by this task = 8.3 TB
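The quoted total can be cross-checked against the per-product sizes in the dataCatalog table. The sketch below uses 1024-based units; a 1000-based sum rounds to the same figure.

```python
# Sanity check: sum the per-product sizes from the dataCatalog table.
sizes_gb = {
    "CAL":   1.5 * 1024,   # 1.5 TB
    "FT1":   2.0,
    "GCR":   10.1,
    "MERIT": 368.6,
    "RECON": 6.4 * 1024,   # 6.4 TB
}
total_tb = sum(sizes_gb.values()) / 1024
print(f"total xroot space ~ {total_tb:.1f} TB")   # matches the quoted 8.3 TB
```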