...
The reprocessing is usually split into four steps:
1. Collect the files that need to be reprocessed.
2. Prepare the files in "bunches", so that the reprocessing task (only one is running) will submit a bunch of jobs before entering a sleep period. This is done to avoid overloading the pipeline.
3. Submit the reprocessing task. This has to be a process that runs in the background. Usually I open a terminal in a FASTX session, so I can go back to it at any time without having to keep the connection active.
4. Apply BTI.
Step 1: Collect the files that need to be reprocessed

I have a file called preparelist.py that helps query the data catalog and save the list of runs in an appropriate text file, which is used in the next step. The file (/nfs/farm/g/glast/u38/Reprocess-tasks/P310-FT2/preparelist.py) looks like this:

```python
#!/usr/bin/env python
import sys, os

def run(cmd, test=False):
    '''
    Simple interface to execute a system call
    '''
    print cmd
    if not test: os.system(cmd)

def extractRunNumber(out_file_list, out_run_list):
    '''
    Extract the run number from the name of each file
    '''
    runs = []
    out_run_list_file = file(out_run_list, 'w')
    for l in file(out_file_list, 'r').readlines():
        run = l.split('_v')[0].split('_r0')[-1]
        runs.append(run)
        out_run_list_file.write(run + '\n')
    return runs

#############################################################
# REPROCESSED 310:
# MINIMUM RUN NUMBER TO BE REPROCESSED
RunMin = '239557414'
# MAXIMUM RUN NUMBER TO BE REPROCESSED
RunMax = '604845703'
RunMax = '625881605'  # 2020-11-01 00:00:00
#############################################################
out_file_list = 'FileList_%(RunMin)s_%(RunMax)s.txt' % locals()
out_run_list = 'RunsList_%(RunMin)s_%(RunMax)s.txt' % locals()
p310_file_list = 'P310_FileList_%(RunMin)s_%(RunMax)s.txt' % locals()
p310_run_list = 'P310_RunsList_%(RunMin)s_%(RunMax)s.txt' % locals()
p310_remaining_run_list = 'P310_Remaining_RunsList_%(RunMin)s_%(RunMax)s.txt' % locals()
#############################################################
# This is the list of files in the data catalog:
cmd = "/afs/slac.stanford.edu/u/gl/glast/datacat/prod/datacat find --mode PROD --site SLAC_XROOT --group FT2 --filter 'RunMin >=%(RunMin)s && RunMin<=%(RunMax)s' --sort nRun --show-non-ok-locations /Data/Flight/Level1/LPA > %(out_file_list)s" % locals()
# --display 'RunMin' > $out_list
run(cmd, test=False)
to_process = extractRunNumber(out_file_list, out_run_list)
print 'split -l25 ../%(out_run_list)s -a 3' % locals()
#############################################################
# This is the list of files that are already reprocessed:
cmd = "/afs/slac.stanford.edu/u/gl/glast/datacat/prod/datacat find --mode PROD --site SLAC_XROOT --group FT2 --filter 'RunMin >=%(RunMin)s && RunMin<=%(RunMax)s' --sort nRun --show-non-ok-locations /Data/Flight/Reprocess/P310 > %(p310_file_list)s" % locals()
run(cmd, test=False)
processed = extractRunNumber(p310_file_list, p310_run_list)
#wc $out_list
remaining = []
out_run_list_file = file(p310_remaining_run_list, 'w')
for x in to_process:
    if x not in processed:
        if int(x) > 240729801:  # We skip the first two runs
            remaining.append(x)
            out_run_list_file.write(x + '\n')
out_run_list_file.close()
print 'To process: %d, processed: %d, remaining: %d' % (len(to_process), len(processed), len(remaining))
print 'split -l25 ../%(p310_remaining_run_list)s -a 3' % locals()
```

It basically makes two calls to the data catalog: the first retrieves the list of runs to reprocess, the second retrieves the list of runs already reprocessed. Text files are written to keep track of these lists, and their names contain the minimum and maximum run numbers.
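The non-obvious part of preparelist.py is the run-number parsing in extractRunNumber: the run number is the digits sitting between the "_r0" prefix and the "_v" version tag in the file name. A minimal sketch of that parsing (the file name below is hypothetical, made up for illustration only):

```python
def extract_run_number(name):
    # Same two-step split used in preparelist.py: everything before the
    # "_v" version tag, then everything after the "_r0" prefix.
    return name.split('_v')[0].split('_r0')[-1]

# Hypothetical FT2-like file name, for illustration only:
print(extract_run_number("gll_ft2_r0239557414_v001.fit"))  # -> 239557414
```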
Step 2: Prepare the files in "bunches", so that the reprocessing task (only one is running) will submit a bunch of jobs before entering a sleep period. This is done to avoid overloading the pipeline.

As the last print statement suggests, I split the run list into files containing 25 runs each. First I create two directories, and cd into the todo one. For example:

```
mkdir todo-2020-11/
mkdir done-2020-11/
cd todo-2020-11/
```

Then the command I use is simply, for example:

```
split -l25 ../P310_Remaining_RunsList_239557414_625881605 -a 3
```

This will create a series of files containing 25 runs each.
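The actual workflow uses the coreutils split command, but the effect of the bunching can be sketched in Python (the run numbers below are dummies; the bunch size of 25 matches the split -l25 command):

```python
def bunch(runs, size=25):
    # Cut a run list into bunches of at most `size` runs, mirroring
    # what "split -l25" does on the run-list file.
    return [runs[i:i + size] for i in range(0, len(runs), size)]

runs = [str(239557414 + i) for i in range(60)]  # 60 dummy run numbers
bunches = bunch(runs)
print([len(b) for b in bunches])  # -> [25, 25, 10]
```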
Step 3: Submit the reprocessing task. This has to be a process that runs in the background. Usually I open a terminal in a FASTX session, so I can go back to it at any time without having to keep the connection active.

There is a simple file (submitter-prod-2020-11) containing the sequence of bash commands I submit:

```bash
#!/bin/bash
delay=300
while true ; do
    rf=$(ls todo-2020-11/* | head -1)
    echo $rf
    for run in $(<$rf) ; do
        /afs/slac.stanford.edu/u/gl/glast/pipeline-II/prod/pipeline -m PROD createStream --stream $run --define RUNID=r0$run P310-FT2
    done
    mv $rf done-2020-11/.
    date
    sleep $delay
done
```

Note that this has to be modified every time I create a backfill (todo-2020-11/ and done-2020-11/). What this does is read one file in the todo-2020-11 directory and submit N streams of the P310-FT2 task, each with the input run (RUNID) as an argument; in our case, N=25. It then moves the input file to the done-2020-11 directory and sleeps for 5 minutes.
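The same throttled-submission loop can be sketched in Python. The pipeline path and task name come from the bash script above; the directory names and delay are the per-backfill parameters that change each time. This is a sketch under those assumptions, not the script actually used:

```python
import os
import shutil
import time

PIPELINE = "/afs/slac.stanford.edu/u/gl/glast/pipeline-II/prod/pipeline"

def stream_command(run, task="P310-FT2"):
    # One createStream call per run, as in the bash for-loop above.
    return ("%s -m PROD createStream --stream %s --define RUNID=r0%s %s"
            % (PIPELINE, run, run, task))

def submit_all(todo="todo-2020-11", done="done-2020-11", delay=300):
    # Take one bunch file at a time, submit its runs, move the file to
    # the "done" directory, then sleep so the pipeline is not flooded.
    while True:
        bunches = sorted(os.listdir(todo))
        if not bunches:
            break  # nothing left to submit
        rf = os.path.join(todo, bunches[0])
        for run in open(rf).read().split():
            os.system(stream_command(run))
        shutil.move(rf, os.path.join(done, bunches[0]))
        time.sleep(delay)
```

Unlike the bash version, this sketch exits when the todo directory is empty instead of looping forever, which saves killing the process by hand once the backfill is done.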
...