Compute the day of year for the run date, e.g. in an interactive Python session:

    >>> from datetime import *
    >>> datetime(<yyyy>, <mm>, <dd>).toordinal() - datetime(<yyyy>, 1, 1).toordinal() + 1
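For example, continuing in the same session, a (hypothetical) run date of 2006-09-15 gives:

    >>> datetime(2006, 9, 15).toordinal() - datetime(2006, 1, 1).toordinal() + 1
    258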
Copy the raw packet files from

    /gnfs/data/DVD/RawPackets/2006<DDD>/

to a local directory, say ~/sci (where DDD is the day of year from above). The date and time found in the eLog runs section are the same as those used in the .../RawArchive/* directories. Only the *_SCI.pkt files that span the run of interest are needed. Then create a directory for the LSF output:

    mkdir ~/sci/lsf
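For illustration only, a minimal Python sketch of the copy step (fetch_sci_packets is a hypothetical helper, and it copies every *_SCI.pkt file for the day rather than only those spanning the run of interest):

    import glob
    import os
    import shutil

    def fetch_sci_packets(ddd, dest=os.path.expanduser("~/sci")):
        """Copy one day's *_SCI.pkt files locally and create the lsf output dir."""
        src = "/gnfs/data/DVD/RawPackets/2006%03d" % ddd
        os.makedirs(os.path.join(dest, "lsf"), exist_ok=True)
        for pkt in sorted(glob.glob(os.path.join(src, "*_SCI.pkt"))):
            shutil.copy(pkt, dest)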
As the glast account on a lat-licos* machine or equivalent (i.e., a machine that has LICOS_Scripts installed), run:

    python $ONLINE_ROOT/LICOS_Scripts/analysis/LsfWriter.py --lsd_dir=~/sci --lsf_dir=~/sci/lsf
The output files are named

    <ApId>-<GroundId>-<Time>-<nFirst>-<nEnd>.lsf

where GroundId is the run number, without the 077000000 prefix, in hex. Verify that nFirst is 0000.
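For example, a quick way to check the expected GroundId (the run number 077001234 here is made up, and the zero-padding in the actual file name may differ):

    >>> run = 77001234          # hypothetical run ID 077001234
    >>> hex(run - 77000000)     # drop the 077000000 prefix, convert to hex
    '0x4d2'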
Reset the run's analysis state in the elogbook database. Connect with

    mysql --user elogbook elogbook --password

and at the mysql> prompt issue (replacing RRRRRR with the run number):

    update LICOS_activities set JobCompleteList = '[None]', AnalysisState_FK = 7 where RunID_FK = '077RRRRRR';
Copy the lsf file(s) to "/nfs/data/lsf/.". If the corresponding DAQ run had multiple steps, there will be multiple lsf files. AnalysisEngine will see them and should be able to reprocess the run.
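A minimal sketch of this last copy, assuming the lsf files were written to ~/sci/lsf as above:

    import glob
    import os
    import shutil

    # A multi-step DAQ run yields several .lsf files; copy them all.
    for lsf in glob.glob(os.path.expanduser("~/sci/lsf/*.lsf")):
        shutil.copy(lsf, "/nfs/data/lsf/")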