...

For a “good” set of IRMISDB entries, scroll down and see those from 9/22, 9 pm, continuing into 9/23, which completed successfully. The steps, in ascending order, are:

FACET PV Crawler        ...   start
FACET PV Crawler        ...   finish
ALL_DATA_UPDATE         ...   start
LCLS PV Crawler         ...   start
LCLS PV Crawler         ...   finish
CD PV Crawler           ...   start
CD PV Crawler           ...   finish
PV Client Crawlers      ...   start
PV Client Crawlers      ...   finish
PV Client Cleanup       ...   start
PV Client Cleanup       ...   finish
DATA_VALIDATION         ...   start
DATA_VALIDATION         ...   finish
FACET DATA_VALIDATION   ...   start
FACET DATA_VALIDATION   ...   finish
ALL_DATA_UPDATE         ...   finish

Then there are multiple steps for the sync to MCCO, labelled REFRESH_MCCO_IRMIS_TABLES.
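To spot-check a run like this without paging through the raw entries, the job status can be queried directly. This is only a sketch: the status table and column names (irmis_process_log, process_name, status, entry_time) and the credential variables are invented here, so check them against the real IRMISDB schema first:

    #!/bin/csh
    # Hypothetical query of the last two days of IRMISDB job entries.
    # Table, column, and credential names are assumptions -- verify
    # against the actual schema before running.
    sqlplus -s $IRMIS_USER/$IRMIS_PASS@$TWO_TASK << EOF
    SELECT entry_time, process_name, status
      FROM irmis_process_log
     WHERE entry_time > SYSDATE - 2
     ORDER BY entry_time;
    EOF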

...

  1. Synchronization to MCCO that bypasses error checking
    If you need to run the synchronization to MCCO even though IRMISDataValidation.pl failed (e.g., the LCLS crawler ran fine but others failed), you can run a special version that bypasses the error checking and runs the sync no matter what (see the invocation sketch after this list). It’s:
    /afs/slac/u/cd/jrock/jrock2/DBTEST/tools/irmis/cd_script/runSync-no-check.csh

  2. Comment out code in IRMISDataValidation.pl
    If the data validation needs to bypass a step, you can edit IRMISDataValidation.pl (see the tables above for its location) to remove or change a data validation step and let the crawler jobs complete. For example, if a problem with the PV client crawlers keeps the sync to MCCO from running, you may want to simply remove the PV Client crawler check from the data validation step.

  3. Really worst case! Edit the MCCO tables manually
    If the PV crawlers will not complete with a sync of good data to MCCO, and you decide to wait until November for me to fix it (this is fine – the PV crawler parser is a complicated piece of code that needs tender loving care and testing!), AND accelerator operations are affected by missing PVs in one of these tables, the tables can be updated manually with the names that are needed to operate the machine (a hedged SQL sketch follows this list):
    • aida_names (see Bob and Greg)
    • bsa_root_names (see Elie)
    • devices_and_attributes (see Elie)
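
For step 1, invoking the bypass script is just a matter of running it; the sketch below also keeps a timestamped log of the attempt. The log path, and the idea of logging at all, are my additions, not part of the script:

    #!/bin/csh
    # Run the no-check sync and keep a timestamped log of the attempt.
    # The log path is an assumption -- put it wherever you keep sync logs.
    set log = /tmp/runSync-no-check.`date +%Y%m%d%H%M`.log
    /afs/slac/u/cd/jrock/jrock2/DBTEST/tools/irmis/cd_script/runSync-no-check.csh >& $log
    echo "sync exit status $status, log in $log"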
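For step 3, a manual update would look roughly like the following. Only the table names come from the list above; the column names (device_name, attribute_name), the example values, and the credential variables are invented for illustration, so DESCRIBE the real table first and coordinate with the people named above before touching production:

    #!/bin/csh
    # HYPOTHETICAL manual insert of a missing PV name into one MCCO table.
    # Column names and values below are placeholders, not the real schema.
    sqlplus -s $MCCO_USER/$MCCO_PASS@MCCO << EOF
    INSERT INTO devices_and_attributes (device_name, attribute_name)
      VALUES ('SOME:DEVICE:NAME', 'SOMEATTR');
    COMMIT;
    EOF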

...

  • start off by testing in the SLACDEV instance!! To do this, you must check the PV crawler directory out of CVS into a test directory and modify db.properties to point to the SLACDEV instance, plus check the crawl scripts out of CVS and setenv TWO_TASK to SLACDEV for the set*IOCsActive scripts (a consolidated sketch follows this list).
  • all directories have to be visible from each IOC boot directory.
  • the host running the crawler must be able to "see" Oracle (software and database) and the boot directory structure.
  • set up the environment variables in run*PVCrawler.csh and pvCrawlerSetup.csh.
  • if necessary, create a crawl group in the IOC table, and run set*IOCsActive.csh to activate it.
  • add a call to pv_crawler.csh so it runs with the new environment variables...
  • hope it works!
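
A consolidated sketch of the SLACDEV test setup above. The CVS module names, the test directory, and the specific set*IOCsActive script name are placeholders; only db.properties, TWO_TASK, and the script name patterns come from the steps themselves:

    #!/bin/csh
    # Sketch of a SLACDEV test run; placeholder names are marked below.
    # Assumes CVSROOT is already set in your environment.
    mkdir -p ~/pvcrawler-test && cd ~/pvcrawler-test
    cvs checkout pvcrawler           # placeholder CVS module name
    cvs checkout crawl_scripts       # placeholder CVS module name
    # Edit db.properties by hand so it points at the SLACDEV instance.
    setenv TWO_TASK SLACDEV          # so the set*IOCsActive scripts hit SLACDEV
    ./setTestIOCsActive.csh          # placeholder set*IOCsActive script
    ./pv_crawler.csh                 # run the crawler with the new env vars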

...

SCHEMA DIAGRAM

[schema diagram image]