...

  • all IRMIS tables in MCCQA. All IRMIS UIs query data on MCCQA: PV and IOC data, and PV client data.
  • 3 production tables on MCCO, which are used by other systems (AIDA, BSA applications)

Oracle schemas and accounts

The IRMIS database schema is installed in 4 SLAC Oracle instances:

  • MCCQA IRMISDB schema: production for all GUIs and applications. Contains data populated nightly by Perl crawler scripts from production IOC configuration files. Data validation is run after each load.
  • MCCO IRMISDB schema: contains production data, currently for 3 tables only: bsa_root_names, devices_and_attributes, curr_pvs. MCCQA data is copied to MCCO once it has been validated. So MCCO is as close as possible to pristine data at all times.
  • SLACDEV contains the 3 schemas used for developing, testing, and staging new features before release to production on MCCQA (see accounts below):
    1. IRMISDB – sandbox for all kinds of development, data sifting, etc. Not refreshed from prod, or not very often.
    2. IRMISDB_TEST – testing for application implementation. Refreshed from prod at the start of a development project.
    3. IRMISDB_STAGE – staging for testing completed applications against recently-refreshed production data.
  • SLACPROD IRMISDB schema: this is obsolete, but will be kept around for a while (6 months?) as a fallback starting point in case the DB migration goes awry somehow.

Other Oracle accounts

  • IRMIS_RO – read-only account (not used much yet, but available)
  • IOC_MGMT – created for an earlier IOC info project with a member of the EPICS group; that project is not active at the moment, and new IOC info work is being done using IRMISDB

...

As of September 15, all crawler-related shell scripts and Perl scripts use the getPwd script (Greg White) to get the latest Oracle password. Oracle passwords must be changed every 6 months; new passwords will be given to Ken Brobeck to update the secure master password files at each password change.

Note

The IRMIS GUI and the JSP application still use hardcoded passwords. These must be changed manually at every password change cycle.

Database structure

See the schema diagram below. (The diagram excludes the EPICS camdmp structure, which is documented separately here: <url will be supplied>.)

Crawler scripts

The PV crawler is run once for each IOC boot directory structure. The LCLS PV crawler (runLCLSPVCrawlerLx.bash) runs the crawler only once. It is kept separate so it can run on a different schedule, on a different host that can see the LCLS IOC directories. Also, the crawler code has been modified to be LCLS-specific; it is a different version from the SLAC PV crawler.

The SLAC PV crawler (runSLACPVCrawler.csh) runs the crawler several times to accommodate the various CD IOC directory structures.

cron jobs

  • LCLS side: laci on lcls-daemon2: runLCLSPVCrawlerLx.bash: crawls LCLS PVs, creates LCLS-specific tables (bsa_root_names, devices_and_attributes), and copies LCLS client config files to a directory where the CD client crawlers can see them.
  • LCLS side: laci on lcls-daemon2: caget4curr_ioc_device.bash: does cagets to populate curr_ioc_devices for the IOC Info APEX app. Run separately from the crawlers because cagets can hang unexpectedly; they are best done in an isolated script!
  • CD side: cddev on slcs2: runAllCDCrawlers.csh: runs the CD PV crawler and all client crawlers, data validation, and the sync to MCCO.
  • FACET side: flaci on facet-daemon2: runFACETPVCrawlerLx.bash: crawls FACET PVs and copies FACET client config files to a directory where the CD client crawlers can see them.
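As an illustration only, the four cron jobs above might be laid out like this in the respective accounts' crontabs (a sketch assembled from the schedule in "Crawler cron schedule and error reporting" below, not a copy of the live crontabs):

```
# flaci@facet-daemon host:
0 20 * * * /usr/local/facet/tools/irmis/script/runFACETPVCrawlerLx.bash
# laci@lcls-daemon2:
0 21 * * * /usr/local/lcls/tools/irmis/script/runLCLSPVCrawlerLx.bash
0 4  * * * /usr/local/lcls/tools/irmis/script/caget4curr_ioc_device.bash
# cddev@slcs2:
30 1 * * * /afs/slac/g/cd/soft/tools/irmis/cd_script/runAllCDCrawlers.csh
```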

PV crawler operation summary

For the location of the crawler scripts, see Source code directories below.
Basic steps as called by cron scripts are:

...

* The PV client crawlers load all client directories in their config files; currently includes both CD and LCLS.

LOGFILES, Oracle audit table

Log filenames are created by appending a timestamp to the root name shown in the tables below.
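A minimal sketch of that naming convention in shell (the exact timestamp format varies by script; the %Y%m%d%H%M%S format here is an assumption for illustration):

```shell
#!/bin/bash
# Build a log filename by appending a timestamp to a root name.
# The timestamp format is assumed, not necessarily what each script uses.
root="pv_crawlerLOG_LCLS"
logfile="${root}.$(date +%Y%m%d%H%M%S)"
echo "$logfile"
```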

...

Wiki Markup
{table:border=1}
{tr}
{th}script name{th}
{th}descr{th}
{tr}
{td}{color:#0000ff}runAllCDCrawlers.csh{color}\\
{color:#0000ff}{_}in /afs/slac/g/cd/soft/tools/irmis/cd_script{_}{color}{td}
{td}runs
* SLAC PV crawler
* all client crawlers
* rec client cleanup
* data validation for all current crawl data (LCLS and CD)
* sync to MCCO
\\
Logfile: /nfs/slac/g/cd/log/irmis/pv/CDCrawlerAll.log
{td}
{tr}
{td}{color:#0000ff}runSLACPVCrawler.csh{color}
\\
{color:#0000ff}{_}in /afs/slac/g/cd/soft/tools/irmis/cd_script{_}{color}{td}
{td}run by runAllCDCrawlers.csh: crawls NLCTA IOCs (previously handled PEPII IOCs)
\\
The PV crawler is run 4 times within this script to accommodate the various boot directory structures:
|| system || IOCs crawled || log files ||
| NLCTA | IOCs booted from $CD_IOC on gtw00. \\
Boot dirs mirrored to /nfs/mccfs0/u1/pepii/mirror/cd/ioc for crawling | /nfs/slac/g/cd/log/irmis/pv/pv_crawlerLOG_ALL.\* \\
/tmp/pvCrawler.log |
| TR01 | TR01 only (old boot structure) | /nfs/slac/g/cd/log/irmis/pv/pv_crawlerLOG_TR01.\* \\
/tmp/pvCrawler.log |
{td}
{tr}
{tr}
{td}{color:#008000}runLCLSPVCrawlerLx.bash{color}
\\
{color:#008000}{_}in /usr/local/lcls/tools/irmis/script/_{color}{td}
{td}crawls LCLS IOCs
|| system || IOCs crawled || log files ||
| LCLS | LCLS PRODUCTION IOCs. all IOC dirs in \\
$IOC /usr/local/lcls/epics/iocCommon | /u1/lcls/tools/crawler/pv/pv_crawlerLOG_LCLS.timestamp \\
/tmp/LCLSPVCrawl.log |
{td}
{tr}
{tr}
{td}{color:#800080}runFACETPVCrawlerLx.bash{color}
\\
{color:#800080}{_}in /usr/local/facet/tools/irmis/script/_{color}{td}
{td}crawls FACET IOCs
|| system || IOCs crawled || log files ||
| FACET | FACET PRODUCTION IOCs. all IOC dirs in facet $IOC /usr/local/facet/epics/iocCommon | /u1/facettools/crawler/pv/pv_crawlerLOG_LCLS.timestamp \\
/tmp/FACETPVCrawl.log |
{td}
{tr}
{tr}
{td}{color:#0000ff}runClientCrawlers.csh{color}
\\
{color:#0000ff}{_}in /afs/slac/g/cd/soft/tools/irmis/cd_utils{_}{color}
\\
\\
{color:#0000ff}individual client crawler scripts are here{color}
\\
{color:#0000ff}{_}in /afs/slac/g/cd/soft/tools/irmis/cd_script{_}{color}{td}
{td}
Runs the PV client crawlers in sequence. *LCLS client config files are all scp-ed to /nfs/slac/g/cd/crawler/lcls\*Configs for crawling.*
|| Client crawler || config file with list of directories/files crawled || log files ||
| ALH | /afs/slac/g/cd/tools/irmis/cd_config/SLACALHDirs | /nfs/slac/g/cd/log/irmis/alh/alh_crawlerLOG.\* \\
/tmp/pvCrawler.log |
| Channel Watcher (CW) | /afs/slac/g/cd/tools/irmis/cd_config/SLACCWdirs.lst | /nfs/slac/g/cd/log/irmis/cw/cw_crawlerLOG.\* \\
/tmp/pvCrawler.log |
| Channel Archiver (CAR) | /afs/slac/g/cd/tools/irmis/cd_config/SLACCARfiles.lst \\
note: all CAR files for LCLS are scp-ed over from lcls-archsrv \\
to /nfs/slac/g/cd/crawler/lclsCARConfigs for crawling | /nfs/slac/g/cd/log/irmis/car/car_crawlerLOG.\* \\
/tmp/pvCrawler.log |
*Also runs* load_vuri_rec_client_type.pl for clients that don't handle vuri_rec_client_type records (sequence crawler only, at the moment)
{td}
{tr}
{tr}
{td}{color:#0000ff}runRecClientCleanup.csh{color}
\\
{color:#0000ff}{_}in /afs/slac/g/cd/soft/tools/irmis/cd_script{_}{color}{td}
{td}deletes all non-current rec client, rec client flag and vuri rows. Logs to /nfs/slac/g/cd/log/irmis/client_cleanupLOG.\*{td}
{tr}
{tr}
{td}{color:#008000}run_find_devices.bash{color}
\\
{color:#008000}{_}in /usr/local/lcls/tools/irmis/script/_{color}{td}
{td}for LCLS and FACET only, populates the devices_and_attributes table: a list of device names and attributes based on the LCLS PV naming convention. For PV DEV:AREA:UNIT:ATTRIBUTE, DEV:AREA:UNIT is the device and ATTRIBUTE is the attribute.{td}
{tr}

{tr}\\
{td}{color:#008000}run_load_bsa_root_names.bash{color}\\
{color:#008000}{_}in /usr/local/lcls/tools/irmis/script/_{color}{td}{td}
loads bsa_root_names table by running stored procedure LOAD_BSA_ROOT_NAMES. LCLS and FACET names.{td}
{tr}
{tr}
{td}{color:#008000}ioc_report.bash{color}
\\
{color:#008000}{_}in /usr/local/lcls/tools/irmis/script/_{color}
{color:#800080}ioc_report-facet.bash{color}
\\
{color:#800080}{_}in /usr/local/facet/tools/irmis/script/_{color}{td}
{td}runs at the end of the LCLS PV crawl (which is last) and creates the web IOC report:
\\
[http://www.slac.stanford.edu/grp/cd/soft/database/reports/ioc_report.html]{td}
{tr}
{tr}
{td}{color:#0000ff}updateMaterializedViews.csh{color}
\\
{color:#0000ff}{_}in /afs/slac/g/cd/soft/tools/irmis/cd_script{_}{color}{td}
{td}refreshes materialized view from curr_pvs{td}
{tr}
{tr}
{td}{color:#008000}findDupePVs.bash{color}
\\
{color:#008000}{_}in /usr/local/lcls/tools/irmis/script/_{color}
{color:#800080}findDupePVs-all.bash{color}
\\
{color:#800080}{_}in /usr/local/facet/tools/irmis/script/_{color}{td}
{td}finds duplicate PVs for reporting to e-mail
\\
(the \--all version takes system as a parameter){td}
{tr}
{tr}
{td}{color:#008000}copyClientConfigs.bash{color}
\\
{color:#008000}{_}in /usr/local/lcls/tools/irmis/script/_{color}
{color:#800080}copyClientConfigs-facet.bash{color}
\\
{color:#800080}{_}in /usr/local/facet/tools/irmis/script/_{color}{td}
{td}copies alh, cw and car config files to /nfs for crawling by the client crawler job{td}
{tr}
{tr}
{td}{color:#008000}refresh_curr_ioc_device.bash{color}
\\
{color:#008000}{_}in /usr/local/lcls/tools/irmis/script/_{color}{td}
{td}refreshes the curr_ioc_device table with currently booted info from ioc_device (speeds up query in the Archiver PV APEX app).
\\
Also see caget4curr_ioc_device.bash below.{td}
{tr}
{tr}
{td}{color:#008000}find_LCLSpv_count_changes.bash{color}
\\
{color:#008000}{_}in /usr/local/lcls/tools/irmis/script/_{color}
{color:#800080}find_pv_count_changes-all.bash{color}
\\
{color:#800080}{_}in /usr/local/facet/tools/irmis/script/_{color}{td}
{td}finds the IOCs with PV counts that changed in the last LCLS crawler run. This is reported in the logfile, and in the daily e-mail.
\\
(the \--all version takes system as a parameter){td}
{tr}
{tr}
{td}{color:#008000}populate_dtyp_io_tab.bash{color}
\\
{color:#008000}populate_io_curr_pvs_and_fields.bash{color}
\\
{color:#008000}run_load_hw_dev_pvs.bash{color}
\\
{color:#008000}run_parse_camac_io.bash{color}
\\
{color:#008000}{_}in /usr/local/lcls/tools/irmis/script/_{color}{td}
{td}set of scripts to populate various tables for the EPICS camdmp APEX application.{td}
{tr}
{tr}
{td}{color:#008000}write_data_validation_row.bash{color}
\\
{color:#008000}{_}in /usr/local/lcls/tools/irmis/script/_{color}
{color:#0000ff}write_data_validation_row.csh{color}
\\
{color:#0000ff}{_}in /afs/slac/g/cd/soft/tools/irmis/cd_script{_}{color}
{color:#800080}write_data_validation_row-facet.bash{color}
\\
{color:#800080}{_}in /usr/local/facet/tools/irmis/script/_{color}{td}
{td}writes a row to the controls_global.data_validation_audit table (all environments){td}
{tr}
{tr}
{td}runDataValidation.csh
\\
runDataValidation-facet.csh
\\
{_}in /afs/slac/g/cd/soft/tools/irmis/cd_script{_}{td}
{td}runs IRMISDataValidation.pl -- CD and LCLS PV data validation{td}
{tr}
{tr}
{td}{color:#b00e00}runSync.csh{color}
\\
{color:#b00e00}{_}in /afs/slac/g/cd/soft/tools/irmis/cd_script{_}{color}{td}
{td}runs data replication to MCCO, currently these objects ONLY:
* curr_pvs
* bsa_root_names
* devices_and_attributes{td}
{tr}
{tr}
{td}{color:#008000}caget4curr_ioc_device.bash{color}
\\
{color:#008000}{_}in /usr/local/lcls/tools/irmis/script/_{color}{td}
{td}does cagets to obtain IOC parameters for the curr_ioc_device table. Run as a separate cron job from the crawlers because cagets
\\
can hang unexpectedly -- they are best done in an isolated script\!{td}
{tr}
{table}
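The device-name parsing rule used by run_find_devices.bash (described in the table above) can be sketched in shell; this is an illustration of the rule, not the script's actual code, and the example PV name is hypothetical:

```shell
#!/bin/bash
# Parse a PV name of the form DEV:AREA:UNIT:ATTRIBUTE into a device
# (DEV:AREA:UNIT) and an attribute (the last field), per the LCLS PV
# naming convention. The PV name below is illustrative only.
pv="BPMS:IN20:221:X"
device="${pv%:*}"        # everything before the last ':'
attribute="${pv##*:}"    # everything after the last ':'
echo "device=$device attribute=$attribute"
# → device=BPMS:IN20:221 attribute=X
```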

#top

Crawler cron schedule and error reporting

|| time || script || cron owner/host ||
| 8 pm | runFACETPVCrawlerLx.bash | flaci/facet-daemon1 |
| 9 pm | runLCLSPVCrawlerLx.bash | laci/lcls-daemon2 |
| 1:30 am | runAllCDCrawlers.csh | cddev/slcs2 |
| 4 am | caget4curr_ioc_device.bash | laci/lcls-daemon2 |

...

Anchor: sourcecodedirectories

Source code directories

Each entry below gives: description, CVS root, production directory tree root, and details.

IRMIS software

SLAC code has diverged from the original collaboration version, and LCLS IRMIS code has diverged from the main SLAC code (i.e. we have two different versions of the IRMIS PV crawler).

CD: /afs/slac/package/epics/tools/irmisV2_SLAC
http://www.slac.stanford.edu/cgi-wrap/cvsweb/tools/irmisV2_SLAC/?cvsroot=SLAC-EPICS-Releases

LCLS:
Please note the different CVS location: the production directory for the LCLS crawlers is /usr/local/lcls/package/irmis/irmisV2_SLAC/. The LCLS "package" directory structure is NOT in CVS. So, to modify crawler code, check the code out of CVS, modify and test it, copy the modifications straight into production, then commit them to CVS.
CVS location is in tools/irmis/crawler_code_CVS
http://www.slac.stanford.edu/cgi-wrap/cvsweb/tools/irmis/crawler_code_CVS/?cvsroot=LCLS

CD:
/afs/slac/package/epics/tools/irmisV2_SLAC
LCLS:
/usr/local/lcls/package/irmis/irmisV2_SLAC


  • db/src/crawlers contains crawler source code. SLAC-specific crawlers are in directories named *SLAC
  • apps/src contains UI source code.
  • apps/build.xml is the ant build file
  • README shows how to build the UI app using ant

CD scripts

For ease and clarity, the CD scripts are also in the LCLS CVS repository under
tools/irmis/cd_script, cd_utils, cd_config
http://www.slac.stanford.edu/cgi-wrap/cvsweb/tools/irmis/crawler_code_CVS/?cvsroot=LCLS

/afs/slac/g/cd/tools/irmis


  • cd_script contains most scripts
  • cd_utils has a few scripts
  • cd_config contains client crawler directory lists

LCLS scripts

/afs/slac/g/lcls/cvs
http://www.slac.stanford.edu/cgi-wrap/cvsweb/tools/irmis/?cvsroot=LCLS

/usr/local/lcls/tools/irmis


  • script contains the LCLS pv crawler run scripts, etc.
  • util has some subsidiary scripts

FACET scripts

These scripts share the LCLS repository (with different names so they don't collide with the LCLS scripts)
http://www.slac.stanford.edu/cgi-wrap/cvsweb/tools/irmis/crawler_code_CVS/?cvsroot=LCLS

/usr/local/facet/tools/irmis


  • script contains the FACET pv crawler run scripts, etc.
  • util has some subsidiary scripts

#top

PV Crawler: what gets crawled and how

  • runSLACPVCrawler.csh, runLCLSPVCrawlerLx.bash and runFACETPVCrawlerLx.bash set up for and run the IRMIS PV crawler multiple times to hit all the boot structures and crawl groups. Environment variables, set in pvCrawlerSetup.bash, pvCrawlerSetup-facet.bash and pvCrawlerSetup.csh, point the crawler to IOC boot directories, log directories, etc. Throughout operation, errors and warnings are written to log files.
  • The IOC table in the IRMIS schema contains the master list of IOCs. The SYSTEM column designates which crawler group the IOC belongs to.
  • An IOC will be hit by the PV crawler if its ACTIVE column is 1 in the IOC table. ACTIVE is set to 0 or 1 by the crawler scripts, depending on SYSTEM, to control what is crawled; e.g. LCLS IOCs are set active by the LCLS crawler job, and NLCTA IOCs are set active by the CD crawler job.
  • An IOC will not be crawled unless its STARTTOD (LCLS/FACET) or TIMEOFBOOT (CD) boot-time PV has changed since the last crawl.
  • Crawler results will not be saved to the DB unless at least one file mod date or size has changed.
  • Crawling specific files can be triggered by changing the date (e.g. touch).
  • In the pv_crawler.pl script, there’s a mechanism for forcing the crawling of subsets of IOCs (see the code for details)
  • IOCs without a STARTTOD/TIMEOFBOOT PV will be crawled every time (although PV data is only written when the IOC's .db files have changed).
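The touch-to-recrawl trick above can be sketched as follows, using a temporary directory as a stand-in for a real IOC boot directory:

```shell
#!/bin/bash
# The crawler only saves results when a file's mod date or size has
# changed, so bumping the mod dates of an IOC's .db files forces them
# to be picked up on the next crawl. A temp dir stands in for a real
# IOC boot directory here; real paths live under the $IOC trees.
bootdir="$(mktemp -d)"
touch "$bootdir/example.db"      # stand-in for an IOC database file
touch "$bootdir"/*.db            # bump mod dates to trigger a recrawl
ls "$bootdir"
rm -rf "$bootdir"
```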

#top

TROUBLESHOOTING: post-crawl emails and data_validation_audit entries: what to worry about, and how to bypass or fix a few…

Please Note

Once a problem is corrected, the next night’s successful crawler jobs will fix up the data.

...

To bypass crawling specific IOC(s): in the IOC table, set SYSTEM for the IOC(s) in question to something other than LCLS or NLCTA (e.g. LCLS-TEMP-NOCRAWL) so they will not be crawled next time.
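As a sketch only, the update might look like the statement built below. The SYSTEM column and the LCLS-TEMP-NOCRAWL value come from the text above; "ioc_name_column" is a placeholder, so check the IOC table for the real column that identifies an IOC before running anything in SQL*Plus:

```shell
#!/bin/bash
# Build the bypass UPDATE described above. "ioc_name_column" is a
# placeholder for whatever column names the IOC in the IOC table;
# the IOC name itself is hypothetical.
ioc="ioc-example-01"
sql="UPDATE ioc SET system = 'LCLS-TEMP-NOCRAWL' WHERE ioc_name_column = '${ioc}';"
echo "$sql"
```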

#top

In the event of an error that stops crawling, here are the affected end-users of IRMIS PVs:

(Usually not an emergency; the lists just become gradually less accurate over time.)

  • CURR_PVS: IRMIS PVs create a daily current-PV list, a view called curr_pvs. curr_pvs supplies LCLS PV names to one of the AIDA names load jobs (LCLS EPICS names). The current PV and IOC lists are also queried by the IRMIS GUI and by web interfaces, and joined with data in lcls_infrastructure by Elie and co. for Greg and co.
  • BSA_ROOT_NAMES: Qualifying IRMIS PVs populate the bsa_root_names table, which is joined in with Elie's complex views to device data in lcls_infrastructure. For details on the bsa_root_names load, please have a look at the code for the stored procedure which loads it (see one of the tables above)
  • DEVICES_AND_ATTRIBUTES: PV names are parsed into device names to populate the devices_and_attributes table, which is used by lcls_infrastructure and associated processing.
  • User interfaces may be affected (but they use MCCQA, so are not affected by failure of the sync to MCCO step):
    • IRMIS gui
    • the web IOC Report
    • IRMIS iocInfo web and APEX apps
    • EPICS camdmp APEX app
    • Archiver PV search APEX app
    • ad hoc querying

#top

3 possible worst case workarounds:

  1. Synchronization to MCCO that bypasses error checking
    • If you need to run the synchronization to MCCO even though IRMISDataValidation.pl failed (i.e. the LCLS crawler ran fine, but others failed), you can run a special version that bypasses the error checking, and runs the sync no matter what. It’s:
      /afs/slac/u/cd/jrock/jrock2/DBTEST/tools/irmis/cd_script/runSync-no-check.csh

  2. Comment out code in IRMISDataValidation.pl
    • If the data validation needs to bypass a step, you can edit IRMISDataValidation.pl (see above tables for location) to remove or change a data validation step and enable the crawler jobs to complete. For example, if a problem with the PV client crawlers causes the sync to MCCO not to run, you may want to simply remove the PV Client crawler check from the data validation step.

  3. Really worst case! Edit the MCCO tables manually
    • If the PV crawlers will not complete with a sync of good data to MCCO, and you decide to wait until November for me to fix it (this is fine – the PV crawler parser is a complicated piece of code that needs tender loving care and testing!), AND accelerator operations are affected by missing PVs in one of these tables, the tables can be updated manually with the names needed to operate the machine:
      • aida_names (see Bob and Greg)
      • bsa_root_names (see Elie)
      • devices_and_attributes (see Elie)

#top

...

for the future: crawling a brand new directory structure

  • start off by testing in the SLACDEV instance!! To do this, check the PV crawler directory out of CVS into a test directory and modify db.properties to point to the SLACDEV instance; also check the crawl scripts out of CVS and setenv TWO_TASK to SLACDEV for the set*IOCsActive scripts.
  • the host running the crawler must be able to "see" Oracle (software and database) and every IOC boot directory in the structure.
  • set up env vars in run*PVCrawler.csh and pvCrawlerSetup.csh.
  • if necessary, create crawl group in IOC table, and set*IOCsActive.csh to activate.
  • add call to pv_crawler.csh to run with the new env vars...
  • hope it works!
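The environment part of the checklist above might look like this in a bash shell (the csh equivalent of the export is setenv TWO_TASK SLACDEV; the test directory is illustrative):

```shell
#!/bin/bash
# Point Oracle clients, and hence the set*IOCsActive scripts, at the
# SLACDEV instance instead of production. The checkout directory is a
# stand-in; the actual CVS module names and db.properties edits are
# done by hand as described above.
export TWO_TASK=SLACDEV
testdir="$(mktemp -d)"           # stand-in for a CVS test checkout dir
cd "$testdir"
# cvs checkout of the pv crawler and crawl scripts would go here,
# followed by editing db.properties to point at SLACDEV.
echo "TWO_TASK=$TWO_TASK"
```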

...