
The "Über" test stand is located in the northwest corner of the Dataflow Lab in building 84 (immediately to your right if you enter through the door with the combination lock). This test stand has a VSC crate (host lat-elf1), and SIU (lat-elf2), EPU0 (lat-elf3) and EPU1 (lat-elf4). Look herefor a more detailed description of the available hardware. Ignore the information about boot flags; I've never managed to get the Über to autoboot.

In order to use the Über with LICOS you'll need to log in to two Linux machines. The first will usually be lat-fangorn or lat-hobbit1, which have access to the Online CVS repository and to a full installation of LAT flight software.

  • Online CVS — /nfs/slac/g/glast/online/cvsroot
  • Flight software — See the value of, e.g., CMX_I_VSC after you set the instance with cmx (see below).

The second Linux machine will usually be lat-hobbit4 or lat-hobbit5, which have access to a LICOS installation and an abbreviated installation of flight software intended for use by LICOS Python scripts.

...

  • LICOS_Scripts — Test script library
  • Flight software — /usr/local/bin, /usr/local/ext/bin, /usr/local/ext/python

...

lat-fangorn(SIU):

No Format
xyplex lat-elf2
^X

The SIU will probably not give any visible response to control-X but it will enter primary boot mode. It's up to you to bring it up the rest of the way.

...

The procedure is the same for EPU1, only using its window and changing the name of the target to lat-elf4.
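For example, the EPU1 session in its own window would be (same two keystrokes, different target):

No Format

xyplex lat-elf4
^X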

Starting the proxies


The VSC allows only one client for each type of telemetry stream, but in a LICOS session several programs may have to monitor the same streams independently:

  • the script engine,
  • the Current Value Table a.k.a. the CVT (not covered here),
  • the ISOC archiver (not covered here).

(However, only the script engine needs a command connection to the VSC.)

The telemetry streams are therefore divided into three broad categories with a proxy server for each:

  • VSC/LAT ordinary telemetry (housekeeping),
  • VSC/LAT diagnostic,
  • Science.

The LICOS script engine et al. connect with some or all of the proxies using TCP port numbers determined by the proxyPortBase parameter in the [vsc] section of the VSC configuration file. A different set of ports is supposed to be used by each person so that several people can be running LICOS sessions on the same Linux machine at the same time. You can get a standard VSC config file on lat-hobbit5 from $ONLINE_ROOT/LICOS_ETC/config/vsc_tb.cfg.

IMPORTANT: Your personal value of proxyPortBase must be assigned by an Authority.
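For illustration only, here is what the relevant piece of your personal copy of the config file might look like. The section and parameter names come from the standard file; the port value is made up, so use the one you were assigned:

No Format

[vsc]
# Base of the block of TCP ports the proxies will expose to the
# script engine et al. 52000 is a made-up example value.
proxyPortBase = 52000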

The proxies also dump the telemetry they receive into files. Their locations can be controlled with parameters in the [paths] section of the VSC config file.

No Format

[paths]
# archive path is where the raw output of the proxies ends up
archive_path = /YOUR-WORK-AREA/scratch/stage

# ingest_path is where ISOC is expecting its Level 0 output
ingest_path  = /YOUR-WORK-AREA/scratch/isoc

# lsf path is where LsfWriter and analysis engine have lsf data
lsf_path = /YOUR-WORK-AREA/scratch/lsf

IMPORTANT: Make sure that the source code number in the [tlmdb] section is 79. This identifies the source of any telemetry you save in the ISOC archive as coming from a test run and not from the real spacecraft.
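As a sketch, that section might look like this (the key name below is a guess on my part; check the [tlmdb] section of vsc_tb.cfg for the exact spelling):

No Format

[tlmdb]
# 79 marks this as test-stand data, not telemetry from the real spacecraft.
source = 79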

Assuming you have your VSC config file in ./vsc.cfg you can start the proxy manager using the command below. You can then use its GUI to activate the proxies individually or collectively. You do have to start the ones your script needs telemetry from before you launch the script engine.

lat-hobbit5:

No Format

^D                       (Exit your Python session and get back to the shell.)
xterm -title Proxies -n Proxies -e ${ONLINE_ROOT}/LICOS/tools/proxy/ProxyManager.py \
    --vscip   lat-elf1 \
    --config  vsc.cfg &

Running the script engine

There is yet more configuration to do before you can run the script engine. First, you need to create a file called runId.cfg which looks like this:

No Format

[RunIdData]
runid = 0

[MachineData]
machineid = ???

The run ID is just a counter that gets incremented every time you start a run with the script engine (see below). The machine ID identifies you as the source of any telemetry that you save in the ISOC raw data archive.

IMPORTANT: You will need to get your personal machine ID from an Authority.

You will also need to make a scriptEngine.cfg file; I've attached a template to this Confluence page. The important parts are the first two sections:

No Format

[paths]
appdir = $ONLINE_ROOT/LICOS_Scripts/seApps

runiddir = /YOUR-WORK-AREA

reportdir = $ONLINE_SCRATCH/reports
logdir    = $ONLINE_SCRATCH/logs

exportdir   = $ONLINE_SCRATCH/exports
snapshotdir = $ONLINE_SCRATCH/snaps
datadir     = $ONLINE_SCRATCH/data

reposdir    = $ONLINE_ROOT/LICOS/repos

[logging]
lognbl = 0
loglevel = DEBUG
loghost = localhost

ONLINE_SCRATCH is an environment variable that you should define as /YOUR-WORK-AREA/scratch. You will need to make the directory scratch and its subdirectories reports, logs, exports, snaps, and data. The runiddir parameter tells the script engine where to look for your runId.cfg file.
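In a bash-like shell that setup would look like this (substitute your actual work area for /YOUR-WORK-AREA):

No Format

export ONLINE_SCRATCH=/YOUR-WORK-AREA/scratch
mkdir -p $ONLINE_SCRATCH/reports $ONLINE_SCRATCH/logs \
         $ONLINE_SCRATCH/exports $ONLINE_SCRATCH/snaps $ONLINE_SCRATCH/data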

In the [logging] section you will normally run with logging disabled (lognbl = 0). If you set it to 1, the log files will be written to the logs directory.

IMPORTANT: At present it seems that only one instance of the script engine at a time can run on the host machine with logging enabled, so do it only when you must, e.g., for test script V&V.
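For a V&V run, then, the section would read the same as the template except with logging switched on:

No Format

[logging]
lognbl = 1         # logging enabled; only one such script engine per host
loglevel = DEBUG
loghost = localhost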

One last pair of config files: scriptOpts.cfg and your application config file. The scriptOpts file consists of sections that look like this:

No Format

[YOUR-SCRIPT'S-MAIN-CLASS-NAME]
intLimitChk=0
intLimitChkOnlyOnFail=0
askOnTestFail=0
askOnPhaseFail=0
pauseAfterPhase=0

Changing these settings goes beyond the scope of this page because they refer to the internal organization of your test application and its config file, both of which are topics you need to ask an Expert about (no, not Anne Expert of Brontosaurus Theory infamy).

At last you can launch the ScriptEngine:

lat-hobbit5:

No Format

${ONLINE_ROOT}/LICOS/start/startScriptEngine.py \
    --config      scriptEngine.cfg \
    --vscConfig   vsc.cfg \
    --scriptOpts  scriptOpts.cfg \
    --server      lat-elf1