The "Über" test stand is located in the northwest corner of the Dataflow Lab in building 84 (immediately to your right if you enter through the door with the combination lock). This test stand has a VSC crate (host lat-elf1), and SIU (lat-elf2), EPU0 (lat-elf3) and EPU1 (lat-elf4). Look here for a more detailed description of the available hardware. Ignore the information about boot flags; I've never managed to get the Über to autoboot.
In order to use the Über with LICOS you'll need to log in to two Linux machines. The first will usually be lat-fangorn or lat-hobbit1, which have access to the Online CVS repository and to a full installation of the LAT flight software.
- Online CVS — /nfs/slac/g/glast/online/cvsroot
- Flight software — See the value of, e.g., CMX_I_VSC after you set the instance with cmx (see below).
The second Linux machine will usually be lat-hobbit4 or lat-hobbit5, which have access to a LICOS installation and an abbreviated installation of flight software intended for use by LICOS Python scripts.
- LICOS — /usr/local/online/x where x is one of
  - LICOS — The basic framework
  - LICOS_ETC — Miscellaneous extras
  - LICOS_Config — Configuration files
  - LICOS_Scripts — Test script library
- Flight software — /usr/local/ext/bin, /usr/local/ext/python
For definiteness this document assumes that you're using lat-fangorn and lat-hobbit5.
Logging in
From your home machine log in to lat-fangorn and lat-hobbit5 from separate terminal windows. Use SSH and make sure that X11 forwarding is enabled.
ssh -X lat-fangorn
ssh -X lat-hobbit5
From Windows make two connections using Tera Term Pro SSH.
Setting up the environment
If you already have a .login, .cshrc, or .tcshrc file in your home directory, make sure that on the machines discussed here it either runs the commands given below or does very little or nothing. For LICOS work it's important to disturb PATH and PYTHONPATH as little as possible from the standard settings. For example, you must always use the Python interpreter installed in /usr/local/bin on lat-hobbit5, since it has the required third-party software (such as Qt) installed. It's better to use .login rather than .cshrc or .tcshrc: the standard setup scripts don't check whether they've already been run, so establish the environment once and let new processes inherit it. Of course you can't inherit aliases, but luckily there aren't any you really need.
lat-fangorn:
setenv LD_LIBRARY_PATH
set interactive=${?prompt}
setenv GLASTROOT /afs/slac.stanford.edu/g/glast
source ${GLASTROOT}/flight/scripts/group.cshrc
cmx start
cmx set instance B0-x-y
where x and y are the major and minor release numbers for the LAT flight software version you need to use.
lat-hobbit5:
setenv ONLINE_PACKAGES /usr/local
setenv ONLINE_EXT /usr/local/ext
setenv ONLINE_ROOT /usr/local/online
source ${ONLINE_ROOT}/LICOS_ETC/setup/setupLICOS.csh
Creating terminal windows
Each test stand CPU that you use will print some diagnostic information to its console. In order to see it all you'll need to create an xterm window for each CPU: VSC, SIU, EPU0, and EPU1. The following commands will create four windows nicely aligned on a single 1280x1024 pixel display. If your X11 display is a KDE or a GNOME desktop and you have the panel on the bottom you will probably want to set the panel to Autohide.
lat-fangorn:
xterm -title VSC  -n VSC  -geometry 92x35+000+000 &
xterm -title SIU  -n SIU  -geometry 92x35+580+000 &
xterm -title EPU0 -n EPU0 -geometry 92x35+000+492 &
xterm -title EPU1 -n EPU1 -geometry 92x35+580+492 &
Once again, it's important that your .cshrc or .tcshrc do little or nothing so that the xterms inherit the environment undisturbed. If you need to you can bypass your startup files like this:
/bin/tcsh -f -c 'exec xterm -title VSC -n VSC -geometry 92x35+000+000' &
and so on. The -f prevents tcsh from executing your startup files, and the exec causes the xterm to replace the new tcsh process, again bypassing the usual startup-file execution for processes spawned from the shell. Doing this for every command you use will get old really fast, though.
Bringing up the VSC
You'll need to reboot the Virtual Spacecraft server (lat-elf1) then load and start the VSC software. You'll connect to lat-elf1's serial port console using a command called xyplex, which is just a script wrapped around telnet. All the lat-elf machines have their serial port consoles connected to a console server device named lat-shelob (five points if you guessed it was sold by Xyplex, Inc.). Each TCP port on the server corresponds to one of the lat-elf machines; all the xyplex script does is figure out which port belongs to the given elf then use that port to connect via telnet to lat-shelob.
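For the curious, here is a minimal Python sketch of the idea behind xyplex: look up the console-server port for the requested elf, then hand off to telnet. The port numbers below are made up for illustration only; the real mapping lives inside the xyplex script itself.

import os
import sys

CONSOLE_SERVER = "lat-shelob"

# Hypothetical elf-name -> console-server-port table (NOT the real values).
PORTS = {
    "lat-elf1": 2001,
    "lat-elf2": 2002,
    "lat-elf3": 2003,
    "lat-elf4": 2004,
}

def main():
    if len(sys.argv) != 2 or sys.argv[1] not in PORTS:
        sys.exit("usage: xyplex-sketch <lat-elfN>")
    port = PORTS[sys.argv[1]]
    # Replace this process with telnet, which is where the real script ends up too.
    os.execvp("telnet", ["telnet", CONSOLE_SERVER, str(port)])

if __name__ == "__main__":
    main()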
lat-fangorn(VSC window):
xyplex lat-elf1     (You may have to type Enter to get the VxWorks prompt '-> '.)
reboot              (Control-X will also work.)
                    (Wait for VxWorks to reboot and give you the '-> ' prompt again.)
^]                  (That's control-]. Wait for the telnet prompt 'telnet> '.)
quit                (You should be back in tcsh now.)
fmx xyplex /nfs/slac/g/glast/online/VSC/vsc.fmx --tag=mv2304 --target=lat-elf1
The fmx command will read the vsc.fmx file, write a VxWorks shell script in the current directory, connect to lat-elf1 via xyplex, and then feed VxWorks the newly-created script. The script will load a bunch of object files that make up the VSC server, then call initialization routines, and finally start the VSC main function. The VSC window will then be left in a VxWorks shell session. The VSC server itself will be left in a state in which it can respond to simple commands but is unable to relay telemetry or do complex scheduling of requests.
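You never need to do the feeding step yourself, but as a rough, purely illustrative sketch of that last part, here is the shape of "feed a script of commands to a VxWorks shell over the console server" in Python's telnetlib. The port number and the command-file name are hypothetical; fmx figures all of this out for you.

import telnetlib

CONSOLE_SERVER = "lat-shelob"
PORT = 2001                              # hypothetical console-server port for lat-elf1

tn = telnetlib.Telnet(CONSOLE_SERVER, PORT)
for line in open("vsc_boot.cmds"):       # hypothetical file, one VxWorks command per line
    tn.write(line.rstrip() + "\n")
    tn.read_until("-> ")                 # wait for the VxWorks shell prompt before continuing
tn.close()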
Bringing up the SIU (and starting the VSC scheduler)
You can't be sure of what state the SIU was left in by the previous user so it's best to reboot it.
lat-fangorn(SIU):
xyplex lat-elf2
^X
The SIU will probably not give any visible response to control-X but it will enter primary boot mode. It's up to you to bring it up the rest of the way.
lat-hobbit5:
python              (This should be /usr/local/bin/python.)
from LICOS.scriptEngine.ScriptEngineConnector import ScriptEngineConnector
vsc = ScriptEngineConnector("lat-elf1", None, None)
vsc.start()
vsc.bootSIU(0)
                    (Don't exit python yet.)
The call to start() should result in a message in the VSC terminal window saying that the VSC scheduler has been started. The call to bootSIU() should produce messages in the SIU window to the effect that the SIB has been found, and leave the SIU in a VxWorks shell session. Now you can bring up the SIU in much the same way as you did the VSC.
lat-fangorn(SIU):
^]
quit
fmx xyplex /nfs/slac/g/glast/online/VSC/siu.fmx --tag=rad750 --target=lat-elf2
At this time the SIU isn't completely up. The CPU and the SIB (with its 1553 interface) are running but the LCB isn't turned on yet. That comes next when you turn on the EPUs.
Bringing up the EPUs
Power must now be supplied to the EPUs. You do this with VSC commands generated in your Python session.
lat-hobbit5:
vsc.mainFeedOn(siuId=1, pduId=1)
vsc.ssr(1)
vsc.powerOnEpuOnly(epuId=0, pduId=1)
vsc.powerOnEpuOnly(epuId=1, pduId=1)
                    (Wait 30 seconds.)
vsc.bootEPU(0)
vsc.bootEPU(1)
The call to mainFeedOn() should produce a message in the SIU window saying that the LCB is working again. Notice that siuId is one, not zero, since this function uses the physical CPU ID rather than the logical ID used by all the other commands you've issued. Enabling the Solid State Recorder seems to be required for the EPUs to work properly even if you will not be generating science data.
In theory both EPUs can be booted at once. If it doesn't seem to work try rearranging the power-on and boot commands to do them one at a time. Don't forget to wait between power-on and boot commands. If you still have trouble try turning on the power, connecting to an EPU with xyplex, typing control-X, then issuing the boot command.
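If you'd rather script the one-at-a-time sequence than type it, something like the following, run in the same Python session in which vsc was created, will do. It just serializes the same calls shown above with the recommended wait in between.

import time

# Power on and boot the EPUs one at a time, waiting between power-on and boot.
for epu in (0, 1):
    vsc.powerOnEpuOnly(epuId=epu, pduId=1)
    time.sleep(30)                       # give the EPU time to come up before booting it
    vsc.bootEPU(epu)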
Once you can get a VxWorks prompt on EPU0:
lat-fangorn(EPU0):
^]
quit
fmx xyplex /nfs/slac/g/glast/online/VSC/epu.fmx --tag=rad750 --target=lat-elf3
The procedure is the same for EPU1, only using its window and changing the name of the target to lat-elf4.
Starting the proxies
The LICOS script engine doesn't connect directly to the VSC to get its telemetry. Instead it connects with up to three proxy servers, which in turn expose TCP ports for the script engine. A different set of ports is supposed to be used by each person so that several people can be running LICOS sessions on the same Linux machine at the same time. The set of ports is derived from the proxyPortBase parameter in the [vsc] section of the VSC configuration file. You can get a standard VSC config file on lat-hobbit5 from $ONLINE_ROOT/LICOS_ETC/config/vsc_tb.cfg. Your personal value of proxyPortBase must be assigned by an Authority.
The proxies also dump the telemetry they receive into files. Their locations can be controlled with parameters in the [paths] section of the VSC config file.
[paths]
# archive_path is where the raw output of the proxies ends up
archive_path = /YOUR-WORK-AREA/scratch/stage
# ingest_path is where ISOC is expecting its Level 0 output
ingest_path = /YOUR-WORK-AREA/scratch/isoc
# lsf_path is where LsfWriter and analysis engine have lsf data
lsf_path = /YOUR-WORK-AREA/scratch/lsf
IMPORTANT: Make sure that the source code number in the [tlmdb] section is 79. This identifies the source of any telemetry you save in the ISOC archive as coming from a test run and not from the real spacecraft.
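If you want a quick sanity check of your personal copy before going further, Python's ConfigParser (Python 2 module name, matching the interpreter used here) can read it. This sketch just prints proxyPortBase and dumps the whole [tlmdb] section so you can eyeball the source code number without my guessing at its exact option name.

from ConfigParser import ConfigParser    # Python 2 module name

cfg = ConfigParser()
cfg.read("vsc.cfg")                      # your personal copy of the VSC config file

# The ports your proxies use are derived from this value.
print "proxyPortBase =", cfg.get("vsc", "proxyPortBase")

# Dump [tlmdb] so you can confirm the source code number is 79.
for name, value in cfg.items("tlmdb"):
    print "[tlmdb]", name, "=", value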
Assuming you have your VSC config file in ./vsc.cfg you can start the proxy manager using the command below. You can then use its GUI to activate the proxies individually or collectively.
lat-hobbit5:
^D                  (Exit your Python session and get back to the shell.)
xterm -title Proxies -n Proxies -e ${ONLINE_ROOT}/LICOS/tools/proxy/ProxyManager.py \
    --vscip lat-elf1 \
    --config vsc.cfg &
Running the script engine
There is yet more configuration to do before you can run the script engine. First, you need to create a file called runId.cfg which looks like this:
[RunIdData]
runid = 0

[MachineData]
machineid = ???
The run ID is just a counter that gets incremented every time you start a run with the script engine (see below). The machine ID is another number you'll have to get from an Authority; it identifies you as the source of any telemetry that you save in the ISOC raw data archive.
You will also need to make a scriptEngine.cfg file; I've attached a template to this Confluence page. The important parts are the first two sections:
[paths]
appdir = $ONLINE_ROOT/LICOS_Scripts/seApps
runiddir = /YOUR-WORK-AREA
reportdir = $ONLINE_SCRATCH/reports
logdir = $ONLINE_SCRATCH/logs
exportdir = $ONLINE_SCRATCH/exports
snapshotdir = $ONLINE_SCRATCH/snaps
datadir = $ONLINE_SCRATCH/data
reposdir = $ONLINE_ROOT/LICOS/repos

[logging]
lognbl = 0
loglevel = DEBUG
loghost = localhost
ONLINE_SCRATCH is an environment variable that you should define as /YOUR-WORK-AREA/scratch. You will need to make the directory scratch and its subdirectories reports, logs, exports, snaps, and data. The runiddir parameter tells the script engine where to look for your runId.cfg file.
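If you'd rather not create the scratch tree by hand, a few lines of Python will do it (substitute your actual work area for /YOUR-WORK-AREA, just as in the config file above):

import os

# Create the scratch area and the subdirectories the script engine expects.
scratch = "/YOUR-WORK-AREA/scratch"
for sub in ("reports", "logs", "exports", "snaps", "data"):
    path = os.path.join(scratch, sub)
    if not os.path.isdir(path):
        os.makedirs(path)                # creates the scratch directory itself too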
In the [logging] section you will normally run with logging disabled (lognbl = 0). If you set lognbl to one, the log files will be written to the logs directory.
IMPORTANT: At present enabling logging seems to have the side effect that only one instance of the script engine can run on the host machine at a time, so enable it only when you must, e.g., for test script V&V.