
The "Über" test stand is located in the northwest corner of the Dataflow Lab in building 84 (immediately to your right if you enter through the door with the combination lock). This test stand has a VSC crate (host lat-elf1), and SIU (lat-elf2), EPU0 (lat-elf3) and EPU1 (lat-elf4). Look here for a more detailed description of the available hardware.

In order to use the Über with LICOS you'll need to log in to two Linux machines. The first will usually be lat-fangorn or lat-hobbit1, which have access to the Online CVS repository and to a full installation of LAT flight software.

  • Online CVS — /nfs/slac/g/glast/online/cvsroot
  • Flight software  —

The second Linux machine will usually be lat-hobbit4 or lat-hobbit5, which have access to a LICOS installation and an abbreviated installation of flight software intended for use by LICOS Python scripts.

  • LICOS — /usr/local/online/x where x is one of
    • LICOS — The basic framework
    • LICOS_ETC — Miscellaneous extras
    • LICOS_Config — Configuration files
    • LICOS_Scripts — Test script library
  • Flight software — /usr/local/bin, /usr/local/ext/bin, /usr/local/ext/python

For definiteness this document will assume you're using lat-fangorn and lat-hobbit5.

Logging in

From your home machine log in to lat-fangorn and lat-hobbit5 from separate terminal windows. Use SSH and make sure that X11 forwarding is enabled.

ssh -X lat-fangorn

 

ssh -X lat-hobbit5

 

From Windows, make two connections using Tera Term Pro SSH.

Setting up the environment

If you have a .login, .cshrc, or .tcshrc file already set up in your home directory, make sure that on the machines discussed here it either runs the commands given below or does very little or nothing. It's very important to disturb PATH and PYTHONPATH as little as possible from the canonical settings for LICOS work; for example, you must always use the Python interpreter installed in /usr/local/bin on lat-hobbit5, since it has required third-party software such as Qt installed. It's better to use .login rather than .cshrc or .tcshrc: the canonical setup scripts don't check whether they've already been run, so establish the environment once at login and then let new processes inherit it. Of course you can't inherit aliases, but luckily there aren't any you really need.
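As a rough sketch, a .login guard along these lines keeps the setup from running twice (the LICOS_SETUP_DONE variable is made up purely for illustration):

if (! $?LICOS_SETUP_DONE) then
    setenv LICOS_SETUP_DONE 1
    # ... the canonical setup commands for the machine at hand go here ...
endif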

lat-fangorn:

setenv LD_LIBRARY_PATH                           # clear any inherited library path
set interactive=${?prompt}                       # record whether this shell is interactive
setenv GLASTROOT /afs/slac.stanford.edu/g/glast
source ${GLASTROOT}/flight/scripts/group.cshrc   # canonical GLAST flight software setup
cmx start
cmx set instance B0-x-y

where x and y are the major and minor release numbers for the LAT flight software version you need to use.
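For example, if you needed flight software release 1.2 (an illustrative version number, not a recommendation) you would type:

cmx set instance B0-1-2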

lat-hobbit5:

setenv ONLINE_PACKAGES /usr/local
setenv ONLINE_EXT /usr/local/ext
setenv ONLINE_ROOT /usr/local/online
setenv ONLINE_SCRATCH ~/projects/LICOS/LAT06x/scratch   # per-user scratch area
source ${ONLINE_ROOT}/LICOS_ETC/setup/setupLICOS.csh    # canonical LICOS setup
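Before going further, a quick sanity check that the canonical environment took hold is worthwhile (the expected output assumes your startup files haven't disturbed anything):

which python        (Should print /usr/local/bin/python.)
echo $ONLINE_ROOT   (Should print /usr/local/online.)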

Creating terminal windows

Each test stand CPU that you use will print some diagnostic information to its console. In order to see it all you'll need to create an xterm window for each CPU: VSC, SIU, EPU0, and EPU1. The following commands will create four windows nicely aligned on a single 1280x1024 pixel display. If your X11 display is running a KDE or GNOME desktop with the panel at the bottom, you will probably want to set the panel to Autohide.

lat-fangorn:

xterm -title VSC  -n VSC  -geometry 92x35+000+000 &
xterm -title SIU  -n SIU  -geometry 92x35+580+000 &
xterm -title EPU0 -n EPU0 -geometry 92x35+000+492 &
xterm -title EPU1 -n EPU1 -geometry 92x35+580+492 &

Once again, it's important that your .cshrc or .tcshrc do little or nothing so that the xterms inherit the environment undisturbed. If you need to, you can bypass your startup files like this:

/bin/tcsh -f -c 'exec xterm -title VSC  -n VSC  -geometry 92x35+000+000' &

and so on. The -f prevents tcsh from executing your startup files, and the exec causes the new tcsh process to replace itself with the xterm rather than lingering as an extra parent shell. Doing this for every command you use will get old really fast, though.
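If it does, one hypothetical remedy is a tcsh alias along these lines (the name clean is made up, and csh alias quoting is fragile, so treat this as a sketch):

alias clean '/bin/tcsh -f -c "exec \!*" &'
clean xterm -title SIU -n SIU -geometry 92x35+580+000

Here \!* is the csh history escape that splices the alias arguments into the command line.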

Bringing up the VSC

You'll need to reboot the Virtual Spacecraft server (lat-elf1), then load and start the VSC software. You'll connect to lat-elf1's serial port console using a command called xyplex, which is just a script wrapped around telnet. All the lat-elf machines have their serial port consoles connected to a console server device named lat-shelob (five points if you guessed it was sold by Xyplex, Inc.). Each TCP port on the server corresponds to one of the lat-elf machines; all the xyplex script does is figure out which port belongs to the given elf and then use that port to connect via telnet to lat-shelob.
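Conceptually the wrapper boils down to something like the sketch below; the 2000+N port convention is invented for illustration, and the real script's mapping may well differ.

#!/bin/tcsh -f
# Conceptual sketch of xyplex, not the real script.
set elf = $1                              # e.g. lat-elf1
set n = `echo $elf | sed 's/lat-elf//'`   # pull out the elf number
@ port = 2000 + $n                        # made-up port convention
telnet lat-shelob $port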

lat-fangorn(VSC window):

xyplex lat-elf1
                (You may have to type Enter to get the VxWorks prompt '-> '.)
reboot          (Control-X will also work.)
                (Wait for VxWorks to reboot and give you the '-> ' prompt again.)
^]              (That's Control-]. Wait for the telnet prompt 'telnet> '.)
quit
                (You should be back in tcsh now.)
fmx xyplex /nfs/slac/g/glast/online/VSC/vsc.fmx --tag=mv2304 --target=lat-elf1

The fmx command will read the vsc.fmx file, write a VxWorks shell script in the current directory, connect to lat-elf1 via xyplex, and then feed VxWorks the newly-created script. The script will load a bunch of object files that make up the VSC server, then call initialization routines, and finally start the VSC main function.  The VSC window will then be left in a VxWorks shell session. The VSC server itself will be left in a state in which it can respond to simple commands but is unable to relay telemetry or do complex scheduling of requests.
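As a rough sketch, the generated VxWorks script amounts to something like the following, where the object file and routine names are invented for illustration:

cd "/nfs/slac/g/glast/online/VSC"
ld < vscServer.o     (Load an object file making up the VSC server.)
vscInit()            (Hypothetical initialization routine.)
vscStart()           (Hypothetical call that starts the VSC main function.)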

Bringing up the SIU (and starting the VSC scheduler)

You can't be sure what state the SIU was left in by the previous user, so it's best to reboot it.

lat-fangorn(SIU):

xyplex lat-elf2
^X              (Type Control-X as soon as the connection is made.)

The SIU will probably not give any visible response to Control-X, but it will enter primary boot mode. It's up to you to bring it up the rest of the way.

lat-hobbit5:

python          (This should be /usr/local/bin/python.)
from LICOS.scriptEngine.ScriptEngineConnector import ScriptEngineConnector
vsc = ScriptEngineConnector("lat-elf1", None, None)
vsc.start()
vsc.bootSIU(0)  (Don't exit python yet.)

The call to start() should result in a message in the VSC terminal window saying that the VSC scheduler has been started. The call to bootSIU() should produce messages in the SIU window to the effect that the SIB has been found, and it will leave the SIU in a VxWorks shell session. Now you can bring up the SIU in much the same way as you did the VSC.

lat-fangorn(SIU):

^]
quit
fmx xyplex /nfs/slac/g/glast/online/VSC/siu.fmx --tag=rad750 --target=lat-elf2

 

At this time the SIU isn't completely up. The CPU and the SIB (with its 1553 interface) are running, but the LCB isn't turned on yet. That comes next when you turn on the EPUs.

Bringing up the EPUs

Power must now be supplied to the EPUs. You do this with VSC commands generated in your Python session.
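As a purely hypothetical sketch, the session might continue along these lines; powerOnEPU() is a made-up method name standing in for whatever the real ScriptEngineConnector call is, so check the LICOS script library for the actual API:

vsc.powerOnEPU(0)   (Hypothetical: power on EPU0, lat-elf3.)
vsc.powerOnEPU(1)   (Hypothetical: power on EPU1, lat-elf4.)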

 

 

 
