Related:
SLAC/EPP/HPS Public
Jefferson Lab/Hall B/HPS Run Wiki
S30XL-LESA/LDMX
Checkout and build hps-java using:
mkdir /scratch && cd /scratch
git clone https://github.com/JeffersonLab/hps-java && cd hps-java
git checkout online-recon-dev
mvn clean install -DskipTests
cd online-recon
# This will install the online recon run scripts to your scratch dir...
mvn install -DskipTests -DskipCheckstyle -DassembleDirectory=/scratch
# Optionally add the script dir to the path.
export PATH=/scratch/bin:$PATH
cd /scratch/
The online reconstruction run scripts should now be installed to the /scratch/bin dir.
The server cannot start up without an accessible, running ET ring.
The important ET connection parameters for this can be specified in an initial server config file like:
et.buffer=/tmp/ETBuffer
et.host=localhost
et.port=11111
These are the hostname where the ET ring is running, the port it is using for connections, and the name of the buffer file. They need to match either the actual running ET ring in Hall B or one started by the user on their machine.
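The config file can be written directly from the shell; the file name my.cfg matches the server command used later on this page, and the values are the examples from above (adjust them to match your actual ET ring):

```shell
# Write an initial server config file with the ET connection settings.
# The values below are the examples from this page, not required defaults.
cat > my.cfg <<'EOF'
et.buffer=/tmp/ETBuffer
et.host=localhost
et.port=11111
EOF
```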
You can now launch the server using a command like:
hps-recon-server --host localhost --port 22222 -w $PWD/stations -c my.cfg &
The --host switch should specify a valid network hostname or IP address for the server to use.
The --port switch can be used to set an alternate port for the TCP/IP socket to listen on (the default is 22222).
For testing and development, it is preferable to use the generic localhost, but this is not the default. Otherwise, this should be a valid network (DNS) name, or it can be left blank to use the actual system name.
Instead of writing to your terminal, the server will create a log file at logs/server.log. You can check the server's log messages, including those from handling client requests, with a command like tail -f logs/server.log.
You can leave out the --host to have the server and client use the actual system name (usually equivalent to the result of the hostname command on Linux).
Now you can connect to the running server using the online recon client:
hps-recon-client --host localhost -c station.prop
An initial configuration can be provided to the client using the -c switch.
You can create an initial file called station.prop which will usually have settings specific to the data format/year like detector name, run number, reconstruction steering file:
lcsim.detector=HPS-PhysicsRun2016-Pass2
lcsim.run=7798
lcsim.steering=/org/hps/steering/recon/PhysicsRun2016OnlineRecon.lcsim
These settings are usually going to be specific to the year of the HPS data being read from the ET ring. The above are settings based on the 2016 physics reconstruction.
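The file can be created from the shell in the same way as the server config; the settings below are the 2016 physics reconstruction examples from above:

```shell
# Create station.prop with the 2016 example settings shown on this page.
# Change the detector, run, and steering for other data-taking periods.
cat > station.prop <<'EOF'
lcsim.detector=HPS-PhysicsRun2016-Pass2
lcsim.run=7798
lcsim.steering=/org/hps/steering/recon/PhysicsRun2016OnlineRecon.lcsim
EOF
```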
The client can also be used without a config file (the server may have been started with one to provide valid defaults).
If you run the client without any command options, it will open the interactive online reconstruction console, which can be used to configure, create, start/stop, and remove stations that run the HPS physics event reconstruction on data from the ET ring.
hps-recon-client --host localhost
Whenever you see online> it means the command is run in the online reconstruction console, not the system shell (like bash).
Type help into the console to show the documentation for the client command line interface:
online>help
This is the output of the above command:
GENERAL
help - print general information
help [cmd] - print information for specific command
exit - quit the console

SETTINGS
port [port] - set the server port
host [host] - set the server hostname
file [filename] - write server output to a file
append [true|false] - true to append to output file or false to overwrite
terminal - redirect server output back to the terminal

COMMANDS
config - Set new server configuration properties
create - Create a new station
list - List station information in JSON format
log - Tail log file of station (hit any key to stop tailing)
remove - Remove a station that is inactive
save - Save the current set of plots to a ROOT or AIDA file
set - Set a configuration property
shutdown - Shutdown the server
start - Start a station that is inactive
status - Show server and station status
stop - Stop a station
You can set the connection settings to the server using the port and host commands, though the defaults should usually work fine.
If the server was started with a custom port or it is running with a particular hostname, this can be specified like:
online> port 12345
online> host myactualhostname
By default, output from the server read by the client is printed to the console, but it can be saved to a log file using commands such as:
online> file client.log
online> append true
Configuration of the online reconstruction stations can be set using:
# lcsim detector name for conditions
online>set lcsim.detector HPS-PhysicsRun2016-Pass2
# run number for conditions
online>set lcsim.run 7798
# lcsim steering resource
online>set lcsim.steering /org/hps/steering/recon/PhysicsRun2016OnlineRecon.lcsim
FIXME: This example is very specific to 2016 data.
The steering file should contain a Driver that extends the RemoteAidaDriver, in order to provide station plot data to the server.
A single station with the above configuration can be created using:
online>create 1
The argument is the number of stations you want the server to create, which can typically be scaled up to around the number of cores on your machine.
online>create 8
You will need to test how many stations you can run simultaneously given the specific configuration being used and the number of plots being created/updated.
Stations that have been created but not started are in the inactive state. Stations which are inactive have no running system process and no registration in the plot aggregation engine that adds plots from different stations, so their plots will not be shown in the remote tree.
Inactive stations can be started using a command like:
online>start
The start command can also take a list of station IDs to start:
online>start 1 2 3 ...
Many of the online recon commands are similar in that they take no arguments (usually meaning all stations or all stations in a certain state) or a space-delimited list of station IDs as in the above example.
Once a station is started, it will process events from the ET ring, or it will sit in wait mode until more data appears.
The status of the stations and the ET ring can be checked using:
online>status
Log data can be streamed from each station using its station ID as an argument:
online>log 1
When streaming log data from one of the stations you can hit any key on the keyboard to exit.
Stop all jobs:
online>stop
Stop stations by ID:
online>stop 1 2 3
Remove all inactive stations (active stations must be stopped before they can be removed):
online>remove
Remove stations by ID:
online>remove 1 2 3
Save some plots to an AIDA file:
online>save myplots.aida
JAS3 can be used to view AIDA plots.
Save some plots to a ROOT file:
online>save myplots.root
You can use the rootbrowse command to look at the plots saved in the ROOT format.
When you are all done, shut down the server using a command like:
online>shutdown 5
This will wait 5 seconds before stopping and destroying all stations and cleanly shutting down the server and its connection to the ET ring.
The client can also execute console commands from a file given as an argument, instead of running interactively:

hps-recon-client --host localhost my.cmd
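A sketch of such a command file, using only console commands from the help listing above and the 2016 example settings (the name my.cmd follows the example; the commands themselves would need adjusting for your configuration):

```shell
# Write a batch of console commands to my.cmd: configure the stations,
# create four of them, start them all, and show the resulting status.
cat > my.cmd <<'EOF'
set lcsim.detector HPS-PhysicsRun2016-Pass2
set lcsim.run 7798
set lcsim.steering /org/hps/steering/recon/PhysicsRun2016OnlineRecon.lcsim
create 4
start
status
EOF
```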
First install Tomcat and start it (not covered here). I used the 9.0.41 version.
Download and build the webapp from github:
git clone https://github.com/JeremyMcCormick/hps-online-recon-webapp
cd hps-online-recon-webapp
mvn clean install war:war -DskipTests
Now go to the Tomcat manager webpage, usually at:
http://localhost:8080/manager/html
Click Choose File in the manager and select the war file at hps-online-recon-webapp/target/HPSRecon.war
Click Deploy to start the webapp. If there is a previous installation of the war file, then first click on Undeploy under Applications to remove it, or this step will fail.
If there is trouble with the application, error messages can be found in the Tomcat log files at $TOMCAT_DIR/logs.
The connection to the remote AIDA server is defined in hps-online-recon-webapp/src/main/webapp/WEB-INF/web.xml
The default value is:
<context-param>
    <param-name>hps.online.recon.connection</param-name>
    <param-value>//localhost:3001/HPSRecon</param-value>
</context-param>
Should the AIDA server's settings be changed, this MUST be updated and the webapp redeployed.
To force a reload of cached AIDA data (e.g. at the start of a new run), click Reload on the webapp in the admin panel:
Anyone can view the combined plots in a webpage, provided the aggregator is running and the URL is accessible.
Go to the webpage to view the plots, which would typically be:
http://localhost:8080/HPSRecon/
If there is an error, probably the online reconstruction server is not running.
The plots should show in the frame on the left of the window:
They are organized by folders (directories) in the AIDA tree.
Click on one of the plots to show it:
The remote AIDA tree can be viewed in real-time using JAS3 as a client.
Installation of JAS3 is not covered. You need to install the "Remote AIDA Plugin" from the Plugin Manager as well as the standard AIDA plugin (no other plugins should be required).
Once installed and restarted (ignore any errors from the plugin manager), go to Tools > Remote AIDA > Connect and enter the correct settings for your remote AIDA tree:
These are the defaults but this may differ depending on the server's configuration.
Plots should refresh automatically (every 2s by default).
Click on the combined or remote plots in the tree and style them interactively...
TODO: Run a Java main() or Jython script to create AIDA plot pages/regions/styles/etc.
If you are running outside Hall B for testing purposes, the ET project is required for creating a local ET ring and using a producer to stream EVIO files onto it. (This is not needed for actual data runs in Hall B.)
# download the sources from https://coda.jlab.org/drupal/system/files/et-16.1.tgz and untar
tar -zxvf et-16.1.tgz
# fix up build files (for some reason this is needed even with Python 2?)
cd et-16.1.GIT
2to3-2.7 -w ./coda.py ./SConstruct
autopep8 -i SConstruct
autopep8 -i coda.py
# build it
export CODA=`pwd`
scons install
# setup the environment (need to do this every time you run)
export LD_LIBRARY_PATH=${CODA}/Linux-x86_64/lib
export PATH=${CODA}/Linux-x86_64/bin:${CODA}/Linux-x86_64/bin/examples:$PATH
The scons command may not be present in your environment (installation not covered here).
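Before attempting the build, it may help to verify that the tools used above are actually on the PATH; this small sketch only checks for them and installs nothing:

```shell
# Check that the build tools used in the ET build steps are available.
# scons in particular may need to be installed separately.
for tool in scons 2to3-2.7 autopep8; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found"
    else
        echo "$tool: MISSING"
    fi
done
```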
FIXME: The ET installation requires some hacking of the build files. (Checkin to git fork?)
If you are testing and not taking data from a run, then you need to start your own ET server to provide EVIO events, using a command like:
et_start -rb 8000000 -sb 8000000 -nd -p 11111 -f /tmp/ETBuffer -s 20000 -n 1000 -v -d
This starts the ET ring on port 11111 and uses a "standard" location for the swap file (buffer).
The -s argument specifies the max event size.
The -n argument sets how many events can be present in the ET ring at once.
The other settings are mainly intended to improve the network performance of the tool by setting generous read/write buffers and using the no-delay flag.
Experiment with the settings that seem to work best for your machine.
To do anything useful with the ET ring, we need to stream HPS event data onto it, which can be done using a command like:
hps-recon-producer -p 11111 -h localhost -e 1 -f /tmp/ETBuffer -l ./evio_files.txt -s 20000 -d 10
The text file contains a list of EVIO files that should be compatible with the settings on the client/server (e.g. 2016 data for the example settings from above).
These settings need to match those of the ET start from above or you will not be able to connect.
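One way to keep the two sides in sync is to define the shared settings once in a small shell sketch (the variable names are illustrative; the values are the examples from above):

```shell
# Shared ET connection settings, defined once so the et_start and
# hps-recon-producer command lines cannot drift apart.
ET_PORT=11111
ET_BUF=/tmp/ETBuffer
EVENT_SIZE=20000

# Print the two command lines built from the same variables; run them in
# separate terminals, starting the ET ring before the producer connects.
echo "et_start -rb 8000000 -sb 8000000 -nd -p $ET_PORT -f $ET_BUF -s $EVENT_SIZE -n 1000 -v -d"
echo "hps-recon-producer -p $ET_PORT -h localhost -e 1 -f $ET_BUF -l ./evio_files.txt -s $EVENT_SIZE -d 10"
```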
TODO: Experiment with using broadcast mode to pickup any running ET rings.
For previous runs, the Monitoring Application has been used to run the reconstruction off of the ET ring and produce plots in a custom Java GUI.
This tool has some serious limitations.
With these limitations in mind, a new system was developed around components such as a server, client, and reconstruction (ET) stations. Instead of all the components running in the same system process, each reconstruction station is given its own system process and a dedicated station on the ET ring. The stations run in headless mode (not displaying graphics) and provide real-time plot data through a remote AIDA tree, which can be connected to and browsed/displayed over an RMI connection as the data streams from the ET ring.

The server manages the aggregation of plot and performance data from each reconstruction station. A simple event bus was developed to replace the loop-based system in the old monitoring app. A client provides a command line and console interface for creating, starting, monitoring, stopping, and removing stations managed by the server. Commands are sent to the server as JSON over a TCP/IP socket connection, and the server may send back a JSON response or stream data back to the client (e.g. for streaming station log data).
Histograms are booked and filled within the station's Driver code. The plots are stored in an AIDA tree which is remotely accessible (read only). The server mounts these station trees into its remote tree and performs aggregation of histograms, clouds, and profiles into a combined tree. One to three dimensional histograms are supported, depending on the type (profiles have no 3D type). The combined histogram data can be saved to ROOT or AIDA files.
Display clients such as JAS3 or a Java webapp can connect to the server's remote tree and view both the remote (station) plots and the combined plots in real time.