
Online Monitoring GUI

Using our GUI

More info here

Write a plug-in to our GUI

More info here

Write their own application (reads from shared memory)
Online Monitoring and Simulation Using Files

The online monitoring system will interface the DAQ system to the
monitoring software through a shared memory interface. The two sides
communicate with each other via POSIX message queues. The DAQ system
fills the shared memory buffers with events, which are also known as
transitions. The DAQ system notifies the monitoring software that each
buffer is ready by placing a message in the monitor output queue
containing the index of the newly available event buffer.

When the monitoring software is finished with a shared memory buffer,
it releases the buffer by returning the message via its output queue.
The DAQ monitoring system does not store events: if no shared memory
buffers are available, any events that go past the monitoring station
during that time will not be monitored.

To facilitate the development and testing of monitoring software, SLAC
has developed a file server that mimics the DAQ system. Instead of
live data from the DAQ system, it reads a file and presents it to the
monitoring software via the shared memory interface the same way that
the DAQ system does. One difference is that the file server will not
drop events if the monitoring software takes too long. It will wait
indefinitely for a free buffer for the next event in the file.

To use the examples you will need two shells: one for the server and
one for the client. The executables are in
release/build/pdsdata/bin/i386-linux-dbg. Assuming both shells are in
the executable directory, first start the server with something like:

./xtcmonserver -f ../../../../opal1k.xtc -n 4 -s 0x700000 -r 120 -p yourname -c 1 [-l]

This points the server at the example xtc file and specifies four
message buffers and the size of each message buffer. The -r option
specifies the rate at which events will be sent, in cps, limited only by
the I/O and CPU speeds of the platform you are running on. The -p option
supplies a "partition tag" string that is added to the name of the message
queue, so that multiple people can use this mechanism without a "name
collision". If only one person is using a machine, just supply a
placeholder name.

If there is more than one user on the computer you are using, you
can use the partition tag parameter to resolve conflicts. If you use
your login name as the partition tag, then you will be guaranteed not
to collide with anyone else.

The "-c" option tells the server how many clients it should allow to connect
at the same time. If you are using it alone, you can enter a 1.

The optional argument, "-l", will cause the server to loop infinitely,
repetitively supplying all the events in the file.

The buffers referred to above live in shared memory. If you do not need
the shared memory to absorb any latency, you can use a smaller number of
buffers. Using only one buffer serializes the operation, because only one
side can read or write at a time, so you should use at least two buffers.
The buffer size must be large enough to hold the largest event that will
be handled (the 0x700000 used above is 7 MiB).

The server will load the shared memory buffers with the first
events in the file and then wait for the client to start reading the events.

Once the server is started, you can start the client side in the
second shell with:

./xtcmonclientexample -p yourname -c clientId

The first parameter that must be given to the client software is the
partition tag. This must exactly match the partition tag
given to the server side software.

The parameter given with the "-c" option is the client ID. If you are
the only user, then just give it a value of zero.

Once running, the server will keep supplying the client with events
until it reaches the end of the file. It will then exit, unless the
optional "-l" command line parameter was given. If the looping option is
given on the command line, the server will endlessly repeat the sequence
of L1Accept (Laser shot) events in the file and will not terminate
until the process is killed.

Sample output from running the above can be found here.

To write your own monitoring software, all you have to do is subclass the
XtcMonitorClient class and override the XtcMonitorClient::processDgram()
method to supply your own client-side processing of the datagram (Dgram)
events in the XTC stream or file supplied by the file server. Below is the
example implemented by XtcMonClientExample.cc.

class MyXtcMonitorClient : public XtcMonitorClient {
  public:
    virtual int processDgram(Dgram* dg) {
      // Print the transition type, timestamp, and payload size of each datagram.
      printf("%s transition: time 0x%x/0x%x, payloadSize 0x%x\n",
             TransitionId::name(dg->seq.service()),
             dg->seq.stamp().fiducials(),
             dg->seq.stamp().ticks(),
             dg->xtc.sizeofPayload());
      // Walk the xtc container tree starting at depth 0; myLevelIter is the
      // XtcIterator subclass defined in XtcMonClientExample.cc.
      myLevelIter iter(&(dg->xtc), 0);
      iter.iterate();
      return 0;   // zero means: call processDgram() again for the next datagram
    }
};

The returned integer controls whether the method will be called again
with more data. If the method returns a non-zero value, it will not be
called again. This allows the client to end the interaction cleanly if
it chooses to do so.
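
For example, a client that only wants to look at a fixed number of datagrams can use the return value to stop the callbacks. This is a minimal variation of the example class above; the datagram-count limit is purely illustrative.

class LimitedXtcMonitorClient : public XtcMonitorClient {
  public:
    LimitedXtcMonitorClient(unsigned maxDgrams) : _remaining(maxDgrams) {}
    virtual int processDgram(Dgram* dg) {
      printf("%s transition: payloadSize 0x%x\n",
             TransitionId::name(dg->seq.service()),
             dg->xtc.sizeofPayload());
      // Return non-zero after the last wanted datagram so that processDgram()
      // is not called again.
      return (--_remaining == 0) ? 1 : 0;
    }
  private:
    unsigned _remaining;
};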

This method is the callback invoked by the monitoring client when it
receives a buffer in shared memory. The example above uses an XtcIterator
subclass to drill down into the data and label it. You can provide your
own functionality there instead.
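
For reference, an XtcIterator subclass along the lines of the myLevelIter used above typically looks like the sketch below. This is only an outline, assuming the usual XtcIterator interface of a virtual process(Xtc*) hook driven by iterate(); the full version, which also prints the detector source and type information, is in XtcMonClientExample.cc.

class MyLevelIter : public XtcIterator {
  public:
    MyLevelIter(Xtc* xtc, unsigned depth) : XtcIterator(xtc), _depth(depth) {}
    int process(Xtc* xtc) {
      if (xtc->contains.id() == TypeId::Id_Xtc) {
        // A container xtc: recurse one level deeper.
        MyLevelIter deeper(xtc, _depth + 1);
        deeper.iterate();
      } else {
        // A leaf xtc: this is where detector-specific data would be decoded.
        printf("%*sfound %s, %d bytes\n", (int)(2 * _depth), "",
               TypeId::name(xtc->contains.id()), xtc->sizeofPayload());
      }
      return 1;   // non-zero: continue with the next xtc at this level
    }
  private:
    unsigned _depth;
};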

Write their own application, offline analysis style (reads from a file)

More info here

XTC playback

More info here
