The entry table used to be part of a separate database on a completely separate database server.  However, with consolidation and changes to the SLAC computing infrastructure, it was decided to move the table into the rd_releasemgr databases, even though it is used by processes from the LSF and workflow portions of the RM as well.

The entry table is the log for the RM processes.  All of the RM software writes to this table to record messages and output that track the flow of the various processes.  The data in this table are kept for 5 days before being deleted; older data are removed to prevent the database from growing too large.  Currently, there are about 2 million rows of log data in the table at any given time.  This can increase when a large number of builds go through simultaneously.
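
As a rough sketch of the retention mechanism (not taken from the actual RM cleanup job; the exact schedule and SQL dialect are assumptions, MySQL-style syntax shown here), the 5 day purge could be done with something like:

DELETE FROM entry WHERE ts < NOW() - INTERVAL 5 DAY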

Note:  I have considered increasing the time logs are kept, as the 5 day limit sometimes causes problems.  The limit was imposed back when we were building much more often and there was also a lot more diagnostic (debugging) output being generated, dating from when the software was developed by Navid.  At that time, the typical 5 day row count would be on the order of 17-20 million lines, nearly an order of magnitude larger than what we have today.  The database is completely capable of handling the larger table size.  If we needed a longer look back in the logs, increasing the keep time to 10 or 15 days is not unreasonable.

The columns in the entry table are:

Notes on looking up data in the table

  1. The tuple (ts, ordering) is used as an index on this table (a sketch of the likely definition follows this list).
  2. If you want your data to be displayed in proper time order, be sure to include the ORDER BY ts, ordering clause at the end of the SQL command.  If not, the ordering of the output is not guaranteed and can be confusing.  It typically comes back properly ordered by ts but not by ordering.
  3. To speed up queries, it is useful to include a filter of ts > '<date time>' in the WHERE clause of the SQL command, especially if you know the approximate time of the log entries you're looking for.  As ts is part of the index, this allows the system to optimize the query and avoid scanning the entire dataset.
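
For reference, the index mentioned in note 1 would look roughly like the following.  The index name and exact definition are assumed; only the (ts, ordering) column pair comes from the note above:

CREATE INDEX entry_ts_ordering ON entry (ts, ordering)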

As an example, to find the complete log for a given build, I typically run the following SQL command:

SELECT * FROM entry WHERE ts > '<known start time>' AND subId = <buildID from build table> ORDER BY ts, ordering

This returns 2,000-4,000 lines of log entries for a typical build.
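
A related query that can be useful for gauging log volume is a per-build row count over a recent window.  This is only a sketch using the columns already mentioned (ts, subId); the alias n and the start-time placeholder are illustrative:

SELECT subId, COUNT(*) AS n FROM entry WHERE ts > '<known start time>' GROUP BY subId ORDER BY n DESC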