HPS uses a conditions database which is accessible through the DatabaseConditionsManager.
The URL, username and password for connecting to the database can be specified from the command line as Java properties.
The default connection information corresponds to the following command line options:
java -Dorg.hps.conditions.url=jdbc:mysql://hpsdb.jlab.org:3306/hps_conditions \
     -Dorg.hps.conditions.user=hpsuser \
     -Dorg.hps.conditions.password=darkphoton [...]
These options should be provided immediately after the java command as they are global Java properties and not command line options provided to a specific application or program within hps-java.
The default database connection uses a read-only replica of the primary MySQL conditions database at Jefferson Lab. Therefore, when connecting from a computer outside of the jlab.org domain, you will not be able to make any changes to this database. If you need to insert records into the database, e.g. for new calibrations, then it must be done behind the JLab firewall, and you must provide credentials that allow writing to the database (they are not given here!).
Using a Local Conditions Database
Support for SQLite is included in the database conditions manager for running jobs locally without an internet connection.
A db file may be obtained by using the following commands:
wget https://github.com/JeffersonLab/hps-conditions-backup/raw/master/hps_conditions.db.tar.gz
tar -zxvf hps_conditions.db.tar.gz
The local db file can be used by passing this option when running Java:
java -Dorg.hps.conditions.url=jdbc:sqlite:hps_conditions.db [...]
No username or password is required when connecting locally in this way.
In order to create a local SQLite database yourself, you will need to create a snapshot of the MySQL database and convert it to a SQLite db file.
The mysql2sqlite converter script can be used to produce the db file.
It can be downloaded and made executable using these commands:
wget https://raw.githubusercontent.com/dumblob/mysql2sqlite/master/mysql2sqlite
chmod +x mysql2sqlite
You can create a dump of the current conditions database using this command:
mysqldump --skip-extended-insert --compact -u hpsuser --password=darkphoton -h hpsdb.jlab.org --lock-tables=false hps_conditions > hps_conditions.mysql
Now, you can load it into SQLite as follows:
mysql2sqlite hps_conditions.mysql | sqlite3 hps_conditions.db
You should now have an up-to-date local copy of the master conditions database.
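To sanity-check the converted file, the sqlite3 command line shell can be used to list its tables and count the conditions records (a quick check; this assumes sqlite3 is installed and the db file is in the current directory):

```shell
# List the tables in the local conditions snapshot.
sqlite3 hps_conditions.db ".tables"

# Count the conditions records as a rough integrity check.
sqlite3 hps_conditions.db "SELECT COUNT(*) FROM conditions;"
```

If the table list is empty or the query fails, the conversion did not complete successfully.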
The conditions database can be backed up using a command similar to the following:
mysqldump -h hpsdb.jlab.org -ujeremym -pXXXXXX hps_conditions > hps_conditions.sql
... where 'jeremym' should be replaced by your account name and 'XXXXXX' by your password. (Use a plain `>` redirect here; redirecting stderr into the file as well would mix warning messages into the SQL dump.)
To load the database from a backup, the following command would be used.
mysql -h hpsdb.jlab.org -u jeremym -pXXXXXX hps_conditions < hps_conditions.sql
The above command is listed for reference only and you should not try to execute it.
Fully restoring the database from a backup would need to go through a JLAB CCPR, as the accounts we have access to do not have all the proper permissions for doing this.
The conditions configuration is typically done via arguments to command line programs.
The detector and run number to be used can be provided to the job manager, overriding the settings read from input files.
java -jar hps-distribution-bin.jar -d detector_name -R 5772 [args]
Configuration of the EvioToLcio utility is similar.
java -cp hps-distribution-bin.jar org.hps.evio.EvioToLcio -d detector_name -R 5772 [args]
Providing conditions in this way will cause the manager to automatically "freeze" after it initializes so that run numbers and detector header information from the input file will be ignored in the job.
Additionally, tags can be specified to filter out the available conditions records in the job, which is described in the Detector Conditions Tags documentation.
The HPS conditions manager is automatically installed by creating a new instance of the DatabaseConditionsManager class.
new DatabaseConditionsManager();
This will automatically install the manager as the global conditions manager, which can then be accessed using the following method.
DatabaseConditionsManager mgr = DatabaseConditionsManager.getInstance();
Users should not normally need to install their own conditions manager as this is done in the setup of the various job tools, but it may be necessary when writing standalone scripts and command line tools which do not use these classes.
DatabaseConditionsManager.getInstance().setDetector("detector_name", 5772);
HPS adds several features to the lcsim conditions system.
You can add one or more tags for filtering the conditions records. Only those records belonging to the tag will be accessible.
DatabaseConditionsManager.getInstance().addTag("pass0");
The conditions system can be "frozen" after it is initialized, meaning that subsequent calls to set a new detector and run number will be completely ignored.
DatabaseConditionsManager.getInstance().freeze();
This is useful to force the system to load a specific configuration by run number if the actual event data does not have the same run number (or for run 0 events from simulation).
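Putting these pieces together, a standalone tool might initialize the conditions system like this (a sketch only; the detector name, run number, and tag are placeholders, and the import path is assumed to match hps-java's package layout):

```java
import org.hps.conditions.database.DatabaseConditionsManager;

public class ConditionsSetupExample {

    public static void main(String[] args) throws Exception {
        // Creating the instance installs it as the global conditions manager.
        DatabaseConditionsManager manager = new DatabaseConditionsManager();

        // Optionally restrict the visible conditions records to a tag.
        manager.addTag("pass0");

        // Initialize for a specific detector and run number (placeholders here).
        manager.setDetector("detector_name", 5772);

        // Freeze so subsequent detector/run changes from event data are ignored.
        manager.freeze();
    }
}
```

Only the calls shown in this document are used; a real tool would follow this with whatever event processing it needs.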
The conditions system will be initialized for you automatically or configured using switches to the various HPS Java command line tools (exact syntax depends on the tool).
Initializing the Conditions System
Since the conditions system uses global state (there is one setup for the whole process), re-initializing the system from inside your reconstruction job should be avoided.
Conditions information is accessed at the beginning of the job through the Driver class's detectorChanged method.
public void detectorChanged(Detector detector) {
    DatabaseConditionsManager conditionsManager = DatabaseConditionsManager.getInstance();
    EcalChannelCollection channels =
            conditionsManager.getCachedConditions(EcalChannelCollection.class, "ecal_channels").getCachedData();
    System.out.println("got " + channels.size() + " ECal channels");
}
All conditions collections required by a Driver should be loaded in this method to avoid incurring a performance overhead by reading the conditions on every event.
You can also access collections not associated with the current run by providing the collection ID explicitly.
DatabaseConditionsManager conditionsManager = DatabaseConditionsManager.getInstance();
EcalGainCollection gains = new EcalGainCollection();
gains.setConnection(conditionsManager.getConnection());
gains.setTableMetaData(conditionsManager.findTableMetaData("ecal_gains"));
gains.select(1234); /* where the number is a valid collection ID in the database */
This can be used to retrieve reference data that is not accessible in the conditions for the run.
Java Object Class | Java Collection Class | Default Database Table | Description |
---|---|---|---|
BeamEnergy | BeamEnergyCollection | beam_energies | nominal beam energies |
EcalBadChannel | EcalBadChannelCollection | ecal_bad_channels | ECal bad channel list |
EcalCalibration | EcalCalibrationCollection | ecal_calibrations | per channel ECal pedestals and noise |
EcalChannel | EcalChannelCollection | ecal_channels | ECal channel information including map of DAQ to physical channels |
EcalGain | EcalGainCollection | ecal_gains | per channel ECal gains |
EcalLed | EcalLedCollection | ecal_leds | per crystal LED configuration |
EcalLedCalibration | EcalLedCalibrationCollection | ecal_led_calibrations | per crystal LED calibration information (from calibration run) |
EcalPulseWidth | EcalPulseWidthCollection | ecal_pulse_widths | ECal signal pulse width (currently unused in recon) |
EcalTimeShift | EcalTimeShiftCollection | ecal_time_shifts | ECal signal time shift (currently unused in recon) |
SvtAlignmentConstant | SvtAlignmentConstantCollection | svt_alignment_constants | SVT alignment constants in Millepede format; may be disabled using -DdisableSvtAlignmentConstants |
SvtBadChannel | SvtBadChannelCollection | svt_bad_channels | SVT bad channel list |
SvtBiasConstant | SvtBiasConstantCollection | svt_bias_constants | SVT bias setting for a time range |
SvtCalibration | SvtCalibrationCollection | svt_calibrations | per channel SVT noise and pedestal measurements |
SvtChannel | SvtChannelCollection | svt_channels | SVT channel information |
SvtDaqMapping | SvtDaqMappingCollection | svt_daq_map | SVT mapping of DAQ to physical channels |
SvtGain | SvtGainCollection | svt_gains | per channel SVT gains |
SvtMotorPosition | SvtMotorPositionCollection | svt_motor_positions | SVT motor position in mm |
SvtShapeFitParameters | SvtShapeFitParametersCollection | svt_shape_fit_parameters | SVT parameters for the signal fit |
SvtT0Shift | SvtT0ShiftCollection | svt_t0_shifts | SVT T0 (first sample) shifts |
SvtTimingConstants | SvtTimingConstantsCollection | svt_timing_constants | SVT timing configuration constants including offset and phase |
TestRunSvtChannel | TestRunSvtChannelCollection | test_run_svt_channels | test run SVT channel information |
TestRunSvtDaqMapping | TestRunSvtDaqMappingCollection | test_run_svt_daq_map | test run SVT DAQ mapping |
TestRunSvtT0Shift | TestRunSvtT0ShiftCollection | test_run_svt_t0_shifts | test run SVT T0 shift |
Data Tables
Each type of condition has an associated database table which contains records with conditions information plus a few additional pieces of information. These tables are modeled by the ConditionsObjectCollection class.
mysql> describe beam_energies;
+---------------+---------+------+-----+---------+----------------+
| Field         | Type    | Null | Key | Default | Extra          |
+---------------+---------+------+-----+---------+----------------+
| id            | int(11) | NO   | PRI | NULL    | auto_increment |
| collection_id | int(11) | NO   |     | NULL    |                |
| beam_energy   | double  | NO   |     | NULL    |                |
+---------------+---------+------+-----+---------+----------------+
3 rows in set (0.00 sec)
The id is the row ID used to uniquely identify the record. The collection_id associates a set of records together into a collection. Every data table has these two fields plus additional columns with the conditions data.
The conditions table associates collections with a run number range.
mysql> describe conditions;
+---------------+--------------+------+-----+-------------------+-----------------------------+
| Field         | Type         | Null | Key | Default           | Extra                       |
+---------------+--------------+------+-----+-------------------+-----------------------------+
| id            | int(11)      | NO   | PRI | NULL              | auto_increment              |
| run_start     | int(11)      | NO   |     | NULL              |                             |
| run_end       | int(11)      | NO   |     | NULL              |                             |
| updated       | timestamp    | NO   |     | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
| created       | datetime     | NO   |     | NULL              |                             |
| tag           | varchar(256) | YES  |     | NULL              |                             |
| created_by    | varchar(255) | YES  |     | NULL              |                             |
| notes         | longtext     | YES  |     | NULL              |                             |
| name          | varchar(40)  | NO   |     | NULL              |                             |
| table_name    | varchar(50)  | NO   |     | NULL              |                             |
| collection_id | int(11)      | NO   |     | NULL              |                             |
+---------------+--------------+------+-----+-------------------+-----------------------------+
11 rows in set (0.01 sec)
The run_start and run_end give a range of run numbers for which the conditions are valid. These can be the same number to specify a single run.
The table_name gives the name of the table containing the conditions data.
The collection_id gives the collection ID to load from the table.
This table is modeled by the ConditionsRecord class which is accessible via the DatabaseConditionsManager.
When multiple collections of the same type are valid for the current run, the most recently added one will be used by default.
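The run-range lookup described above can be reproduced by hand against a local SQLite snapshot (a sketch; this assumes the hps_conditions.db file produced earlier, and run 5772 is just the example run used throughout this page):

```shell
# Find the conditions records valid for run 5772, newest first.
sqlite3 hps_conditions.db \
  "SELECT name, table_name, collection_id FROM conditions \
   WHERE run_start <= 5772 AND run_end >= 5772 \
   ORDER BY created DESC;"
```

The first row per name mirrors what the manager would pick by default when several collections of the same type match the run.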
New conditions classes should follow a basic template which provides information about its associated database tables and columns.
For example, here is the definition for the BeamEnergy condition.
@Table(names = {"beam_energies"})
public final class BeamEnergy extends BaseConditionsObject {

    public static final class BeamEnergyCollection extends BaseConditionsObjectCollection<BeamEnergy> {
    }

    @Field(names = {"beam_energy"})
    public Double getBeamEnergy() {
        return this.getFieldValue("beam_energy");
    }
}
The @Table annotation on the class maps the class to its possible database tables. Typically, this is a single value.
The @Field annotation is applied to a method which should be mapped to a column in the database. The method must be public.
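Following the BeamEnergy template, a condition with several columns would simply add one annotated getter per column. The sketch below is purely illustrative: the class name, table name, and column names are invented, and the import paths are assumed to match hps-java's package layout.

```java
import org.hps.conditions.api.BaseConditionsObject;
import org.hps.conditions.api.BaseConditionsObjectCollection;
import org.hps.conditions.database.Field;
import org.hps.conditions.database.Table;

// Hypothetical condition mapped to an imaginary "example_constants" table.
@Table(names = {"example_constants"})
public final class ExampleConstant extends BaseConditionsObject {

    // The nested collection class follows the BeamEnergyCollection pattern.
    public static final class ExampleConstantCollection
            extends BaseConditionsObjectCollection<ExampleConstant> {
    }

    // Each public @Field method maps to one column in the database table.
    @Field(names = {"channel_id"})
    public Integer getChannelId() {
        return this.getFieldValue("channel_id");
    }

    @Field(names = {"constant_value"})
    public Double getConstantValue() {
        return this.getFieldValue("constant_value");
    }
}
```

The id and collection_id columns are handled by the base class, so only the condition-specific columns need @Field accessors.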
An optional @Converter annotation can be used to override the default conversion from the database.
@Converter(converter = ConditionsRecordConverter.class)
This is only used in a few special cases.