Blog from April, 2008

Reason for change

To comply with SLAC's new database password policy we must change the database passwords every 6 months. We have switched to using Oracle wallet to make this process as painless as possible. Oracle wallet will enable us to change the Oracle password without any application downtime.
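
For reference, here is a minimal sketch of how an application can connect through Oracle wallet with the JDBC thin driver, so that no password is embedded in the code. The wallet directory, TNS directory and GLAST_DB alias below are placeholders, not our actual configuration, and the Oracle PKI jars must also be on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;

public class WalletConnect {
    public static void main(String[] args) throws Exception {
        // Register the Oracle driver (needed before JDBC 4 auto-loading).
        Class.forName("oracle.jdbc.OracleDriver");
        // Point the driver at the wallet holding the stored credentials.
        System.setProperty("oracle.net.wallet_location",
            "(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/path/to/wallet)))");
        // Directory containing tnsnames.ora with the (placeholder) GLAST_DB alias.
        System.setProperty("oracle.net.tns_admin", "/path/to/tns");
        // The "/@alias" form takes both username and password from the wallet,
        // so rotating the password only means updating the wallet -- no code
        // change and no application restart.
        Connection conn = DriverManager.getConnection("jdbc:oracle:thin:/@GLAST_DB");
        System.out.println("Connected: " + !conn.isClosed());
        conn.close();
    }
}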

We have one remaining application (the data catalog crawler) which needs to be updated to use Oracle wallet. We can then change the last Oracle password used by Pipeline II and the data catalog. This must be done by April 30.

Test Procedure

The change to the data catalog crawler has been tested on the DEV database. We have previously tested changing passwords with Oracle wallet and do not anticipate any problems. We will change the password on the TEST, DEV and PROD databases (in that order).

Rollback procedure

It will be easy to roll back to the old version of the data catalog crawler should any problems occur.

Related JIRA

SSC-34@JIRA

Data Catalog Crawler version 1.2


Data Catalog Client 1.1.1


Reason for change

Miscellaneous small changes to improve the robustness of the Tomcat servers and to improve logging to aid in diagnosing problems.

Test Procedure

These changes have all been tested on the dev server http://glast-tomcat03.slac.stanford.edu:8080/

Rollback procedure

It will be easy to roll back these changes should any problems occur.

Related JIRA

SSC-32@JIRA

Pipeline Server 1.1


Reason for change

The group manager has been updated to re-enable automatic synchronization with the Stanford GLAST database (broken when a new password scheme was introduced on campus) and to remove hard-wired Oracle passwords. In addition, this version produces a file containing a list of all GLAST users, which xrootd requires to restrict read access to data to GLAST users. Finally, extra information useful for contacting people during operations (pager #, home phone #, etc.) is copied into the SLAC copy of the database.
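
As a rough illustration of the user-list export (the table and column names here are hypothetical, not the actual schema, and the output path is a placeholder), the generated file is simply one username per line, which xrootd can then consume as a read-access list:

import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class UserListExport {
    public static void main(String[] args) throws Exception {
        // Connect via the wallet as described above (placeholder alias).
        Connection conn = DriverManager.getConnection("jdbc:oracle:thin:/@GLAST_DB");
        Statement stmt = conn.createStatement();
        // Hypothetical table and column -- the real schema differs.
        ResultSet rs = stmt.executeQuery(
            "SELECT username FROM glast_users ORDER BY username");
        // One username per line, in a form xrootd can read as an access list.
        PrintWriter out = new PrintWriter("/path/to/glast-users.txt");
        while (rs.next()) {
            out.println(rs.getString(1));
        }
        out.close();
        conn.close();
    }
}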

Test Procedure

These changes have all been tested on the dev server http://glast-tomcat03.slac.stanford.edu:8080/GroupManager

Rollback procedure

It will be easy to roll back these changes should any problems occur.

Related JIRA

SSC-23@JIRA

Pipeline Server 1.1


Reason for change

There are no functional changes in this release; it only includes a few performance enhancements implemented during testing of the new Oracle database. In addition, we will reconfigure mail delivery from batch jobs to go via GLAST's own SMTP mail server instead of SLAC's Exchange server, to reduce the load on Exchange and isolate us from any Exchange problems/outages.
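
The mail change itself amounts to pointing the JavaMail session used by the batch jobs at GLAST's own SMTP host instead of Exchange; a minimal sketch (the host and addresses below are placeholders, not our actual configuration):

import java.util.Properties;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

public class BatchMail {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder host: GLAST's own SMTP server rather than SLAC Exchange.
        props.put("mail.smtp.host", "smtp.glast.example.org");
        Session session = Session.getInstance(props);

        MimeMessage msg = new MimeMessage(session);
        msg.setFrom(new InternetAddress("pipeline@example.org"));
        msg.addRecipient(Message.RecipientType.TO,
                new InternetAddress("operator@example.org"));
        msg.setSubject("Batch job finished");
        msg.setText("Job completed successfully.");
        Transport.send(msg);
    }
}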

It would be good to get these changes in before the L1 stress testing starts as they may impact performance (hopefully for the better).

Test Procedure

These changes have all been extensively tested on the TEST and DEV pipelines.

Rollback procedure

Any of these changes can quickly and easily be backed out should unanticipated problems appear when the code is moved to production. Backing out the e-mail delivery change will need to be coordinated with Teresa Downey in SCCS, but can be done with one hour's notice.

Related JIRA

SSC-21@JIRA

Pipeline Server 1.1


Reason for change

Increase the disk space of the GLAST Xrootd cluster. The new server, wain017, has 32TB of disk space; adding it will increase the total disk space of the xrootd cluster from about 75TB to 107TB.

Testing

An xrootd server has been run on wain017, and files were written to and read from the server without any problems. Checksumming of files has also been tested.

Rollback

The xrootd server on wain017 could be stopped, and any files that were written to it would then have to be copied to the other servers. This is only possible if a small amount of data has been written to wain017, as the other GLAST xrootd servers have only a small amount of free space available.

CCB Jira

SSC-20@JIRA
SSC-38@JIRA
The SSC-38 JIRA covers wain018 - wain021; the same procedure used for wain017 will be employed to add those servers to the GLAST Xrootd cluster.

Details

The procedure for adding wain017 to the xrootd cluster is:

  • Add wain017 as a read-only server.
  • Check reading from wain017 using the GLAST redirector.
  • Restart the xrootd on wain017 as writable.

wain017 is configured the same as the other wain xrootd servers:

  • the same xrootd config file
  • the same checksumming scripts
  • the same xrootd version (20071101-0808p2)

Reason for change

Over the last few months we have noticed that our existing Oracle servers are being pushed to 100% CPU utilization by the load we are putting on them, often resulting in poor performance of the pipeline server and web applications. We have purchased two new servers which will provide the following benefits:

  1. Support for 64 simultaneous threads of execution (up from the current 2). This will allow us to support the expected load from many people using the web interfaces at the same time as we are performing data processing.
  2. Faster and more reliable RAID 10 disks to improve I/O performance
  3. Two redundant servers to provide failover in case one server fails

The new servers are running the same OS and Oracle versions as our current production setup, so we do not anticipate any compatibility problems. We have done extensive testing of the performance and compatibility of the new servers, as detailed below.

Oracle hardware details

Testing

Scalability Testing

We have performed tests to verify that we can really use all of the available threads in parallel. We see good scaling of total throughput as we add extra parallel threads.

Performed 64 units of work in 696,047ms using 1 threads
Performed 64 units of work in 348,767ms using 2 threads
Performed 64 units of work in 223,745ms using 3 threads
Performed 64 units of work in 169,165ms using 4 threads
Performed 64 units of work in 139,516ms using 5 threads
Performed 64 units of work in 118,259ms using 6 threads
Performed 64 units of work in 103,766ms using 7 threads
Performed 64 units of work in 89,952ms using 8 threads
Performed 64 units of work in 85,995ms using 9 threads
Performed 64 units of work in 77,336ms using 10 threads
Performed 64 units of work in 73,032ms using 11 threads
Performed 64 units of work in 69,325ms using 12 threads
Performed 64 units of work in 60,347ms using 13 threads
Performed 64 units of work in 61,213ms using 14 threads
Performed 64 units of work in 60,428ms using 15 threads
Performed 64 units of work in 51,031ms using 16 threads
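
For context, a test harness along these lines can produce measurements like those above (a simplified sketch; doWork() is a placeholder for the real database workload, not our actual test code):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ScalingTest {
    static final int UNITS = 64; // fixed total amount of work

    // Stand-in for one database-bound unit of work.
    static void doWork() {
        // e.g. run a representative batch of queries against the new server
    }

    public static void main(String[] args) throws Exception {
        for (int threads = 1; threads <= 16; threads++) {
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            long start = System.currentTimeMillis();
            for (int i = 0; i < UNITS; i++) {
                pool.execute(new Runnable() {
                    public void run() { doWork(); }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
            long elapsed = System.currentTimeMillis() - start;
            System.out.println("Performed " + UNITS + " units of work in "
                    + elapsed + "ms using " + threads + " threads");
        }
    }
}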

Stress Testing

We have done extensive testing of the new database configuration using the Pipeline II test server. We have run 15,000 real jobs and over 100,000 simulated jobs (when simulating jobs we did not actually submit any real batch jobs, but provided the same load to the pipeline server and database as when we are running real batch jobs). At the same time as we were running the pipeline server we also ran various data ingest jobs to simulate the load of storing trending data into the database. The pipeline and trending ingest are the most database-intensive activities that we perform.

We were able to run 1500 simulated MC jobs continuously for prolonged periods of time, and were able to ingest one orbit's worth of trending data in a little over 1 minute.

Failover Testing

We have tested the ability to fail over to the backup database if the primary database fails, and to resync the primary and secondary databases. This procedure will be used if the primary database becomes inoperable for an extended period due to hardware or software failure. Currently failover is a manual operation requiring an Oracle admin to designate the backup server as "primary". No change is required to GLAST software to switch over to the backup database.

Switchover methodology

We propose to switch over to the new Oracle databases on Monday, April 14. We will perform the following steps:

Starting midnight Sunday

  1. Full backup of glast-oracle01 (to NFS disk)

Starting 8am Monday

  1. Shut down GLAST applications
  2. Shut down the glast-oracle01 database. All GLAST database access will be lost at this time.
  3. Restore into glast-oracle03
  4. Start glast-oracle03 as the primary database
  5. Switch glast-oracle01 to be a DNS alias for glast-oracle03
  6. Back up glast-oracle03 to NFS disk

Approximately 8pm Monday

  1. GLAST database access restored. GLAST applications can be restarted.
  2. Restore backup to glast-oracle04
  3. Bring up glast-oracle04 as physical standby to glast-oracle03

Tuesday 8am

  1. Short DB outage to switch glast-oracle03 to max availability mode