Twice a day a cron job runs on each xrootd data server and obtains the disk usage of that server. These values are written to files on NFS, and a further crontab job creates the web page showing the current free disk space.
The cron job, run from crontab on each xrootd data server, is
/opt/xrootd/admin/mon_diskspace.sh
It collects the total and free disk space, as well as the inode usage for UFS file systems (ZFS has no inode restrictions).
The disk usage is stored in files:
    /nfs/farm/g/glast/u15/xrootd/diskspace/df_<server>_<YYYYMM>
where <server> is the server name and <YYYYMM> is the year and month the values were collected. For example,
    /nfs/farm/g/glast/u15/xrootd/diskspace/df_wain020_200806
contains all disk usage values for wain020 for June 2008.
Each line in these files shows the disk usage for a particular date. The format is:
    DF <date> <server> <totalSpace> <freeSpace> <%Used> [<inodesFree> <%inodesFree>]
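As an illustration, a minimal collector along these lines might look as follows. This is a sketch only: the mount point, df field positions, and variable names are assumptions, not the contents of the actual mon_diskspace.sh.

```shell
#!/bin/sh
# Sketch of a mon_diskspace.sh-style collector (illustrative, not the real script).

SERVER=$(hostname -s)          # server name, e.g. wain020
TODAY=$(date +%Y%m%d)

# POSIX df output: Filesystem 1024-blocks Used Available Capacity Mounted-on
set -- $(df -kP / | tail -1)   # "/" stands in for the xrootd data partition
TOTAL=$2                       # total space (KB)
FREE=$4                        # free space (KB)
USED_PCT=$5                    # percent used

# One record per run, in the DF format described above
RECORD="DF $TODAY $SERVER $TOTAL $FREE $USED_PCT"
echo "$RECORD"
# The real cron job appends such a record to the monthly file, e.g.
#   /nfs/farm/g/glast/u15/xrootd/diskspace/df_${SERVER}_$(date +%Y%m)
```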
Two xrootd setups are used for Fermi. The system-test xrootd consists of a single data server that provides access to some NFS and test data directories. The main Fermi xrootd cluster contains multiple PB of disk space and holds all of Fermi's data; it is used for reading and writing files. There are three entry points to this cluster. The production redirector is used by users and by production. The test redirector is used to test new xrootd versions and configurations; it runs on the same data servers as the production xrootd. A proxy server is also available that allows writing files from remote sites (e.g. IN2P3) to SLAC.
type | host | alias | subcluster | admin | comment |
---|---|---|---|---|---|
system tests | fermilnx-v07 | glast-xrootd01 | | taylor | |
redirector (production) | fermilnx-v02, fermilnx-v12 | glast-rdr | fermilnx-v01, fermilnx-v03 | taylor | |
test redirector | fermilnx-v03, fermilnx-v06 | glast-test-rdr | fermilnx-v03, fermilnx-v06 | ansible | xrootd name: gltst, gltg (for subcluster) |
proxy | fermilnx-v06 | glast-xrd-xfer | | taylor | |
For the Fermi servers see: NFS/GPFS and Xroot Disk Assets
The production xrootd cluster consists of a set of Solaris servers (wainNNN), Linux servers with a local file system (fermi-xrdNNN), and xrootd servers that access a GPFS file system. The current production setup connects only one of the GPFS servers directly to the redirector. As shown in the figure, each fermi-gpfs server has access to the whole GPFS space, so all of them see the same files. If all fermi-gpfs servers were connected to the redirector, files on the GPFS file system would therefore show up as multiple copies.
To handle the shared file system, a subcluster was introduced; it is currently used by the test xrootd. The subcluster redirector is aware that its xrootd data servers export a shared file system. The setup is shown in the figure below. Clients still connect first to the main redirector (glast-test-rdr). If a file is on GPFS, the client is first redirected to the subcluster redirector, which subsequently redirects it to one of the GPFS data servers.
(Gliffy diagram: test xrootd subcluster setup)
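A two-level hierarchy of this kind can be expressed with xrootd's supervisor role. The fragment below is a hypothetical sketch of such a configuration; the host names, port number, and export path are assumptions, and this is not the actual Fermi configuration.

```
# Top-level redirector (glast-test-rdr)
all.role manager

# Subcluster redirector in front of the GPFS servers
all.role supervisor
all.manager glast-test-rdr 1213

# GPFS data servers join the cluster below the supervisor
all.role server
all.manager glast-test-rdr 1213
all.export /gpfs
```

Because every GPFS server exports the same shared path, grouping them under one supervisor lets the top redirector treat the subcluster as a single source for those files.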
Access to the Fermi xrootd cluster requires user authentication. Authentication and authorization are based on the user name and use the xrootd unix-authentication module. The authorization information (which directory paths a user can read from and write to) is kept in a file that an xrootd server reads and periodically checks for updates.
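For illustration, an xrootd authorization file of this kind maps identities to path prefixes and privileges. The entries below are invented examples (user names, paths, and privilege letters are assumptions, not the actual Fermi authorization file):

```
# <idtype> <id>  <path-prefix>  <privileges>
u glastprd  /glast/mc    a      # hypothetical production account: all access
u jsmith    /glast/data  rl     # hypothetical user: read and list only
```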
The authorization file is created in two steps: