
| path | size [TB] | #files | Notes |
|------|-----------|--------|-------|
| /glast/Data | 785 | 807914 | |
| /glast/Data/Flight | 767 | 792024 | |
| /glast/Data/Flight/Level1/ | 713 | 535635 | 670 TB registered in dataCat |
| /glast/mc | 82 | 11343707 | |
| /glast/mc/ServiceChallenge/ | 56 | 6649775 | |
| /glast/Scratch | 51 | 358511 | ~50 TB recovery possible |
| /glast/Data/Flight/Reprocess/ | 50 | 226757 | ~20 TB recovery possible |
| /glast/level0 | 13 | 2020329 | |
| /glast/bt | 4 | 760384 | |
| /glast/test | 2 | 2852 | |
| /glast/scratch | 1 | 51412 | |
| /glast/mvmd5 | 0 | 738 | |
| /glast/admin | 0 | 108 | |
| /glast/ASP | 0 | 19072 | |


| path | size [GB] | #files | Notes |
|------|-----------|--------|-------|
| /glast/Data/Flight/Reprocess/P120 | 22734 | 65557 | removal candidate |
| /glast/Data/Flight/Reprocess/P110 | 14541 | 49165 | removal candidate |
| /glast/Data/Flight/Reprocess/P106-LEO | 8690 | 1025 | |
| /glast/Data/Flight/Reprocess/P90 | 3319 | 2004 | removal candidate |
| /glast/Data/Flight/Reprocess/CREDataReprocessing | 631 | 22331 | |
| /glast/Data/Flight/Reprocess/P115-LEO | 533 | 1440 | |
| /glast/Data/Flight/Reprocess/P110-LEO | 524 | 1393 | removal candidate |
| /glast/Data/Flight/Reprocess/P120-LEO | 503 | 627 | removal candidate |
| /glast/Data/Flight/Reprocess/P105 | 264 | 33465 | |
| /glast/Data/Flight/Reprocess/P107 | 212 | 4523 | |
| /glast/Data/Flight/Reprocess/P116 | 109 | 25122 | |
| /glast/Data/Flight/Reprocess/Pass7-repro | 22 | 16 | removal candidate |
| /glast/Data/Flight/Reprocess/P100 | 8 | 20089 | removal candidate |


| path | size [GB] | #files | Notes |
|------|-----------|--------|-------|
| /glast/mc/ServiceChallenge | 57458 | 6649775 | |
| /glast/mc/OpsSim | 24691 | 4494890 | |
| /glast/mc/DC2 | 1818 | 136847 | |
| /glast/mc/OctoberTest | 41 | 29895 | |
| /glast/mc/XrootTest | 33 | 18723 | removal candidate |
| /glast/mc/Test | 14 | 13403 | removal candidate |


The idea is to use HPSS as a storage layer from which xrootd transparently retrieves files on demand. Wilko estimates the system can deliver 30-50 TB/day from tape. This is comparable to the rate needed for a merit reprocessing and so should not be a major impediment; for scale, restaging the full ~713 TB Level1 sample at that rate would take roughly two to three weeks. In this model, we would remove the big files from disk after making two tape copies; they would be retrieved back into a disk buffer when needed. We would thus have a relatively fixed disk buffer and a growing tape presence.

Here are some thoughts on how to implement the new storage model.
xrootd has to be configured to automatically stage a missing file from tape to disk. This mechanism is well understood and in production for BaBar. However, instead of having clients stage files in an essentially random order, it will be more efficient to sort the files by tape and stage them in that order before the clients need them; a sketch of that ordering follows.
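As an illustration, the tape-ordering step could look like the following Python sketch. It is a minimal sketch under stated assumptions: the tape volume and on-tape position of each file are taken as already known (obtained from HPSS metadata by some site-specific means), and `hsi stage` is used as a placeholder for whatever command actually triggers a stage.

```python
#!/usr/bin/env python
"""Sketch: prestage files volume by volume, in on-tape order."""
import subprocess
from collections import defaultdict

def prestage_in_tape_order(locations, dry_run=True):
    """locations: dict mapping HPSS path -> (tape_volume, position)."""
    by_volume = defaultdict(list)
    for path, (volume, position) in locations.items():
        by_volume[volume].append((position, path))
    # Visit one tape at a time and read its files in on-tape order,
    # so each volume is mounted once and read sequentially.
    for volume in sorted(by_volume):
        for _, path in sorted(by_volume[volume]):
            cmd = ["hsi", "stage", path]   # placeholder stage command
            print(" ".join(cmd))
            if not dry_run:
                subprocess.run(cmd, check=True)
```

Run with `dry_run=True` first to inspect the resulting stage order before issuing any commands.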

If the xrootd cluster is retrieving files from HPSS, a policy is needed for purging files from disk. Initially I assume that purging will be run manually, given the files to purge either as an explicit list or as a regular expression that filenames are matched against. Purging will be done before and during any processing that requires a large amount of data to be staged.
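A minimal sketch of such a manual purge tool, assuming the staged copies live under a server-local data root (the `/xrootd/glast` path below is hypothetical). Only the disk copy is removed; the tape copies in HPSS are untouched.

```python
#!/usr/bin/env python
"""Sketch: purge staged disk copies by file list or regular expression."""
import os
import re

DATA_ROOT = "/xrootd/glast"   # assumed server-local data path

def purge(file_list=None, pattern=None, dry_run=True):
    """Remove files named in file_list or matching pattern under DATA_ROOT."""
    regex = re.compile(pattern) if pattern else None
    wanted = set(file_list or [])
    freed = 0
    for dirpath, _, names in os.walk(DATA_ROOT):
        for name in names:
            path = os.path.join(dirpath, name)
            if path in wanted or (regex and regex.search(path)):
                freed += os.path.getsize(path)
                if not dry_run:
                    os.remove(path)
    print("%s %.1f GB" % ("would free" if dry_run else "freed", freed / 1e9))
```

For example, `purge(pattern=r"/Reprocess/P120/")` with `dry_run=True` would report how much space removing the P120 candidate frees without deleting anything.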

Monitoring should be set up to record how much data is being staged and how much disk space remains in xrootd.
A warning should be sent when the disk space is close to exhaustion. One has to look at the total disk space across the whole xrootd cluster, since some servers will be full while others still have space.
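A sketch of the aggregate space check, assuming the data partitions are visible to the script (the mount points and the 10% threshold below are placeholders). In practice the per-server numbers would be gathered centrally and the warning wired into the alerting system.

```python
#!/usr/bin/env python
"""Sketch: warn when total xrootd disk space approaches exhaustion."""
import os

PARTITIONS = ["/xrootd/data1", "/xrootd/data2"]   # assumed mount points
WARN_FRACTION = 0.10   # warn when < 10% of the total space is free

def check_space(partitions=PARTITIONS):
    total = free = 0
    for mount in partitions:
        st = os.statvfs(mount)
        total += st.f_blocks * st.f_frsize
        free += st.f_bavail * st.f_frsize
    if free < WARN_FRACTION * total:
        # Sum over *all* partitions: individual servers may already be
        # full while the cluster as a whole still has room.
        print("WARNING: only %.1f TB of %.1f TB free" % (free / 1e12, total / 1e12))
    return free, total
```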

Currently files are backed up as single tape copies. To create dual tape copies, a new HPSS file family has to be created, and existing files have to be re-migrated into it. One should also consider a dedicated file family for just the recon files: currently they are (or might be) mixed on the same tapes with all the other L1 file types, which could make retrieval less efficient.

In summary, the steps outlined above are:

  1. Deploy a new xrootd version (the current version supports only an old, deprecated interface to HPSS).
  2. Configure xrootd to stage files from HPSS to disk when they are missing.
  3. Set up monitoring of the staging (this partly comes with xrootd).
  4. Set up alerting (presumably Nagios) to send warnings when the xrootd disk space is about to fill up.
  5. Develop code to allow purging files using file lists or regular expressions.
  6. Set up a new dual-copy file family.
  7. Re-migrate files to have dual copies.
  8. Develop tools to sort files by tape and prestage them in that order, to increase HPSS throughput.