Oracle has pulled a Scrooge on us, voiding our November order for 320 TB of Thors placed just before they were discontinued; we didn't make it in time. Now we need to explore options and plans for our disk storage, and also to get through this particular crunch: we have some 55 TB free at the time of writing, and owe 128 TB to KIPAC and BABAR!
Possible elements of the plan:
We currently have 1.05 PB of space in xrootd. Some 640 TB of that is taken up by Level 1 data, and 90% of that space is occupied by recon, svac, and cal tuples.
[added by Tom]
Data current as of 12/8/2010.
Current xroot holdings:
| Quantity | Value | Notes |
|---|---|---|
| Total Space | 1043 TB | 35 servers (wain006 removed) |
| xrootd overhead | 21 TB | (disk 'buffer' space) |
| Available Space | 1022 TB | |
| Space used | 966.778 TB | 95% |
| Space free | 55.580 TB | 5% |
| Level 1 | 783 GB/day | (averaged over period since 4 Aug 2008) |
Therefore, with current holdings and usage, there is sufficient space for 71 days of Level 1 data (running out ~17 Feb 2011)
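The 71-day figure follows directly from the free space and the daily Level 1 rate; a quick arithmetic check (assuming 783 GB/day ≈ 0.783 TB/day and reading 12/8/2010 as 8 Dec 2010):

```python
from datetime import date, timedelta

free_tb = 55.580         # free xrootd space as of 8 Dec 2010
rate_tb_per_day = 0.783  # Level 1 accumulation, 783 GB/day

days_left = round(free_tb / rate_tb_per_day)
run_out = date(2010, 12, 8) + timedelta(days=days_left)

print(days_left, run_out)  # 71 2011-02-17
```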
| path | size [TB] | #files | Notes |
|---|---|---|---|
| /glast/Data | 785 | 807914 | |
| /glast/Data/Flight | 767 | 792024 | |
| /glast/Data/Flight/Level1/ | 713 | 535635 | 670 TB registered in dataCat |
| /glast/mc | 82 | 11343707 | |
| /glast/mc/ServiceChallenge/ | 56 | 6649775 | |
| /glast/Scratch | 51 | 358511 | ~50 TB recovery possible |
| /glast/Data/Flight/Reprocess/ | 50 | 226757 | ~20 TB recovery possible |
| /glast/level0 | 13 | 2020329 | |
| /glast/bt | 4 | 760384 | |
| /glast/test | 2 | 2852 | |
| /glast/scratch | 1 | 51412 | |
| /glast/mvmd5 | 0 | 738 | |
| /glast/admin | 0 | 108 | |
| /glast/ASP | 0 | 19072 | |
| path | size [GB] | #files | Notes |
|---|---|---|---|
| /glast/Data/Flight/Reprocess/P120 | 22734 | 65557 | removal candidate |
| /glast/Data/Flight/Reprocess/P110 | 14541 | 49165 | removal candidate |
| /glast/Data/Flight/Reprocess/P106-LEO | 8690 | 1025 | |
| /glast/Data/Flight/Reprocess/P90 | 3319 | 2004 | removal candidate |
| /glast/Data/Flight/Reprocess/CREDataReprocessing | 631 | 22331 | |
| /glast/Data/Flight/Reprocess/P115-LEO | 533 | 1440 | |
| /glast/Data/Flight/Reprocess/P110-LEO | 524 | 1393 | removal candidate |
| /glast/Data/Flight/Reprocess/P120-LEO | 503 | 627 | removal candidate |
| /glast/Data/Flight/Reprocess/P105 | 264 | 33465 | |
| /glast/Data/Flight/Reprocess/P107 | 212 | 4523 | |
| /glast/Data/Flight/Reprocess/P116 | 109 | 25122 | |
| /glast/Data/Flight/Reprocess/Pass7-repro | 22 | 16 | removal candidate |
| /glast/Data/Flight/Reprocess/P100 | 8 | 20089 | removal candidate |
| path | size [GB] | #files | Notes |
|---|---|---|---|
| /glast/mc/ServiceChallenge | 57458 | 6649775 | |
| /glast/mc/OpsSim | 24691 | 4494890 | |
| /glast/mc/DC2 | 1818 | 136847 | |
| /glast/mc/OctoberTest | 41 | 29895 | |
| /glast/mc/XrootTest | 33 | 18723 | removal candidate |
| /glast/mc/Test | 14 | 13403 | removal candidate |
| Filetype | TotSize [GiB] / #Runs | Avg file size | (%) |
|---|---|---|---|
| RECON | 399257.600 / 13532 | 29.505 GiB | 58.2% |
| CAL | 100147.200 / 13536 | 7.399 GiB | 14.6% |
| SVAC | 71270.400 / 13535 | 5.266 GiB | 10.4% |
| DIGI | 61542.400 / 13537 | 4.546 GiB | 9.0% |
| FASTMONTUPLE | 28262.400 / 13537 | 2.088 GiB | 4.1% |
| MERIT | 22528.000 / 13537 | 1.664 GiB | 3.3% |
| GCR | 676.800 / 13537 | 51.196 MiB | 0.1% |
| FASTMONTREND | 432.800 / 13537 | 32.739 MiB | 0.1% |
| DIGITREND | 337.100 / 13537 | 25.500 MiB | 0.0% |
| FILTEREDMERIT | 324.900 / 7306 | 45.538 MiB | 0.0% |
| MAGIC7HP | 244.700 / 13315 | 18.819 MiB | 0.0% |
| LS1 | 183.700 / 13537 | 13.896 MiB | 0.0% |
| CALHIST | 181.400 / 13536 | 13.723 MiB | 0.0% |
| RECONTREND | 172.700 / 13537 | 13.064 MiB | 0.0% |
| TKRANALYSIS | 172.400 / 13536 | 13.042 MiB | 0.0% |
| LS3 | 156.800 / 13537 | 11.861 MiB | 0.0% |
| RECONHIST | 124.400 / 13536 | 9.411 MiB | 0.0% |
| MAGIC7L1 | 55.000 / 13315 | 4.230 MiB | 0.0% |
| FASTMONHIST | 53.400 / 13535 | 4.040 MiB | 0.0% |
| LS1BADGTI | 48.200 / 7306 | 6.756 MiB | 0.0% |
| CALTREND | 45.500 / 13537 | 3.442 MiB | 0.0% |
| FT1 | 39.100 / 13537 | 2.958 MiB | 0.0% |
| DIGIHIST | 32.600 / 13537 | 2.466 MiB | 0.0% |

etc...
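The per-filetype averages above are simply TotSize divided by #Runs; a minimal sketch reproducing the three largest rows from the table's own numbers:

```python
# (TotSize in GiB, #Runs) for the three largest filetypes, taken from the table
rows = {
    "RECON": (399257.600, 13532),
    "CAL":   (100147.200, 13536),
    "SVAC":  (71270.400, 13535),
}
for name, (tot_gib, runs) in rows.items():
    print(f"{name}: {tot_gib / runs:.3f} GiB per run")
# RECON: 29.505 GiB per run
# CAL: 7.399 GiB per run
# SVAC: 5.266 GiB per run
```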
|
|
|
Two obvious hardware candidates are already in use at the Lab: DDN (used by LCLS) and low-cost Dell storage (used by ATLAS). These run about $160/TB, but at low density.
The idea is to use HPSS as a storage layer from which xrootd files are transparently retrieved on demand. Wilko thinks the system can push 30-50 TB/day from tape; this is comparable to the rate needed for a merit reprocessing, so tape bandwidth should not be a major impediment. In this model, we would make two tape copies of the big files, remove them from disk, and retrieve them back into a disk buffer when needed. Disk usage would then stay relatively fixed while the tape footprint grows.
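As a rough feasibility check on the staging rate (assuming the quoted 30-50 TB/day, and taking the ~400 TB of RECON from the filetype table as the bulk that would need restaging for a reprocessing):

```python
recon_tb = 400  # approximate RECON holdings, from the filetype table above
for rate_tb_per_day in (30, 50):
    days = recon_tb / rate_tb_per_day
    print(f"{rate_tb_per_day} TB/day -> {days:.0f} days to restage")
# 30 TB/day -> 13 days to restage
# 50 TB/day -> 8 days to restage
```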