...
| path | size [TB] | #files | Notes |
|---|---|---|---|
| /glast/Data | 785 | 807914 | |
| /glast/Data/Flight | 767 | 792024 | |
| /glast/Data/Flight/Level1/ | 713 | 535635 | 670 TB registered in dataCat |
| /glast/mc | 82 | 11343707 | |
| /glast/mc/ServiceChallenge/ | 56 | 6649775 | |
| /glast/Scratch | 51 | 358511 | ~50 TB recovery possible |
| /glast/Data/Flight/Reprocess/ | 50 | 226757 | ~20 TB recovery possible |
| /glast/level0 | 13 | 2020329 | |
| /glast/bt | 4 | 760384 | |
| /glast/test | 2 | 2852 | |
| /glast/scratch | 1 | 51412 | |
| /glast/mvmd5 | 0 | 738 | |
| /glast/admin | 0 | 108 | |
| /glast/ASP | 0 | 19072 | |
...
| path | size [GB] | #files | Notes |
|---|---|---|---|
| /glast/Data/Flight/Reprocess/P120 | 22734 | 65557 | removal candidate |
| /glast/Data/Flight/Reprocess/P110 | 14541 | 49165 | removal candidate |
| /glast/Data/Flight/Reprocess/P106-LEO | 8690 | 1025 | |
| /glast/Data/Flight/Reprocess/P90 | 3319 | 2004 | removal candidate |
| /glast/Data/Flight/Reprocess/CREDataReprocessing | 631 | 22331 | |
| /glast/Data/Flight/Reprocess/P115-LEO | 533 | 1440 | |
| /glast/Data/Flight/Reprocess/P110-LEO | 524 | 1393 | removal candidate |
| /glast/Data/Flight/Reprocess/P120-LEO | 503 | 627 | removal candidate |
| /glast/Data/Flight/Reprocess/P105 | 264 | 33465 | |
| /glast/Data/Flight/Reprocess/P107 | 212 | 4523 | |
| /glast/Data/Flight/Reprocess/P116 | 109 | 25122 | |
| /glast/Data/Flight/Reprocess/Pass7-repro | 22 | 16 | removal candidate |
| /glast/Data/Flight/Reprocess/P100 | 8 | 20089 | removal candidate |
...
| path | size [GB] | #files | Notes |
|---|---|---|---|
| /glast/mc/ServiceChallenge | 57458 | 6649775 | |
| /glast/mc/OpsSim | 24691 | 4494890 | |
| /glast/mc/DC2 | 1818 | 136847 | |
| /glast/mc/OctoberTest | 41 | 29895 | |
| /glast/mc/XrootTest | 33 | 18723 | removal candidate |
| /glast/mc/Test | 14 | 13403 | removal candidate |
...
| Filetype | TotSize [GiB] / #Runs | Avg file size | (%) |
|---|---|---|---|
| RECON | 399257.600 / 13532 | 29.505 GiB | 58.2% |
| CAL | 100147.200 / 13536 | 7.399 GiB | 14.6% |
| SVAC | 71270.400 / 13535 | 5.266 GiB | 10.4% |
| DIGI | 61542.400 / 13537 | 4.546 GiB | 9.0% |
| FASTMONTUPLE | 28262.400 / 13537 | 2.088 GiB | 4.1% |
| MERIT | 22528.000 / 13537 | 1.664 GiB | 3.3% |
| GCR | 676.800 / 13537 | 51.196 MiB | 0.1% |
| FASTMONTREND | 432.800 / 13537 | 32.739 MiB | 0.1% |
| DIGITREND | 337.100 / 13537 | 25.500 MiB | 0.0% |
| FILTEREDMERIT | 324.900 / 7306 | 45.538 MiB | 0.0% |
| MAGIC7HP | 244.700 / 13315 | 18.819 MiB | 0.0% |
| LS1 | 183.700 / 13537 | 13.896 MiB | 0.0% |
| CALHIST | 181.400 / 13536 | 13.723 MiB | 0.0% |
| RECONTREND | 172.700 / 13537 | 13.064 MiB | 0.0% |
| TKRANALYSIS | 172.400 / 13536 | 13.042 MiB | 0.0% |
| LS3 | 156.800 / 13537 | 11.861 MiB | 0.0% |
| RECONHIST | 124.400 / 13536 | 9.411 MiB | 0.0% |
| MAGIC7L1 | 55.000 / 13315 | 4.230 MiB | 0.0% |
| FASTMONHIST | 53.400 / 13535 | 4.040 MiB | 0.0% |
| LS1BADGTI | 48.200 / 7306 | 6.756 MiB | 0.0% |
| CALTREND | 45.500 / 13537 | 3.442 MiB | 0.0% |
| FT1 | 39.100 / 13537 | 2.958 MiB | 0.0% |
| DIGIHIST | 32.600 / 13537 | 2.466 MiB | 0.0% |
| etc. | | | |
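As a cross-check of the per-type figures above, the average file size is the total size divided by the number of runs, and the percentage is each type's share of the grand total. This is a minimal sketch using numbers from the table; the grand total of roughly 686000 GiB is an assumption inferred from the listed percentages, not stated in the source.

```python
# Recompute avg file size per run and share of total volume for a few
# file types from the table above (total sizes in GiB).
ftypes = {
    # type: (total size [GiB], number of runs)
    "RECON": (399257.600, 13532),
    "CAL": (100147.200, 13536),
    "SVAC": (71270.400, 13535),
    "DIGI": (61542.400, 13537),
}

GRAND_TOTAL_GIB = 686000.0  # assumed, back-derived from RECON's 58.2% share

for name, (tot, runs) in ftypes.items():
    avg = tot / runs                      # GiB per run
    pct = 100.0 * tot / GRAND_TOTAL_GIB   # share of total volume
    print(f"{name:6s} avg {avg:8.3f} GiB/run  ({pct:4.1f}%)")
```

Run counts differ slightly per type because not every run produced every file type.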
This option was received with a frown from CD. Of course, Oracle has thought of this and charges a lot for the replacement drives, which have special brackets. And then there is the manpower to replace and the shell game of moving the data around. We'd have to press a lot harder to get traction on this option.
Two obvious candidates are in use at the Lab now: DDN, used by LCLS, and el-cheapo Dell hardware, used by ATLAS.
LCLS is not super thrilled by what they got, though we are told that other labs have them and are happy. CD is getting price quotes for thor-sized systems.
The quoted drives are 5200 rpm at $160/TB, but low density. At the same density, 7200 rpm disks (and possibly better connectivity) come to $275/TB, which is still less than half the thor cost.
The idea is to use HPSS as a back-end storage layer that transparently retrieves xrootd files on demand. Wilko thinks the system can push 30-50 TB/day from tape. This is comparable to the rate needed for a merit reprocessing, so it is not thought to be a major impediment. In this model we would remove the big files from disk after making two tape copies; they would be staged back into a disk buffer when needed. We would thus have a relatively fixed disk footprint and a growing tape presence.
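A quick back-of-the-envelope check of that staging rate: at 30-50 TB/day, how long would HPSS need to restore a full reprocessing input set to the disk buffer? The 400 TB dataset size below is an assumed example (roughly the RECON volume from the table above), not a figure from the source.

```python
# Sketch: staging time for a reprocessing input set at the quoted tape rates.
dataset_tb = 400              # assumed example dataset size, TB
rate_low, rate_high = 30, 50  # TB/day from tape, per the estimate above

days_worst = dataset_tb / rate_low    # slowest case
days_best = dataset_tb / rate_high    # fastest case
print(f"staging time: {days_best:.1f}-{days_worst:.1f} days")
```

So a full restore would take on the order of one to two weeks, which is consistent with calling the rate "comparable" to a merit reprocessing rather than a bottleneck.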
...