
Oracle has pulled a Scrooge on us, voiding our November order for 320 TB of Thors, placed before they were discontinued - we didn't make it in time. We now need to explore options and plans for our disk storage, and to get through this particular crunch: we have some 55 TB free at the time of writing, and owe 128 TB to KIPAC and BABAR!

Possible elements of the plan:

  • clean up existing space
  • explore replacing disks in existing fileservers with 2 TB ones
  • find one or more new vendors
  • change storage model

Clean up

We have 1.05 PB of space in xrootd now. Some 640 TB of that is taken up by L1, and 90% of that space is occupied by recon, svac and cal tuples.


[added by Tom]
Data current as of 12/8/2010.
Current xroot holdings:

  Total Space        1043 TB   (35 servers; wain006 removed)
  xrootd overhead      21 TB   (disk 'buffer' space)
  Available Space    1022 TB

Current usage:

  Space used    966.778 TB   (95%)
  Space free     55.580 TB   ( 5%)

Consumption rate:

  Level 1   783 GB/day   (averaged over the period since 4 Aug 2008)

Therefore, with current holdings and usage, there is sufficient space for 71 days of Level 1 data (running out ~17 Feb 2011)
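The runway figure above can be sanity-checked with a few lines. This is a minimal sketch using the numbers quoted on this page (55.580 TB free, 783 GB/day); the 1 TB = 1000 GB convention is an assumption.

```python
# Sketch: sanity-check the Level 1 runway quoted above.
# Inputs come from this page; 1 TB = 1000 GB is an assumption.
from datetime import date, timedelta

free_tb = 55.580          # current free space
rate_gb_per_day = 783.0   # Level 1 consumption rate

days_left = free_tb * 1000.0 / rate_gb_per_day
run_out = date(2010, 12, 8) + timedelta(days=round(days_left))

print(f"{days_left:.0f} days of Level 1 data left")
print(f"runs out around {run_out}")
```

This reproduces the ~71 days / ~17 Feb 2011 estimate quoted above.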

Usage distribution:

  path                            size [TB]      #files   Notes
  /glast/Data                           785      807914
  /glast/Data/Flight                    767      792024
  /glast/Data/Flight/Level1/            713      535635   670 TB registered in dataCat
  /glast/mc                              82    11343707
  /glast/mc/ServiceChallenge/            56     6649775
  /glast/Scratch                         51      358511   ~50 TB recovery possible
  /glast/Data/Flight/Reprocess/          50      226757   ~20 TB recovery possible
  /glast/level0                          13     2020329
  /glast/bt                               4      760384
  /glast/test                             2        2852
  /glast/scratch                          1       51412
  /glast/mvmd5                            0         738
  /glast/admin                            0         108
  /glast/ASP                              0       19072

Breakdown of space usage by Level 1 data products (from dataCatalog):

  Filetype:         TotSize/#Runs =      Avg file size     (%)
  RECON:           399257.600/13532 =       29.505 GiB   (58.2%)
  CAL:             100147.200/13536 =        7.399 GiB   (14.6%)
  SVAC:             71270.400/13535 =        5.266 GiB   (10.4%)
  DIGI:             61542.400/13537 =        4.546 GiB   ( 9.0%)
  FASTMONTUPLE:     28262.400/13537 =        2.088 GiB   ( 4.1%)
  MERIT:            22528.000/13537 =        1.664 GiB   ( 3.3%)
  GCR:                676.800/13537 =       51.196 MiB   ( 0.1%)
  FASTMONTREND:       432.800/13537 =       32.739 MiB   ( 0.1%)
  DIGITREND:          337.100/13537 =       25.500 MiB   ( 0.0%)
  FILTEREDMERIT:      324.900/ 7306 =       45.538 MiB   ( 0.0%)
  MAGIC7HP:           244.700/13315 =       18.819 MiB   ( 0.0%)
  LS1:                183.700/13537 =       13.896 MiB   ( 0.0%)
  CALHIST:            181.400/13536 =       13.723 MiB   ( 0.0%)
  RECONTREND:         172.700/13537 =       13.064 MiB   ( 0.0%)
  TKRANALYSIS:        172.400/13536 =       13.042 MiB   ( 0.0%)
  LS3:                156.800/13537 =       11.861 MiB   ( 0.0%)
  RECONHIST:          124.400/13536 =        9.411 MiB   ( 0.0%)
  MAGIC7L1:            55.000/13315 =        4.230 MiB   ( 0.0%)
  FASTMONHIST:         53.400/13535 =        4.040 MiB   ( 0.0%)
  LS1BADGTI:           48.200/ 7306 =        6.756 MiB   ( 0.0%)
  CALTREND:            45.500/13537 =        3.442 MiB   ( 0.0%)
  FT1:                 39.100/13537 =        2.958 MiB   ( 0.0%)
  DIGIHIST:            32.600/13537 =        2.466 MiB   ( 0.0%)
  etc...
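The "Avg file size" and "(%)" columns above follow directly from the dataCatalog totals. A minimal sketch, assuming the totals are in GiB and the percentages are relative to the ~670 TB registered in dataCat (taken as 670 TiB here, which reproduces the table to rounding); only the top few filetypes are included for illustration.

```python
# Sketch: reproduce the avg-size and percentage columns from the
# dataCatalog totals above. Assumes totals are in GiB and percentages
# are relative to the ~670 TB (670 TiB) registered in dataCat.
level1_total_gib = 670 * 1024

filetypes = {                    # filetype: (total size [GiB], #runs)
    "RECON": (399257.600, 13532),
    "CAL":   (100147.200, 13536),
    "SVAC":  ( 71270.400, 13535),
    "DIGI":  ( 61542.400, 13537),
}

for name, (total_gib, runs) in filetypes.items():
    avg_gib = total_gib / runs
    pct = 100.0 * total_gib / level1_total_gib
    print(f"{name:6s} {avg_gib:7.3f} GiB/run  ({pct:4.1f}%)")
```

For RECON this gives 29.505 GiB/run and 58.2%, matching the table.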


Replacing Existing Disks

New Vendors

Two obvious candidates are already in use at the Lab: DDN, used by LCLS, and el cheapo Dell storage, used by ATLAS.

  • DDN
  • Dell: $160/TB, but low density

Change Storage Model

The idea is to use HPSS as a storage layer that transparently retrieves xrootd files on demand. Wilko thinks the system can push 30-50 TB/day from tape, which is comparable to the rate needed for a merit reprocessing, so tape bandwidth is not thought to be a major impediment. In this model we would make two tape copies of the big files, then remove them from disk; they would be retrieved back into a disk buffer when needed. We would thus have a relatively fixed disk buffer and a growing tape presence.
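To put the 30-50 TB/day tape rate in perspective, here is a minimal sketch of staging times. The choice of the full RECON holdings (~390 TB, from the dataCatalog table above) as the workload is an illustrative assumption, not a statement of the actual reprocessing plan.

```python
# Sketch: rough staging-time estimate for the tape-backed model.
# 30-50 TB/day is Wilko's estimate from the text; staging all RECON
# (~399258 GiB per the dataCatalog table) is an illustrative assumption.
recon_tb = 399257.600 / 1024   # RECON total, GiB -> TiB (~390 TB)

for rate_tb_per_day in (30, 50):
    days = recon_tb / rate_tb_per_day
    print(f"at {rate_tb_per_day} TB/day: {days:.0f} days to stage all RECON")
```

So a full RECON recall would take on the order of one to two weeks, consistent with the claim that tape rate is not the bottleneck for a reprocessing campaign.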
