Created by Unknown User (wbfocke), last modified on Oct 19, 2007
- xrootd
- requires more bookkeeping because it provides no "ls" (no way to list what's stored)
- I don't fully trust it yet - I know it's not intended as a drop-in replacement for a disk-based filesystem, and I'm still working to understand and internalize how it differs from one.
- Combine steps
- reduces ability to roll back errors
- increases latency
- Varying crumb size
- makes lots of small crumbs, so digi files get read many times
- Varying chunk size
- lots of small chunks mean more jobs are reading in parallel at the start of processing, but it does not increase the amount of data that's read
- Use scratch disks
- need to be able to leave files on scratch for a couple of hours without having a process running
- need to be able to copy files between batch machines with a process only at the receive end of the transfer
- scalable
- But it doesn't have to scale, it just has to work. It's not as if we're going to get mentioned on Slashdot and suddenly have 100x the data flowing in.
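The "process only at the receive end" requirement suggests a pull model: the sending batch machine just leaves the file on its scratch disk, and the receiver initiates the copy. A sketch of what that could look like, assuming the batch machines already run sshd (so the pull is an ordinary scp); the host and path names are made-up examples:

```python
import subprocess


def pull_file(remote_host, remote_path, local_path, dry_run=False):
    """Pull a file from another batch machine's scratch disk.

    Only the receiving machine runs a job-specific process; the sender
    needs nothing beyond its standard sshd.  With dry_run=True the
    command is returned without being executed.
    """
    cmd = ["scp", "%s:%s" % (remote_host, remote_path), local_path]
    if dry_run:
        return cmd
    # Raises CalledProcessError if the transfer fails, so callers can
    # retry or mark the file as still pending on the sender's scratch.
    subprocess.check_call(cmd)
    return cmd
```

This also fits the "leave files on scratch for a couple of hours" point: the sender's job can exit immediately after writing, and the receiver pulls whenever it gets scheduled.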
- Don't stage files stored on AFS to/from scratch
- AFS's internal caching means we would be copying the data twice.
- This may be particularly useful for recon, where crumb jobs don't even use the whole file.