- xrootd
  - Requires more bookkeeping because it has no "ls".
  - I don't trust it yet: I'm aware it's not intended to be a drop-in replacement for a disk-based filesystem, and I'm still trying to understand and internalize how it differs from one.
  - Could stage input or output files in parallel.
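Since there is no "ls" to fall back on, the extra bookkeeping presumably means we keep our own listing of what we wrote. A minimal sketch of that idea, assuming a simple JSON manifest updated alongside each upload (the `Manifest` class and its file format are made up here, not part of any xrootd tooling):

```python
import json
from pathlib import Path


class Manifest:
    """Record every file written to the store; with no "ls" on the
    server, this manifest is our only complete listing.
    Illustrative sketch only -- names and format are assumptions."""

    def __init__(self, path):
        self.path = Path(path)
        self.entries = []
        if self.path.exists():
            self.entries = json.loads(self.path.read_text())

    def record(self, remote_name, size_bytes):
        """Call this right after each successful upload."""
        self.entries.append({"name": remote_name, "bytes": size_bytes})
        self.path.write_text(json.dumps(self.entries))

    def listing(self):
        """Stand-in for the missing "ls": names in upload order."""
        return [e["name"] for e in self.entries]
```

Any real version would also have to handle partial uploads and concurrent writers; this only shows the basic record-as-you-write discipline.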
- Combine steps
  - Reduces the ability to roll back after errors.
  - Increases latency.
- Varying crumb size
  - Makes lots of small crumbs, so digi files get read many times.
- Varying chunk size
  - Lots of small chunks mean more jobs reading in parallel at the start of processing, but they don't increase the total amount of data read.
  - If there are more chunks than available cores, that automatically throttles I/O somewhat.
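The throttling point above can be demonstrated with a small simulation: run the chunk "jobs" on a fixed-size pool and measure peak concurrency. However many chunks exist, at most one reader per core is active at once (this is a toy model with threads standing in for batch jobs, not our actual batch setup):

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor


def run_chunks(n_chunks, n_cores):
    """Simulate chunk jobs on n_cores workers; return the peak number
    of chunks that were 'reading' their input simultaneously."""
    lock = threading.Lock()
    active = 0
    peak = 0

    def read_chunk(_):
        nonlocal active, peak
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)  # stand-in for reading the chunk's input
        with lock:
            active -= 1

    with ThreadPoolExecutor(max_workers=n_cores) as pool:
        list(pool.map(read_chunk, range(n_chunks)))
    return peak
```

With 32 chunks on 4 cores, peak concurrent readers never exceeds 4: the surplus chunks queue, which is exactly the automatic I/O throttle described above.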
- Use scratch disks
  - Need to be able to leave files on scratch for a couple of hours without having a process running.
  - Need to be able to copy files between batch machines with a process only at the receive end of the transfer.
  - Scalable?
    - But maybe it doesn't have to scale; it just has to work. It's not like we're going to get mentioned on Slashdot and suddenly have 100x the data flowing in.
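A "process only at the receive end" transfer is a pull model: the sending machine just leaves the file on scratch behind some already-running export, and the receiver fetches it. A sketch of the receiving side, assuming for illustration that scratch is exported over plain HTTP (that export, the URL, and the function name are assumptions, not our actual mechanism):

```python
import urllib.request
from pathlib import Path


def pull_from_scratch(url, dest):
    """Pull a file left on a remote scratch disk. Only this receiving
    process needs to run; nothing is required on the sending host
    beyond whatever long-lived export already serves scratch."""
    dest = Path(dest)
    with urllib.request.urlopen(url) as resp:
        dest.write_bytes(resp.read())
    return dest
```

This also fits the "leave files on scratch for a couple of hours" requirement: the file just sits there until some receiver decides to pull it.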
- Don't stage files stored on AFS to/from scratch
  - AFS' internal caching means that we are copying the data twice.
  - This may be particularly useful for recon, where crumb jobs don't use the whole input file.
- PROOF
  - Seems deeply tied to xroot; does it need xrootd running on the batch host?
...