...

Transferring large datasets

This is the old way:

To request an import of a large dataset to SLAC (it must first be available at BNL!):

...

(the --archive flag makes sure it doesn't automatically get deleted after a week)
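The request itself is elided above, but it can be sketched as follows. This is a minimal sketch only: the command name `dq2-register-subscription` and the `SLACXRD_USERDISK` destination are assumptions modeled on the container variant shown later on this page, not confirmed here; check them against your DQ2 client before running anything.

```shell
# Hedged sketch of the import request. dq2-register-subscription and
# SLACXRD_USERDISK are assumptions (modeled on the container command shown
# later on this page); verify both before use.
build_request() {
    # $1: dataset name. Prints the command line instead of executing it,
    # so nothing is submitted by accident.
    echo "dq2-register-subscription --archive $1 SLACXRD_USERDISK"
}

build_request data09_cos.00121416.physics_L1Calo.merge.DPD_CALOCOMM.r733_p37/
```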

It will take some time for the data to appear. You can check with:

Code Block

dq2-ls -f <dataSet>

to see how many files are available locally.
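To get the count at a glance, you can pipe the listing through a small filter. A minimal sketch, assuming `dq2-ls -f` prints one line per file; the real output format may differ, so adjust the filter accordingly.

```shell
# Count entries in a file listing read from stdin. Assumes one file per
# non-empty line; pipe the real `dq2-ls -f <dataSet>` output through it.
count_files() {
    grep -c .
}

# Stand-in listing with three hypothetical file names:
printf 'file1.root\nfile2.root\nfile3.root\n' | count_files
```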

There's similar code that works with DQ2 containers:

Code Block
dq2-register-subscription-container --archive  data09_cos.00121416.physics_L1Calo.merge.DPD_CALOCOMM.r733_p37/ SLACXRD_USERDISK
dq2-list-dataset-replicas-container data09_cos.00121416.physics_L1Calo.merge.DPD_CALOCOMM.r733_p37/

The new way:

Go to:

Panel

http://panda.cern.ch:25980/server/pandamon/query?mode=ddm_req

You'll first have to register once with Panda and have your GRID certificate approved by the system (make sure the certificate is imported into your Firefox browser).

...

It will take some time for the data to appear. You can check with:

Code Block
dq2-ls -f -H data09_cos.00121416.physics_L1Calo.merge.DPD_CALOCOMM.r733_p37/

to see how many files are available locally.

And you can make a PoolFileCatalog.xml file directly:

...