To get started you must at least know the name of the Monte Carlo task containing the data you need. You likely learned this name from a meeting or a colleague, or you may browse here for a list of recent Service Challenge Monte Carlo datasets, which also includes some basic configuration information for the listed tasks.

These data have been migrated from /nfs disk to xrootd, a file server system that is both easier to manage and offers tape backup - something GLAST has been planning to do for quite a long time. In addition, routine data concatenations are no longer performed, since concatenation can be done on demand using the skimmer, or by reading the files directly from xrootd (if the job runs at SLAC). The two scenarios below may be of some help.

  1. If you are running ROOT locally at SLAC (and this includes accessing ROOT functions from Python), you may access the files directly and TChain them together in your analysis (see the first sketch following this list). Find detailed instructions in this Data Access FAQ: http://ganglia01.slac.stanford.edu:8080/ganglia/glast/?r=day&c=glastlnx&h=glastlnx22.slac.stanford.edu
  2. If you need to download the actual data to your laptop or home institution, you may either download all (or some) of the individual data files, or first "skim" them to concatenate and/or apply cuts to the data, likely producing a smaller number of files.
    1. To download all files in a task, navigate to the appropriate task and data type in the dataCatalog, http://glast-ground.slac.stanford.edu/DataCatalog (e.g., MC-Tasks/ServiceChallenge/backgnd-GR-v13r9-Day/runs/merit), then click on "Download Files" and follow the prompts.
    2. To first concatenate a typically large number of files into a smaller number prior to downloading:
      1. Find the task and data type of interest in the dataCatalog and click on "Skim Files"
      2. Skim the data with any desired cuts, given as a ROOT "TCut" selection expression (or with no cuts if you simply want a concatenation of the data); see the second sketch following this list for an example of such an expression. Note that you may optionally specify the families of merit ntuple columns to keep in your output - useful if you don't need all the data and wish to minimize the space required to store the result and speed up processing.
      3. The skimmer runs as a parallel pipeline job, and you will receive an email when it completes; you can then ftp the resultant files to your local storage.
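
For the first scenario (reading directly from xrootd at SLAC), the following is a minimal PyROOT sketch of chaining several merit files. The redirector host, file paths, and the "MeritTuple" tree name are illustrative assumptions, not confirmed locations; take the actual paths from the dataCatalog entry for your task.

  import ROOT

  # Minimal sketch: chain merit files read directly from xrootd at SLAC.
  # The redirector host, file paths, and tree name are placeholders; use
  # the real locations listed in the dataCatalog for your task.
  chain = ROOT.TChain("MeritTuple")
  chain.Add("root://glast-rdr.slac.stanford.edu//glast/mc/ServiceChallenge/backgnd-GR-v13r9-Day/merit/merit_000.root")
  chain.Add("root://glast-rdr.slac.stanford.edu//glast/mc/ServiceChallenge/backgnd-GR-v13r9-Day/merit/merit_001.root")

  print("Entries in chain:", chain.GetEntries())

  # From here on the chain behaves like a single TTree.
  chain.Draw("McEnergy")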
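
For the skimming scenario, a TCut is simply a ROOT selection-expression string. The sketch below shows the same kind of selection applied to a single merit file locally with TTree::CopyTree; the file name, tree name, and branch names are illustrative placeholders, and the skimmer applies the equivalent selection for you on the server before you download.

  import ROOT

  # Sketch: apply a TCut-style selection to one merit file locally.
  # File, tree, and branch names here are placeholders for illustration.
  cut = "McEnergy > 1000 && FT1ZenithTheta < 105"

  infile = ROOT.TFile.Open("merit_000.root")
  tree = infile.Get("MeritTuple")

  # Open the output file first so CopyTree writes the skimmed tree into it.
  outfile = ROOT.TFile("merit_skim.root", "RECREATE")
  skimmed = tree.CopyTree(cut)   # keep only events passing the cut
  skimmed.Write()
  outfile.Close()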