
Access to data

...

The FFB system is designed to provide dedicated analysis capabilities during the experiment. One week after the end of the experiment, the data files will be deleted from the FFB and will be available only on one of the offline systems (psana, SDF, or NERSC).

...

  • The FFB currently offers the fastest file system (WekaIO on NVME disks via IB HDR100) of all LCLS storage systems

...

  • however, its size is only about 400 TB.
  • Typically, all xtc data are kept on the FFB for a week after an experiment ends; however, for data-intensive experiments, files may be purged before the week ends, possibly even before the experiment ends.
    Files deleted from the FFB will be available only on one of the offline systems (psana, SDF, or NERSC).
  • The raw data are copied to the offline storage system and to tape immediately, i.e. in quasi-real time during the experiment, not after they have been deleted from the FFB.
  • User-generated data created in the scratch/ folder are moved to the offline system when the experiment is deleted from the FFB, as described in Lifetime of data on the FFB.

...

  • For the time being, the new FFB system will be available only for FEH experiments. NEH experiments will still rely on psana resources.

You can access the FFB system from pslogin, psdev or psnx with:

...

Code Block
export SIT_PSDM_DATA=/cds/data/drpsrcf

The experiment folder names are the same as those on the offline systems and are described in the data retention policy.

Besides the xtc/ folder for the raw data, the scratch/ folder allows users to write their processing output. This folder will be moved to the offline filesystem after an experiment is done. The calib/ folder is a link to the offline calib folder.
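As a sketch of the layout above, the relevant paths can be composed as follows; the experiment name mfx/mfx123456 is a hypothetical example, not a real experiment.

```shell
# Sketch of the FFB per-experiment layout described above.
# "mfx/mfx123456" is a hypothetical example experiment.
export SIT_PSDM_DATA=/cds/data/drpsrcf
exp_dir="$SIT_PSDM_DATA/mfx/mfx123456"
echo "$exp_dir/xtc"      # raw data
echo "$exp_dir/scratch"  # user processing output, moved offline after the experiment
echo "$exp_dir/calib"    # link to the offline calib folder
```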

FFB SLURM partitions

You can submit your fast feedback analysis jobs to one of the queues shown in the following table. The goal is to assign dedicated resources to up to three experiments per shift. Please contact your POC to have one of the high-priority queues, 1, 2, or 3, assigned to your experiment.
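As a hedged illustration, a minimal batch script for one of these queues might look like the following. The partition name ffbprioq1 is a placeholder, not a real queue name; substitute the queue your POC assigns.

```shell
# Hypothetical sketch of a minimal FFB batch script.
# "ffbprioq1" is a placeholder partition name; use the queue assigned by your POC.
cat > ffb_job.sh <<'EOF'
#!/bin/bash
#SBATCH --partition=ffbprioq1    # placeholder: high-priority FFB queue 1, 2, or 3
#SBATCH --job-name=ffb-analysis
#SBATCH --ntasks=1
#SBATCH --output=%j.out

export SIT_PSDM_DATA=/cds/data/drpsrcf
# your fast feedback analysis command goes here
EOF
# Submit with: sbatch ffb_job.sh
```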

...

The FFB system uses SLURM for submitting jobs - information on using SLURM can be found on the Submitting SLURM Batch Jobs page.

...


Lifetime of data on the FFB

xtc folder

  • xtc files are immediately copied to the offline filesystem
  • the lifetime on the FFB is dictated by how much data is generated
    • typically, files stay on the FFB during the run-time of an experiment
    • however, if space is needed, files from previous shifts might get purged
    • after an experiment is done, the FFB should not be used anymore unless discussed with the POC

...

  1. The scratch/ folder is made inaccessible to users.
  2. files and directories below the ffb scratch/ are moved to the scratch/ffb/ on the offline filesystem: 
        /cds/data/psdm/<instr>/<expt>/scratch/ffb/ 
    except for hdf5 files in the smalldata folder (see next).
  3. hdf5 files below scratch/hdf5/smalldata/ are moved to the hdf5/smalldata/ folder on the offline filesystem, e.g.
    /cds/data/drpsrcf/mfx/mfx123456/scratch/smalldata/*.h5 -> /cds/data/psdm/mfx/mfx123456/hdf5/smalldata/
    (only .h5 files are moved to hdf5/smalldata/; other files will be moved below scratch/ffb/)
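The move rules above can be sketched as a small helper. This is an illustration only, not an official tool; the experiment and file names are examples.

```shell
# Illustrative sketch of the scratch/ move rules above (not an official tool).
# .h5 files under smalldata/ go to hdf5/smalldata/; everything else goes to scratch/ffb/.
ffb_to_offline() {
  instr=$1; expt=$2; rel=$3   # rel = path relative to the FFB scratch/ folder
  case "$rel" in
    smalldata/*.h5) echo "/cds/data/psdm/$instr/$expt/hdf5/$rel" ;;
    *)              echo "/cds/data/psdm/$instr/$expt/scratch/ffb/$rel" ;;
  esac
}
ffb_to_offline mfx mfx123456 smalldata/run42.h5   # -> .../hdf5/smalldata/run42.h5
ffb_to_offline mfx mfx123456 logs/run42.log       # -> .../scratch/ffb/logs/run42.log
```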

FFB File Permissions

The same permissions, based on ACLs, as used for the Lustre analysis filesystems are used for the FFB. However, there is an issue with the current version of the file system:

  • the umask is applied when creating files and directories, which violates the ACL specs. As the default umask is 022, the group write permission will be removed. We recommend setting your umask to:
Code Block
% umask 0002
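To see the effect, a quick check in a scratch directory can be done as below; the filenames are examples, and this assumes no default ACL on the current directory.

```shell
# Demonstrate the effect of the umask on new files (filenames are examples).
# Assumes no default ACL on the current directory.
umask 022
touch masked_example        # mode 666 & ~022 = 644: group write removed
umask 0002
touch writable_example      # mode 666 & ~002 = 664: group write kept
stat -c '%a' masked_example writable_example
```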

Access to Lustre ana-filesystems

...