...

IBM General Parallel File System (GPFS) is a high-performance parallel filesystem featuring storage virtualization and high availability, designed to manage large amounts of file data. You can find out more about GPFS in this introduction.

 

  • Checking your GPFS quota with /usr/local/bin/myquota

    In this example, user gtsai checks her GPFS quota from a Linux command line.

    # /usr/local/bin/myquota

     

    Displaying quota usage for user gtsai

                                       ------ Space (KB) ------
      FileSystem          FileSet          Usage        Quota  Path
    ------------- ---------------- ---------- ------------  ----------------------------------
        fermi-fs2            gtsai     39,328    1,048,576  /gpfs/slac/fermi/fs2/u/gtsai
          des-fs1            gtsai     39,328    3,145,728  /gpfs/slac/des/fs1/u/gtsai
        kipac-fs1            gtsai     39,328    2,097,152  /gpfs/slac/kipac/fs1/u/gtsai
          scs-fs1            gtsai        480   10,485,760  /gpfs/slac/scs/fs1/u/gtsai
        staas-fs1            gtsai          0   20,971,520  /gpfs/slac/staas/fs1/u/gtsai
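    Since the Usage and Quota columns are both reported in KB, percent usage can be computed directly from them. A minimal shell sketch, using the fermi-fs2 numbers copied from the listing above:

    ```shell
    # Values copied from the myquota listing above (KB, commas removed)
    usage=39328     # fermi-fs2 Usage
    quota=1048576   # fermi-fs2 Quota (1 GiB expressed in KB)
    pct=$(( usage * 100 / quota ))   # integer percent of quota used
    echo "fermi-fs2: ${pct}% of quota used"   # → fermi-fs2: 3% of quota used
    ```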

    GPFS quotas on the Atlas cluster

    By default, all users of the Atlas GPFS space get 100 GB in their u directory and 2 TB in the g directory.

    On hosts running native GPFS, such as the rhel6-64 cluster, you can issue the following two commands to see your quota and space used:

    df -h /gpfs/slac/atlas/fs1/d/$USER
    df -h /gpfs/slac/atlas/fs1/u/$USER

    On other hosts that don't run the GPFS client but do have NFS access, you can issue:

    df -h /nfs/slac/atlas/fs1/d/$USER
    df -h /nfs/slac/atlas/fs1/u/$USER
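    If a script needs to run on both kinds of hosts, one approach (a sketch, assuming the mount paths shown above) is to probe for whichever prefix is present:

    ```shell
    # Sketch: pick whichever mount prefix exists on this host
    # (/gpfs on native-GPFS clients, /nfs on NFS-only hosts),
    # then report quota and space used for both directories.
    for prefix in /gpfs /nfs; do
        if [ -d "$prefix/slac/atlas/fs1/u/$USER" ]; then
            df -h "$prefix/slac/atlas/fs1/u/$USER" "$prefix/slac/atlas/fs1/d/$USER"
            break
        fi
    done
    ```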
     

 

  • GPFS building block

          Below is a schematic of a typical SCS GPFS storage building block. It includes two redundant file servers, two redundant storage servers, and two storage arrays. The two sets of servers operate as active/active, but also provide failover capability if needed. This example would provide 320 TB of space. Local iozone tests show a maximum write rate of ~4 GB/sec and a maximum read rate of ~6 GB/sec using large-block sequential I/O.
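          As a rough illustration, a large-block sequential iozone run along these lines might look like the following; the test file path, file size, and record size are assumptions, not the exact parameters used for the measurements above:

          ```shell
          # Sequential write (-i 0) and read (-i 1) with a 1 MB record size
          # and a 16 GB test file; the file path is a placeholder.
          iozone -i 0 -i 1 -r 1m -s 16g -f /gpfs/slac/scs/fs1/u/$USER/iozone.tmp
          ```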

...