Many thanks to Wei Yang for creating and documenting the container. Note that Singularity has rebranded itself as Apptainer, so at present you can use those names interchangeably.

Singularity image files: 

  • /gpfs/slac/fermi/fs2/software/containers/slac-fermi.img.ext3
    • This is an old, tested image
  • /gpfs/slac/fermi/fs2/software/containers/fermi-centos6.20230314.sif
  • On S3DF, these containers can be found in: /sdf/group/fermi/sw/containers/
  • They are available on both the AFS side and the SDF side. You can copy them to other locations.
  • Instructions for building the container are in: /gpfs/slac/fermi/fs2/software/containers/fermi-rhel6.build.txt
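On the S3DF side, fermi-rhel6.sif is maintained as a symlink to the current production image, so readlink -f shows which dated file you will actually run. A small sketch of that symlink layout, using a throwaway temporary directory (the dated filename below is a stand-in, not a real image):

```shell
# Sketch: resolve a "prod" symlink to its dated target with readlink -f.
# A temporary directory stands in for the real containers directory here.
demo=$(mktemp -d)
touch "$demo/fermi-rhel6.20230314.sif"              # stand-in dated image
ln -s "$demo/fermi-rhel6.20230314.sif" "$demo/fermi-rhel6.sif"
readlink -f "$demo/fermi-rhel6.sif"                 # prints the dated path
rm -rf "$demo"
```

On S3DF itself, `readlink -f /sdf/group/fermi/sw/containers/fermi-rhel6.sif` reports the actual production image file.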

For S3DF:

  1. export myimage=/sdf/group/fermi/sw/containers/fermi-rhel6.sif. This is a symlink to the "prod" container in that directory.
  2. Go inside the container:
    1. singularity shell -B /sdf $myimage
    2. By default, the prompt is changed to “Apptainer> ”. You are now in a shell and can do “ls”, “cd”, etc.
    3. If you run command “id”, you will see you are running as yourself.
    4. If you need to see SDF Lustre filesystems, add -B /fs/ddn/sdf
  3. Run a command in the container:
    1. singularity exec -B /sdf $myimage ls -l 
    2. singularity exec -B /sdf $myimage sh myscript.sh
    3. The above is usually used in batch jobs.
      1. To work on the scratch space in batch, add -B /lscratch
    4. If you need to see SDF Lustre filesystems, add -B /fs/ddn/sdf
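For batch use, it can help to assemble the full singularity exec command line once so every job uses the same bind mounts. A minimal sketch, assuming the S3DF paths above; "run_in_container" and "myscript.sh" are made-up names, and the echo makes this a dry run that only prints the command:

```shell
#!/bin/sh
# Sketch: build the S3DF "singularity exec" command line in one place.
myimage=/sdf/group/fermi/sw/containers/fermi-rhel6.sif

run_in_container() {
    # -B /sdf makes the S3DF tree visible; -B /fs/ddn/sdf adds Lustre and
    # -B /lscratch adds batch scratch space, as noted in the steps above.
    echo singularity exec -B /sdf -B /fs/ddn/sdf -B /lscratch "$myimage" "$@"
}

# Dry run: prints the command instead of executing it (drop "echo" above
# for real use inside a batch job).
run_in_container sh myscript.sh
```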



How to run the Singularity container from rhel6-64, centos7, and sdf-login:


  1. export myimage=/gpfs/slac/fermi/fs2/software/containers/slac-fermi.img.ext3
    1. or use the new image file:
    2. export myimage=/gpfs/slac/fermi/fs2/software/containers/fermi-centos6.sif
  2. Go inside the container:
    1. singularity shell -B /afs:/afs -B /gpfs:/gpfs $myimage
    2. By default, the prompt is changed to “Apptainer> ”. You are now in a shell and can do “ls”, “cd”, etc.
    3. If you run command “id”, you will see you are running as yourself.
  3. Run a command in the container:
    1. singularity exec -B /afs:/afs -B /gpfs:/gpfs $myimage ls -l /afs
    2. singularity exec -B /afs:/afs -B /gpfs:/gpfs $myimage sh /afs/slac/myscript.sh
    3. The above is usually used in batch jobs.
  4. “-B /afs:/afs” means bind mount (make /afs available inside the container at the path /afs)
    1. Bind mounts may not work well when running Singularity on rhel6-64 machines, especially when an autofs path (e.g. /nfs) is involved.
    2. By default, /tmp and your working directory are bind mounted.
    3. On centos7, you can also bind mount /nfs similarly.
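If you want a script that works with either image on the AFS/GPFS side, one option is to prefer the newer .sif and fall back to the old ext3 image only when the .sif is missing. This is an illustration, not an official recommendation (the paths are from the list above, and note that a comment below reports the ext3 image failing with a setuid/extfs error on newer Apptainer installs):

```shell
#!/bin/sh
# Sketch: pick the newer .sif image when present, else the old ext3 image.
dir=/gpfs/slac/fermi/fs2/software/containers
if [ -r "$dir/fermi-centos6.sif" ]; then
    myimage=$dir/fermi-centos6.sif
else
    myimage=$dir/slac-fermi.img.ext3
fi
export myimage
echo "using image: $myimage"
```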


How to build a RHEL6 Singularity container

Directory /sdf/group/fermi/sw/containers/ hosts most of the files needed to build the RHEL6 Singularity container used by Fermi. The command to build a container is:

  1. sudo singularity build new.image.sif fermi-rhel6.singularity.def, or
  2. sudo singularity build --sandbox new.image.dir fermi-rhel6.singularity.def  # this builds an "image" as a directory that you can "cd" into and manually modify.

The Singularity definition file fermi-rhel6.singularity.def depends on the Docker RHEL6 image and SLAC's RHEL6 yum repo. Both could go away in the not-too-distant future. When that happens, we will only be able to build new images from existing images. A definition file to do that looks like this:

Bootstrap: localimage
From: /sdf/group/fermi/sw/containers/fermi-rhel6.sif
%post
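When rebuilding from an existing image like this, the %post section is where the changes go; its commands run inside the image at build time. A hypothetical sketch (the package name and marker file are illustrative only):

```
Bootstrap: localimage
From: /sdf/group/fermi/sw/containers/fermi-rhel6.sif

%post
    # commands here run inside the image at build time, e.g. (hypothetical):
    yum -y install some-extra-package
    echo "rebuilt from fermi-rhel6.sif" >> /etc/fermi-container-info
```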


Page for collecting questions/issues and answers.


5 Comments

  1. Many things I have are set up for csh, but when I try to change shells within the container it asks for my password and then gives me an authentication error. I have tried multiple times, making absolutely certain I'm giving it the correct password. Any ideas?

  2. Following the example at: https://sdf.slac.stanford.edu/public/doc/#/interactive-compute

    it seems the command syntax in step 2 above can be simplified to:

    • singularity shell -B /afs,/gpfs,/nfs $myimage
  3. Recently (as of yesterday) when using the container, I get a warning (see below) about the Apptainer migration not being complete. Everything seems to run, but I wonder if I need to update some command or point to a different file?


    [tyrelj@cent7a ~]$ setenv myimage /gpfs/slac/fermi/fs2/software/containers/slac-fermi.img.ext3
    [tyrelj@cent7a ~]$ singularity shell -B /nfs:/nfs -B /afs:/afs -B /gpfs:/gpfs $myimage
    WARNING: /etc/singularity/ exists, migration to apptainer by system administrator is not complete

  4. Using the old image I now get a FATAL error:

    [tyrelj@cent7d ~]$ setenv myimage /gpfs/slac/fermi/fs2/software/containers/slac-fermi.img.ext3
    [tyrelj@cent7d ~]$ singularity shell -B /nfs:/nfs -B /afs:/afs -B /gpfs:/gpfs $myimage
    INFO:    /etc/singularity/ exists; cleanup by system administrator is not complete (see https://apptainer.org/docs/admin/latest/singularity_migration.html)
    FATAL:   configuration disallows users from mounting extfs in setuid mode, try --userns


    (note, this is on centos7 machines as I haven't set everything up on the new cluster yet)

    I don't get the same message when using "export myimage=/gpfs/slac/fermi/fs2/software/containers/fermi-centos6.sif", so maybe the old image is no longer valid from the command line (bash scripts using it seem to have worked).

    1. Using --userns doesn't work; apparently things aren't set up to allow that. I can't even remove all the -B statements (I did, as a test) without getting the exact same error. Using 'exec' instead of 'shell' still seems to work, so it seems that I'm going to have to change some interactive steps to scripts, because the new image doesn't have everything I need, somehow.