Please be aware that the system administration team reserves the first Wednesday of each month for computer system maintenance. Computers and storage systems may experience short, announced outages on these days.
You can access the LCLS photon computing system in the NEH by ssh'ing to one of these nodes:
psexport.slac.stanford.edu
psimport.slac.stanford.edu
From these nodes you can move data files in and out of the system and you can connect to the bastion hosts:
pslogin
psdev
Note that, from within SLAC, you can connect directly to the bastion hosts without going through psimport/psexport. The SLAC wireless visitor network is not considered part of SLAC, so you will need to go through psexport/psimport when using your laptop on-site.
From the bastion hosts you can then reach the analysis nodes (see below).
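For example, a minimal connection sketch (unix_user is a placeholder for your SLAC UNIX account; the short host names assume SLAC-internal name resolution):

# From off-site: log in to an export/import node first,
ssh unix_user@psexport.slac.stanford.edu
# then hop to a bastion host:
ssh pslogin

# From within SLAC (but not the visitor wireless network), you can
# connect directly to a bastion host:
ssh unix_user@pslogin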
Each control room has a number of nodes for local login. These nodes have access to the Internet and are reachable from pslogin and psdev.
The controls and DAQ nodes used for operating an instrument run in kiosk mode, so you do not need a personal account to run an experiment from the control room. Remote access to these nodes is not allowed for normal users.
You will need a valid SLAC UNIX account to run your analysis in the NEH system. This account must be enabled in the NEH system to grant access to data and the elog. The instructions for getting a SLAC UNIX account are here:
http://www-ssrl.slac.stanford.edu/lcls/users/logistics.html#compaccts
The elog is accessible at the following location:
https://pswww.slac.stanford.edu/apps/logbook
Each user can view and edit only the experiments he or she belongs to. The elog can also be accessed through the experiment's shared account, whose name is the same as the experiment's name. The PI for the experiment is the custodian of the shared account's password and may share it with the members of the experiment group.
These nodes are reserved for users who are currently running an experiment. Each instrument has three dedicated interactive compute systems:
psanaamo01
psanaamo02
psanaamo03
psanasxr01
psanasxr02
psanasxr03
psanaxpp01
psanaxpp02
psanaxpp03
psanaxcs01
psanaxcs02
psanaxcs03
psanacxi01
psanacxi02
psanacxi03
psanamec01
psanamec02
psanamec03
The general specifications for these nodes are:
psana<instr>01: 8 cores, Xeon E5520, 24GB RAM, 500GB disk, 1Gb/s, dedicated Matlab license
psana<instr>02: 8 cores, Xeon E5520, 24GB RAM, 500GB disk, 1Gb/s
psana<instr>03: 8 cores, Opteron 2384, 8GB RAM, diskless, 10Gb/s
In order to get access to the interactive farm, ssh to the address psana. A load-balancing mechanism will connect you to the least loaded node in the farm. This farm currently consists of six 8-core Opteron 2384 nodes with one 10Gb/s connection to the data.
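For example, from a bastion host:

ssh psana

Repeated logins may land on different nodes, since the load balancer picks the least loaded one each time.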
Log in first to psdev or pslogin (from SLAC), or to psimport or psexport (from anywhere). From there you can submit a job with the following command:
bsub -q lclsq -o <output file name> <job_script_command>
For example:
bsub -q lclsq -o ~/output/job.out my_program
This will submit a job (my_program) to the queue lclsq and write its output to a file named ~/output/job.out.
You may check on the status of your jobs using the bjobs command.
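For example (the job ID 123456 is illustrative):

bjobs              # list your pending and running jobs
bjobs -l 123456    # show detailed information about one job
bkill 123456       # kill a job you no longer need

bjobs -l and bkill are standard LSF commands; see the link below for more.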
The batch farm consists of sixty 8-core Xeon E5520 nodes, each with a 1Gb/s connection to the data.
For a more detailed description and more LSF commands, please see
http://www.slac.stanford.edu/comp/unix/unix-hpc.html
LCLS provides space for all your experiment's data at no cost to you. This includes the measurements as well as derived data from your analysis software.
Your data are available as XTC files or, on demand, as HDF5 files.
All your data is available on disk for one year after data taking, under the path /reg/d/psdm. The data files are currently stored in a Lustre file system. Each experiment is allocated three directories: xtc, scratch, and hdf5. The xtc directory contains the raw data from the DAQ system; its contents are archived to tape. The scratch and hdf5 directories are not backed up. Please write the output of your analysis to the scratch area, not to your NFS space. Keep your analysis code under your NFS home or under your NFS group space (if you have one). Your NFS space is backed up.
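As an illustration, for a hypothetical experiment amo12345 the layout might look like this (the <instrument>/<experiment> hierarchy under /reg/d/psdm is an assumption; check the actual path for your experiment):

/reg/d/psdm/amo/amo12345/xtc       # raw DAQ data, archived to tape
/reg/d/psdm/amo/amo12345/hdf5      # HDF5 files (produced on demand), not backed up
/reg/d/psdm/amo/amo12345/scratch   # your analysis output, not backed up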
After one year, your data files are removed from disk. The XTC files remain stored on tape for up to 10 years. LCLS may restore your data from tape back to disk for you to access. Restoring the data to disk more than once will require the approval of the LCLS management.
There is a web interface to the experimental data accessible via https://pswww.slac.stanford.edu/apps/explorer/
The web interface also allows you to generate file lists that can be fed into bbcp to export your data from SLAC to your home institution. You can use psexport or psimport for copying your data.
See the DataExportation page for more information.
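As a sketch, a bbcp transfer run on the receiving machine at your home institution might look like this (username, experiment path, file name, and tuning options are illustrative; see the DataExportation page for recommended settings):

bbcp -P 2 -s 8 unix_user@psexport.slac.stanford.edu:/reg/d/psdm/amo/amo12345/xtc/some_run.xtc /data/local/

Here -P 2 prints a progress report every two seconds and -s 8 opens eight parallel streams.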
The following printers are available in the NEH building from all the UNIX nodes:
| Info | Location | Device URI |
|---|---|---|
| Dell 3130 | AMO Control Room | lpd://dellcolor-neh-amo1/lp |
| Dell 3130 | AMO Control Room | lpd://dellcolor-neh-amo2/lp |
| Dell 3130 | SXR Control Room | lpd://dellcolor-neh-sxr1/lp |
| Dell 3130 | SXR Control Room | lpd://dellcolor-neh-sxr2/lp |
| Dell 3130 | XPP Control Room | lpd://dellcolor-neh-xpp1/lp |
| Dell 3130 | XPP Control Room | lpd://dellcolor-neh-xpp2/lp |
| HP Color LaserJet CP3525 | Bldg 950 corridor, ground floor | ipp://hpcolor-neh-corridor/ipp/ |
| Xerox WorkCentre 5675 | Bldg 950 Rm 218, Jason Alpers | ipp://hpcolor-neh-laser/ipp/ |
| HP Color LaserJet 4700 | Bldg 950 Rm 204, Ray Rodriguez | ipp://hpcolor-neh-ray/ipp/ |
| HP LaserJet 4350 | Bldg 950 Rm 203 | ipp://hpcolor-neh-srvroom/ipp/ |
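On the UNIX nodes these printers are normally reachable through the standard CUPS commands; a minimal sketch (the queue name is an assumption, so use a name reported by lpstat):

lpstat -p                          # list the print queues configured on this node
lp -d dellcolor-neh-amo1 plot.ps   # send a file to a queue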
For first-line help, send email to pcdshelp@slac.stanford.edu.
If you have problems with a specific piece of the analysis software, please have a look at our bug-tracking system https://pswww.slac.stanford.edu/trac/psdm.
In case of an emergency that affects your data-taking ability, please contact the instrument scientist or the floor coordinator. They have a contact list with all the people in the PCDS group.