...
Date | Kernel version (uname -r) | Notes |
---|---|---|
7/13/2018 | 3.10.0-862.6.3.el7.x86_64 | |
8/8/2018 | 3.10.0-862.9.1.el7.x86_64 | |
8/28/2018 | 3.10.0-327.el7.x86_64 | Fresh install by ksa |
8/29/2018 | 3.10.0-862.11.6.el7.x86_64 | |
10/4/2018 | 3.10.0-862.14.4.el7.x86_64 | |
12/7/2018 | 3.10.0-957.1.3.el7.x86_64 | |
2/19/2019 | 3.10.0-957.5.1.el7.x86_64 | Fresh net install on new SSD |
4/1/2019 | 3.10.0-957.10.1.el7.x86_64 | |
5/15/2019 | 3.10.0-957.12.2.el7.x86_64 | $ sudo yum upgrade ; failure of yfs, so (via ksa)... $ sudo yum clean all;sudo yum erase kmod-yfs;sudo yum install kmod-yfs;sudo yum upgrade |
6/14/2019 | 3.10.0-957.21.2.el7.x86_64 | Automatic upon reboot (after notifications) |
9/24/2019 | 3.10.0-1062.1.1.el7.x86_64 | |
12/2/2019 | 3.10.0-1062.4.3 | |
12/4/2019 | 3.10.0-1062.7.1 | |
1/6/2020 | 3.10.0-1062.9.1 | |
2/10/2020 | 3.10.0-1062.12.1 | |
Disk Partitioning
The following table indicates a "standard" suggested disk partitioning for centos7 with a 1 TB SSD. (Note: the machine, comet2, has 16 GB of RAM.)
Currently recommended partition sizes are shown in blue; a sketch of how this layout could be scripted follows the table.
Partition | Type | Size (GB) | Usage as of 3/12/2020 | Red Hat guideline | encrypt? | Notes |
---|---|---|---|---|---|---|
/boot | ext4 | 2 | .33G (19%) | >1 GB | | |
/ | ext4 | 30 | 11G (36%) | >10 GB | | root |
/home | ext4 | 30 | 23G (80%) | >1 GB | | local user $HOMEs |
swap | | 8 | | >1 GB | | calculation based on amount of RAM |
/opt | ext4 | 40 | .75G (2%) | | | 3rd party software |
/tmp | ext4 | 10 | 0.04G (1%) | | | don't let this fill up! |
/var | ext4 | 10 | 2.1G (23%) | | | logs |
/scratch | ext4 | 300 | 38G (14%) | | | yum! |
/scswork | ext4 | 10 | 0.04G (1%) | | | maybe combine with / ? |
/usr/vice/cache | ext4 | 5 | 0.1G (3%) | | | AFS/YFS only |
/afs | auristorfs | --- | N/A | | | empty mount point (AFS/YFS only) |
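For concreteness, the sizes above could be laid out with parted roughly as follows. This is only a sketch: it assumes the 1 TB SSD appears as /dev/sda, it omits the mkfs and fstab steps, and in practice the partitioning is normally entered in the Anaconda installer rather than scripted.

```
# Hypothetical partition layout matching the table (drive name /dev/sda is an assumption)
parted -s /dev/sda mklabel gpt
parted -s /dev/sda mkpart boot    ext4       1MiB   2GiB      # /boot
parted -s /dev/sda mkpart root    ext4       2GiB   32GiB     # /
parted -s /dev/sda mkpart home    ext4       32GiB  62GiB     # /home
parted -s /dev/sda mkpart swap    linux-swap 62GiB  70GiB     # swap
parted -s /dev/sda mkpart opt     ext4       70GiB  110GiB    # /opt
parted -s /dev/sda mkpart tmp     ext4       110GiB 120GiB    # /tmp
parted -s /dev/sda mkpart var     ext4       120GiB 130GiB    # /var
parted -s /dev/sda mkpart scratch ext4       130GiB 430GiB    # /scratch
parted -s /dev/sda mkpart scswork ext4       430GiB 440GiB    # /scswork
parted -s /dev/sda mkpart vcache  ext4       440GiB 445GiB    # /usr/vice/cache  (AFS/YFS only)
```
No partition is made for /afs; it is only an empty mount point used by the AFS/YFS client.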
Here is comet2's current disk config (on a 160 GB HDD):
...
Here is a list of gotchas or concerns that I stumbled into during these project investigations.
...
- At this time (1/7/2020), updating YFS without a concurrent OS kernel update may fail due to an issue with the kmod-yfs library. The workaround is:

```
sudo yum erase kmod-yfs-0.190-1.3.10.0_1062.9.1.el7.x86_64   # (substitute your current version)
sudo yum update    # or "yum upgrade"
```
- Tilde (~) expansion of other users' home directories does not work. Remember that LD2.0 machines have their own user databases, which are not the same as the SLAC site unix user database. If you are accustomed to typing "$ ls ~lsstprod/workflows", that will no longer function. It is not clear how to implement a good, reliable work-around; one partial approach is sketched below.
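One possible partial workaround is to let a central SLAC machine do the tilde expansion. This is only a sketch: the function name "lsu" is made up, and rhel6-64 is the login host used elsewhere on this page.

```
# Hypothetical helper for ~/.bashrc: list a SLAC user's directory by letting a
# central machine expand the tilde (function name "lsu" is invented).
lsu () {
    ssh rhel6-64 "ls ~$1/$2"
}
# usage:  lsu lsstprod workflows     # equivalent to "ls ~lsstprod/workflows" on a SLAC machine
```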
- Absolute NFS file paths will be different. Using sshfs means every remote file system must have a local mount point. On central SLAC machines, "/nfs" works. However, sshfs documentation recommends that mount points be r/w by the user and, usually, /nfs is not such a candidate. So any scripts or aliases that use the "/nfs" path must be changed. [AFS/YFS is different in that, if you elect to have the client installed, the absolute paths will look identical to those on a public SLAC machine.]
** WORKAROUND: On a single-user workstation in the SLAC network, the following example shows how to preserve a customary absolute NFS path using a symbolic link:

```
mkdir -p /home/dragon/nfs/farm/g/lsst      # user-owned mount point under $HOME
sudo ln -s /home/dragon/nfs /nfs           # make /nfs resolve to the user-owned tree
sshfs dragon@rhel6-64:/nfs/farm/g/lsst /nfs/farm/g/lsst
```
- Access to AFS home directories can proceed either via an absolute path, e.g., `/afs/slac/u/...`, or one can create a symbolic link to recover the familiar `/u/ec/dragon/...` path:

```
sudo ln -s /afs/slac.stanford.edu/u /u
```
- Lots of SLAC-written and SLAC-specific commands are no longer available locally, e.g., everything in /usr/local/bin.
** WORKAROUND: Create an alias in your .bashrc to prefix your favorite SLAC command(s) with "ssh rhel6-64 ", e.g.:

```
alias person='ssh rhel6-64 person '
```
- Printing is currently possible via the unix print server, but I've heard rumors that this service might be deprecated and replaced with a Windows-based system. Also, the current print config in use on comet2 is very rudimentary and needs further thought. It does not, for example, know about printer-specific functions & capabilities, such as faxing, duplex printing, oddball paper sizes, etc.
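For context, such a rudimentary configuration amounts to a hand-built CUPS queue of roughly the following form. This is only a sketch: the queue name, print-server hostname, and PPD path are placeholders, not the actual comet2 settings.

```
# Hypothetical manual CUPS queue (hostname, queue name, and PPD path are placeholders)
sudo lpadmin -p canon-c5255 -E \
     -v lpd://unixprint.slac.stanford.edu/canon-c5255 \
     -P /usr/share/cups/model/canon-c5255.ppd
sudo lpadmin -d canon-c5255                              # make it the default destination
lpoptions -p canon-c5255 -o sides=two-sided-long-edge    # duplex must be enabled by hand
```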
** FIX: The "BrightQ" print drivers for Canon printers are straightforward to install, interface seamlessly with CUPS, and offer all the features of my printer (a Canon C5255). There is a bit of rigmarole involved (one must "register" twice, once for download and again for installation), but in the end it worked well. Get the drivers here: https://www.codehost.com/canon/
- Many users will need a moderately-to-highly customized application repertoire to work well for them. The application list above is acceptable for my (TG) work needs. But there are items that even I need only rarely, and it is not clear whether it is better to seek them out and install them locally, or to simply log into a public login machine to use them. Here I am thinking of database tools, advanced development tools, TeX (and friends), more sophisticated printing capabilities, etc.
- While for many activities it is desirable to work locally, one will still need to log onto a public SLAC login machine (think licensed software, certain computing resource management functions, dealing with PPI, etc.). There are certain files and directories that I would like synchronized between the desktop machine and my SLAC environment (such as ssh keys, personal logbook, app configurations). Possibly a cron job would do the trick, but then which copy becomes the master? I would like a smart synchronizer that allows either environment to make changes that will then be reflected in the other environment; one possible approach is sketched below.
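A two-way synchronizer such as unison could cover this. The following is only a sketch: the host rhel6-64, the remote home path, and the chosen file list are examples, and unison would need to be installed at compatible versions on both machines.

```
# Hypothetical two-way sync of selected files between the local $HOME and the
# SLAC home directory reached through rhel6-64 (paths are examples).
unison -batch -auto ~/ ssh://rhel6-64//u/ec/dragon \
       -path .ssh/config -path .gitconfig -path logbook
# To run it hourly, add a crontab entry (crontab -e):
#   0 * * * *  unison -batch -auto ~/ ssh://rhel6-64//u/ec/dragon -path .ssh/config -path .gitconfig -path logbook
```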
References
- SLAC minimum security requirements: https://docs.slac.stanford.edu/sites/pub/Publications/701-I02-001-00_Min_Sec_Req_for_Comp.pdf
- Stanford minimum security requirements: https://uit.stanford.edu/guide/securitystandards
- SLAC support for Linux: Ubuntu/CentOS 7 Desktop Scope of Support
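The following Python script generates a list of Run2.1.1i sim FITS files for the tracts of interest, based on the overlaps table in the tract2visit.db sqlite3 file: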
```python
import datetime
import argparse
import os          # needed for os.path.join below (missing from the flattened original)
import sqlite3     # needed by the overlap class (missing from the flattened original)

__version__ = '1.0'   # assumed; referenced by argparse but not defined in the flattened original

## These are the tracts of interest for the end-to-end data set
tracts = '3636,3637,3638,3639,3830,3831,3832,4028,4029,4030,4229,4230,4231,4232'
#tracts = '3636'

## raftIDs are used when converting to/from detector numbers
raftIDs = ['R01','R02','R03','R10','R11','R12','R13','R14','R20','R21','R22','R23','R24','R30','R31','R32','R33','R34','R41','R42','R43']


class overlap(object):
    ## Table overlap contains: [id,tract,patch,visit,detector,filter,layer]
    overlapSQL = "select distinct visit,detector,filter from overlaps where tract in ($1) order by visit,detector;"
    #TEST# overlapSQL = "select distinct * from overlaps order by visit,detector;"

    def __init__(self,dbfile='tract2visit.db',tractList=None):
        print("Hello from overlap.init()")
        ## Instance variables
        self.dbfile = dbfile
        self.tractList = tractList
        print('dbfile = ',self.dbfile)
        print('tractList = ',self.tractList)
        self.dbInit = False
        return

    def __del__(self):
        ## Class destructor
        if self.dbInit:            # close only if the connection was actually opened
            self.con.close()
        self.dbInit = False
        return

    def initDB(self):
        ## Open sqlite3 DB file and create cursor
        self.con = sqlite3.connect(self.dbfile)   ## connect to sqlite3 file
        self.con.row_factory = sqlite3.Row        ## optimize output format
        self.cur = self.con.cursor()              ## create a 'cursor'
        self.dbInit = True
        return

    def closeDB(self):
        self.con.close()
        self.dbInit = False
        return

    def stdQuery(self,sql):
        if self.dbInit == False: return
        print('SQL = ',sql)
        ## Perform a query, fetch all results and column headers
        result = self.cur.execute(sql)
        rows = result.fetchall()   # <-- This is a list of db rows in the result set
        ## This will generate a list of column headings (titles) for the result set
        titlez = result.description
        ## Convert silly 7-tuple title into a single useful value
        titles = []
        for title in titlez:
            titles.append(title[0])
            pass
        return rows,titles

    def run(self):
        self.initDB()
        rows,titles = self.stdQuery(self.overlapSQL.replace('$1',tracts))
        self.closeDB()
        return rows, titles


def d2rs(detector):
    ## Convert a DM "detector number" to raft/sensor format, e.g., "R22" "S11"
    det = int(detector)
    if det > 188 or det < 0: raise Exception("Bad detector number")   # valid detectors are 0-188
    raft = int(det/9)
    raftID = raftIDs[raft]
    s1 = det%9
    s2 = int(s1/3)
    s3 = s1 % 3
    sensorID = f'S{s2}{s3}'
    return raftID, sensorID


def rs2d(raftID,sensorID):
    # Convert a raft sensor string of form "Rnn" and "Smm" to a DM
    # "detector number" (int from 0 to 188)
    raft = raftIDs.index(raftID)
    det = int(raft)*9+int(sensorID[-2])*3+int(sensorID[-1])
    return det


if __name__ == '__main__':

    ## Define defaults
    defaultFile = 'tract2visit.db'

    ## Parse command line arguments
    parser = argparse.ArgumentParser(description='Generate sim file list based on tract overlap')
    parser.add_argument('-o','--overlapsFile',default=defaultFile,help='Name of overlap db file (default = %(default)s)')
#    parser.add_argument('-n','--nth',default=0,type=int,help='Desired visit, counting from beginning of sorted list (default=%(default)s)')
    parser.add_argument('-v','--version', action='version', version=__version__)
    args = parser.parse_args()

    myo = overlap(dbfile=args.overlapsFile,tractList=tracts)   # honor the -o/--overlapsFile option
    rows,titles = myo.run()

    print('table columns = ',titles)
    #xtract=titles.index('tract')
    xvisit=titles.index('visit')
    xdet=titles.index('detector')
    xfilt=titles.index('filter')
    print('#sensor-visits returned = ',len(rows))

    # Sample sim FITS file naming:  lsst_a_425529_R14_S00_y.fits
    fileList = []
    visitList = []
    tractList = tracts.split(',')
    print(tractList)
    n = 0
    simpre='/global/projecta/projectdirs/lsst/production/DC2_ImSim/Run2.1.1i/sim/agn-test'

    ## Write out a file of sim filenames
    vz = open('vizList.txt','w')
    for rowz in rows:
        n += 1
        row = list(rowz)
        #print('row = ',row)
        viz = row[xvisit]
        if row[xvisit] not in visitList: visitList.append(viz)
        rr,ss = d2rs(row[xdet])
        file = 'lsst_a_'+str(viz)+'_'+rr+'_'+ss+'_'+str(row[xfilt])+'.fits'
        viz8 = f'{viz:08}'
        if viz < 445379:
            file = os.path.join(simpre,'00385844to00445379',viz8,file)
        else:
            file = os.path.join(simpre,'00445379to00497969',viz8,file)
            pass
        vz.write(file+'\n')
        fileList.append(file)
        if n<20: print(file)
        pass

    vz.close()
    print('There are ',len(visitList),' visits.')
    print('There are ',len(fileList),' files.')

# Example output paths:
#/global/projecta/projectdirs/lsst/production/DC2_ImSim/Run2.1.1i/sim/agn-test/00385844to00445379/00425529/lsst_a_425529_R14_S00_y.fits
#/global/projecta/projectdirs/lsst/production/DC2_ImSim/Run2.1.1i/sim/agn-test/00445379to00497969/00457716/lsst_a_457716_R10_S10_i.fits

#    for file in fileList:
#        print(file)
```
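Assuming the script above is saved as, say, tract2visit_filelist.py (the file name is an assumption, not given on this page), it would be run with a Python 3 interpreter:

```
python3 tract2visit_filelist.py -o tract2visit.db   # writes vizList.txt in the current directory
```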