LCLS Control Software Meeting Minutes, September 21, 2006


Attendees:

Arturo Alarcon

Debbie Rogind

Diane Fairley

Doug Murray

Hamid Shoaee

Kristi Luchini

Mike Stanek

Mike Zelazny

Patrick Krejcik

Sergei Chevtsov

Sheng Peng

Jim Knopf

Stephanie Allison

Stephen Norum

Stephen Schuh

Steve Lewis

Till Straumann (absent)

Mike Browne

Dayle Kotturi

Terri Lahey

Jingchen Zhou

Agenda:

  1. EPICS Startup Screen - Invitation to Compete
  2. Network and Server Status Update for Operations

Minutes:

  1. Hamid opened the meeting by returning to a topic from a previous meeting: the need for a main startup screen for our control software.
    1. He announced a competition for the creation of the initial startup application. The prize would be a XXXXXXX of XXXXXXXXXX.
    2. It would be used to launch all LCLS related applications, and could itself be started as a standalone program or from elsewhere including an SCP display.
    3. He showed some examples from SLAC's SCP screens, and from the SNS startup display.
    4. Ideally, the new startup screen would provide access to all geographic regions and to any functional aspect of operation, either in graphic form or just as text.
    5. Any format for the display is acceptable, although EDM is preferable; it could be written in Java, Tcl/Tk, C++, even COBOL.
    6. For the competition, he said the team would decide the winner.
    7. The work is due by Monday, October 16, and should be able to support safety systems, existing EPICS tools, the Archiver, alarms, high level applications and more.
    8. Hamid also suggested we could discuss the control panel hierarchy and other startup applications at a future meeting.
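The launcher idea in item 1 (one entry point that starts every LCLS application, runnable standalone or from an SCP display) could be sketched as a small dispatcher script. The install path, panel file names, and the `pick_panel` helper below are all hypothetical placeholders, not actual LCLS conventions:

```shell
# Sketch of a startup launcher that maps a menu choice to an EDM panel.
# EDM_TOP and the .edl file names are placeholders.
EDM_TOP="/usr/local/lcls/edm"

pick_panel() {
    # Return the panel file a given launcher entry should open.
    case "$1" in
        alarms)   echo "$EDM_TOP/alarms.edl" ;;
        archiver) echo "$EDM_TOP/archiver.edl" ;;
        *)        echo "$EDM_TOP/lcls_main.edl" ;;
    esac
}

# The real launcher would then run, for example:
#   edm -x "$(pick_panel alarms)"
pick_panel alarms
```

A script of the same shape could equally wrap Java or Tcl/Tk tools; the point is only that a single dispatcher can front all LCLS-related applications.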
  2. Terri then gave an update to the plans for network and server support, relating to the upcoming commissioning schedule.
    1. See slides for complete list of topics covered, and the network architecture. Items that were discussed follow.
    2. It was pointed out that the DMZ would be a "SLAC-only" subnet: it would be visible only to other SLAC sites and would only be able to directly reach other SLAC sites; it will not be directly on the Internet. This is similar to our lcls-dev server. Most machines on this subnet will be Taylored with AFS access. The subnet has a maximum of 256 nodes and will include Control Room workstations (operator stations) and at least 2 application servers running Red Hat Enterprise Linux 4.
    3. MATLAB will run on the application servers or workstations.
    4. Terri mentioned that LCLS Logbooks are defined on MCC Elog, and an LCLS area is defined in Artemis (used for trouble reporting and accelerator planning).
    5. Kristi asked about the location of the Oracle server. Jingchen said it is in the MCC computer room. It is used for ELog. Terri: The Oracle daemon is installed locally, so it does not need AFS access. Login to the machine by system administrators for debugging does need AFS access.
    6. Hamid asked if IRMIS support would be there as well. Terri said the current IRMIS is on SCCS servers; if needed, we can move the tables to the MCC Oracle server.
    7. Do we have plans to integrate Andrea's database? Terri: We have not received any requirements; let us know what is needed.
    8. Stephanie asked if physicists with Linux laptops would access our IOCs from outside via the gateway (Answer: yes). Will Channel Access be used from the DMZ to the MCC CA server to access the MCC database? (Answer: available if needed; MCC will have a NIC on LCLSDMZ.) AIDA will also be used. Can LCLS write via AIDA? (Hamid: Greg is implementing this.) A proxy for SLC IOCs needs to be opened to allow access to PX00 (yes). Also, a laptop on the Utility subnet is to use the PV gateway.
    9. Terri mentioned there were now 14 SLC-aware IOCs on the list, with more coming. Keep Stephanie and Nancy Spencer informed of any new SLC-aware IOC. Some lead time is needed to define them in the MCC database, and in some cases the number of SLC-aware IOCs is limited (due to the mapping of devices in the accelerator).
    10. Sheng asked if other Network Accessible Devices such as PLCs, motion controllers or oscilloscopes would reside on the Utility subnet. Printers cannot be on the same subnet. We decided to create a subnet named LCLSINST for instruments. Do the terminal servers and crate power move to LCLSINST?
    11. Kristi asked what criteria would be used to determine nodes on the Utility subnet. Can a router be configured to allow direct access to a Utility subnet node from a Channel Access node? Terri: all the 172.27 subnets route between each other and to LCLSDMZ and LEB.
    12. Steve: Are we using routers or switched vlans for traffic between 172.27 subnets? Terri: will check. What are the timing requests and for what subsystems/traffic? Talk about this more offline.
    13. It was pointed out that an isolated switch would be used for non-IP dedicated communication with LLRF and BPM equipment. Sheng: Can we disable Cisco management traffic or use a cheap switch? Terri: We are using Cisco switches since they are used throughout SLAC; I can check on the plan for disabling management traffic.
    14. How do the IOC consoles connect to the terminal servers? Steve: the green lines are the serial console connections. Steve asked how one would access the terminal servers to gain access to an IOC console. Can there be a non-privileged account on the terminal servers to view ports? Terri: Does the group want to use iocConsole? She will check on accounts.
    15. Dayle asked if thought had been given to the CA_ADDR_LIST environment variable in the designs. Terri: Yes.
    16. Kristi asked if non-Channel Access devices could be categorized separately. Sheng suggested it could be a separate VLAN from the CA one. Terri: Yes. LCLSCA is for CA devices.
    17. Stephanie asked about writing files from the DMZ subnet; she was thinking specifically about AFS tokens expiring after several hours. Terri: a small working group is deciding on NFS plans now. Jingchen is working with LCLS on this offline; we can bring a solution to a future meeting for discussion.
    18. A question arose regarding access to web-based interfaces to VME crates, PLCs, power reboot units, terminal servers and other network devices. Terri suggested we could discuss this offline.
    19. There was a question to confirm that the Linux servers will be 32-bit versions running on 64-bit machines. Jingchen said that was the case, and would be until we were ready to change.
    20. Terri mentioned that we should all be aware that physicists want to use Windows-based PCs. Hamid confirmed that the group was aware, and mentioned they will have Linux machines too.
    21. Steve asked about using different port numbers for development and testing, from those in the operations environment. Terri: Yes. IOCs use standard production port numbers that are different from development port numbers. The PV gateway and softIOCs use non-standard production port numbers. MCC also does this for PEP. Question: what other subsystems need non-standard production ports?
    22. It was mentioned during Hamid's presentation that the command server was currently only compiled to run on Solaris. We will port to LINUX if LCLS wants this. (Note this is needed if any SCP panel (index or functional panel) wants to launch an EPICS display.)
    23. Doug asked where EDM would typically execute. Jingchen said on a control room workstation, although it could run on an application server as well.
    24. SLC-aware IOCs and workstations have been added to LCLSDMZ. But wherever possible, put IOCs and other nodes on the private subnets (172.27.0.0).
    25. Dayle said booting from AFSNFS2 often fails. Doug: who has trouble booting from AFSNFS2? Terri: If you can put your nodes on the private subnets and boot from MCCNFS, do this. We are creating a plan to allow you to move IOCs back to booting from AFS, if needed.
    26. Terri then described a migration plan from AFS to standalone operation at a future date. Kristi asked about configuration files. Terri: The configuration files for applications are listed in the EPICS application section. Note: config files for the HLA config system are on MCC.
    27. She mentioned that eventually none of the workstations would be Taylored; we would have some servers that are Taylored. Are there any SCCS functions needed? Mike Stanek noted that browsers sometimes need to download plugins.
    28. Kristi asked about CMLog. Terri said it is being worked on.
    29. Stephanie said that IOCs would need DNS and NTP. Terri said they would be available, and currently exist on MCC.
    30. Patrick asked how backups would be done. Terri said there was no need since most of the data is copied from originals that are backed up.
    31. Stephanie asked about acquired data, and Jingchen suggested a backup scheme would be implemented for that. Terri said we could discuss the details offline.
    32. Kristi mentioned that some Linux machines will need access to serial ports, which Taylor currently disables. A recent fix from SCCS is documented on the Wiki. Terri: What is this needed for?
    33. Sheng suggested that read-only access to Cisco switch settings would be very useful. Terri will check.
    34. Doug suggested we allocate a pool of IP addresses for quick deployment of test or calibration devices, since the current form-with-signature process seems too time consuming. Terri said that is not their policy; they can allocate groups of node names/addresses for production nodes.
    35. Terri mentioned that SunRay screens are working now, and encouraged people to come to MCC and test them.
    36. Doug asked about the status of the Linux workstations, and Jim said they have been working but are not currently available; he will get them online within a week.
    37. Diane asked if there would be room for laptops, since the physicists would like to run MATLAB programs. Terri said that wireless access is available. Hamid said that Linux laptops would have space available inside the control room and in the foyer.
    38. Hamid also mentioned that control engineers would need to be available during day shift. Mike asked about the details of the work, and if other duties could be performed. Hamid said the details would be forthcoming, but engineers would indeed be able to do their regular work when not engaged in commissioning work.
    39. Kristi asked whether history buffers or the Archiver would be used. Hamid said that was yet to be determined.
    40. Terri mentioned that a separate MATLAB application server might be required.
    41. Terri: what Ethernet traffic is your subsystem using, and between which types of nodes?
    42. The Archive Viewer is built and the servers are configured. The final network cable is being terminated at MCC so these can be put on the network. The Archive Viewer will be tested standalone. PX00 testing is ongoing.
    43. See the slides for other status items and topics.
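The CA_ADDR_LIST question in item 2.15 concerns how EPICS Channel Access clients locate servers across routed subnets: auto-addressing only broadcasts on local interfaces, so on a routed network clients typically list the remote subnet broadcast addresses or a PV gateway explicitly. A minimal sketch of the client-side settings, with a placeholder address that is not the real LCLS value:

```shell
# Hedged sketch of client-side Channel Access addressing (item 2.15).
export EPICS_CA_AUTO_ADDR_LIST=NO          # do not probe only local interfaces
export EPICS_CA_ADDR_LIST="172.27.8.255"   # placeholder: CA subnet broadcast or gateway
echo "CA name searches directed to: $EPICS_CA_ADDR_LIST"
```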
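The development/production separation described in item 2.21 is usually done in EPICS by running on non-default Channel Access ports (the defaults are 5064 for the server and 5065 for the repeater), so that development clients cannot accidentally reach production IOCs. A sketch with placeholder development port numbers:

```shell
# Hedged sketch of separating development from production CA traffic
# by port number (item 2.21).  6064/6065 are placeholder dev ports,
# not necessarily the ones MCC assigns.
export EPICS_CA_SERVER_PORT=6064       # placeholder development server port
export EPICS_CA_REPEATER_PORT=6065     # placeholder development repeater port
echo "development CA server port: $EPICS_CA_SERVER_PORT"
```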