We had a few fixes and feature updates for the PMPS UI diagnostic tool in July.
The following environments were released during May, June, and July:
pcds-5.4.2 is the first environment with "gentler" dependency updates to minimize the potential for picking up unexpected behavior on update.
Note that anyone using pcds-5.3.1 should update before using hutch-python in an experiment setting. That version has a bug with dangerous consequences: the histories of separate IPython sessions can get mixed during execution, so someone else's "move" command can end up in your history, and it is very easy to accidentally run their command instead of re-running yours.
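If updating immediately isn't an option, one workaround idea (a sketch only, and NOT the actual pcds-5.4.x fix) is to point each IPython session at its own private history database so commands from other sessions can never appear in yours. The paths below are illustrative:

```python
# Workaround sketch (assumption, not the real fix): write an ipython_config.py
# that gives this session a private history database. IPython loads
# ipython_config.py from the active profile directory (profile_default/).
import os
import tempfile

hist_dir = tempfile.mkdtemp(prefix="ipython-hist-")
config_path = os.path.join(hist_dir, "ipython_config.py")
with open(config_path, "w") as f:
    f.write(
        "# Private, per-session history database instead of the shared default\n"
        f"c.HistoryManager.hist_file = {os.path.join(hist_dir, 'history.sqlite')!r}\n"
    )
print(config_path)
```

Placing a file like this in the profile directory keeps up-arrow history isolated per session, at the cost of losing shared history across sessions.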
We've been tracking down why various Python apps are slow to load and minimizing the delays we can control. We've already found a number of speed-ups, and work is ongoing. Some of these are tricky: while startup speed is important, a design tradeoff that adds slowness later can be just as bad.
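One of the simplest ways to see where startup time goes (a sketch of the general technique, not necessarily the exact tooling used in this effort) is CPython's built-in `-X importtime` flag, which logs per-module import cost to stderr:

```python
# Sketch: run a child interpreter with -X importtime and report the slowest
# imports by cumulative time. "import json" stands in for a real application.
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-X", "importtime", "-c", "import json"],
    capture_output=True,
    text=True,
    check=True,
)

rows = []
for line in result.stderr.splitlines():
    if not line.startswith("import time:"):
        continue
    parts = line[len("import time:"):].split("|")
    # Skip the header row ("self [us] | cumulative | imported package").
    if len(parts) != 3 or not parts[1].strip().isdigit():
        continue
    rows.append((int(parts[1]), parts[2].strip()))

# Print the five slowest imports by cumulative microseconds.
for cum_us, name in sorted(rows, reverse=True)[:5]:
    print(f"{cum_us:>8} us  {name}")
```

Swapping the `-c` argument for an application's real entry point quickly shows which dependency chains dominate load time.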
A project page has been opened and work is ongoing: Hutch Python/Lightpath Device Loading Slowdown Findings/Fixes
Margaret Ghaly, Vincent Esposito
Over the past couple of months, the ECS HE team has made significant progress on the new controls infrastructure designs. The cost estimates for each hutch have been mostly completed, and we are approaching PDR-level readiness in many areas. The PDR is now planned for September 2022 (exact date TBD).
The team is working on refining the designs and continues to focus on the PDR, collaborating closely with the mech-E leads and Cosylab.
Mitch Cabral, a new member of ECS, has joined the effort and is working on integration with Jama, a collaborative web-based tool for reviewing, approving, and exporting system requirements for release. We have begun capturing the L2SI Motion and Vacuum System Functional Requirement Specifications (FRS). Although these requirements are still being reviewed and revised, the hope is to make these documents more accessible for projects to follow, and possibly to establish them as standards for our control systems. Transferring to Jama will also make it easier for stakeholders and the team to find conflicts when upstream or downstream requirements change in future projects.
We released a major ICD detailing the R2A2 (roles, responsibilities, accountabilities, and authorities) between ECS and other groups in HE. It details many important divisions of work, and everyone should at least skim it to better understand what ECS does:
https://slac.sharepoint.com/sites/pub/Publications/LCLSII-HE-1.4-IC-0488.pdf
We've been working to rebuild the lightpath application, a tool that aims to give a high-level summary of the beam: where it's pointing and which devices are blocking it. In order to properly represent the facility, significant changes were made both to how lightpath organizes devices and to how those devices are represented. As of this writing, we have completed the major infrastructural changes to lightpath, implemented a new device interface, and begun to spot-check the app's behavior for select end stations.
Design details and FAQs are being gathered at this page, which (like the lightpath app) is a work in progress.
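As a hypothetical illustration of the kind of per-device interface a lightpath-style tool needs (none of these names are lightpath's actual API), each device along a path reports whether it is inserted and how much beam it passes:

```python
# Hypothetical sketch: summarize which devices along a beam path are blocking.
# Device names and the interface are made up for illustration.
from dataclasses import dataclass


@dataclass
class BeamDevice:
    name: str
    inserted: bool
    transmission: float  # 0.0 blocks the beam entirely, 1.0 passes it fully


def blocking_devices(path, threshold=0.5):
    """Names of devices that are in the beam and cut transmission below threshold."""
    return [d.name for d in path if d.inserted and d.transmission < threshold]


path = [
    BeamDevice("im1k0", inserted=False, transmission=0.0),  # removed: harmless
    BeamDevice("mr1k1", inserted=True, transmission=1.0),   # in, but transparent
    BeamDevice("st1k2", inserted=True, transmission=0.0),   # in and blocking
]
print(blocking_devices(path))  # ['st1k2']
```

The real application has to handle branching beamlines and partial transmission, but the core question per device is the same: is it in, and does it block?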
atef has seen some improvements since our last update:
There is also much work left to do. Next on our list is a final restructuring of the passive check mechanism. This effort, now underway, will give users more flexibility to group their checks in intuitive ways. We will also integrate dynamic values from a variety of sources into the comparison mechanism, meaning that PV-to-PV comparisons will be possible.
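To make the "dynamic values" idea concrete, here is a minimal sketch of a comparison whose two sides are both read live (e.g., a setpoint PV against a readback PV) rather than compared against a constant. None of these names are atef's real API:

```python
# Hypothetical sketch of a dynamic (live-vs-live) comparison, as opposed to
# comparing a live value against a fixed constant. Not atef's actual API.
from dataclasses import dataclass
from typing import Callable


@dataclass
class DynamicComparison:
    """Compare two live-read values to within an absolute tolerance."""
    left: Callable[[], float]   # e.g. lambda: caget("SETPOINT_PV")  (assumption)
    right: Callable[[], float]  # e.g. lambda: caget("READBACK_PV")  (assumption)
    atol: float = 0.0

    def check(self) -> bool:
        return abs(self.left() - self.right()) <= self.atol


# Usage with stand-in callables instead of real PV reads:
cmp = DynamicComparison(left=lambda: 1.00, right=lambda: 1.02, atol=0.05)
print(cmp.check())  # True: the two values agree within tolerance
```

The appeal of this shape is that the constant-comparison case falls out for free: one side is simply a callable that returns a fixed value.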
SLAC hosted the May 2022 EPICS Codeathon. There were 3 separate sessions: EPICS core (C/C++), Java tools and extensions, and Python tools and extensions.
On-site and remote participants for the 2022 EPICS Codeathon (Monday, May 9th 2022)
SLAC saw many on-site participants, as well as remote participants from 20 institutions. The following table breaks down participants by session and by remote versus on-site attendance:
| Track        | Total | On-site | Remote |
|--------------|-------|---------|--------|
| Core (C/C++) | 30    | 13      | 17     |
| Java         | 7     | 2       | 5      |
| Python       | 16    | 13      | 3      |
| Totals       | 53    | 28      | 25     |
Andrew Johnson hosted the core team, Kunal Shroff hosted the Java team, and ECS controls engineer Ken Lauer hosted the Python session.
The Python session tracked our results on GitHub and ended up fixing and working on an impressive (if I do say so myself) number of things over the course of a few days:
| Project       | Contributors | Total Issues | Total PRs | Merged/Resolved |
|---------------|--------------|--------------|-----------|-----------------|
| adl2pydm      | 1            | 2            | 2         | 1               |
| happi         | 1            | 4            | 4         | 3               |
| ophyd         | 4            | 7            | 7         | 6               |
| pmps-ui       | 1            | 1            | 1         | 1               |
| pyca          | 1            | 1            | 1         | 1               |
| pydm          | 9            | 19           | 19        | 14              |
| pythonSoftIoc | 1            | 1            | 0         | 0               |
| timechart     | 3            | 10           | 10        | 10              |
| typhos        | 1            | 3            | 2         | 3               |
| whatrecord    | 1            | 1            | 1         | 1               |
| Total         | 15           | 49           | 47        | 40              |
Special thanks to all of the participants, on-site and remote, for helping bring the community together and fix/enhance so many projects.
For information on the other sessions, please see the attached summary slides:
The New Alarm System (NALMS) is almost ready for deployment. After the latest updates to the system, the testing phase is about to start. Thanks to Thorsten, Omar, Jesse, Ken, Michael, and Victor for all the effort they have put into it.
The GMD and XGMD NALMS instances will be used for testing purposes. To create a dedicated NALMS deployment for the GMD and XGMD, several steps were followed:
The picture shows the first attempt at the Grafana board for NALMS GMD-XGMD.
In the future, more PVs will be added to NALMS, grouped by subsystem (e.g., vacuum, power, common components). To keep possible faults easy to understand and track, it is essential to include in the alarm list only the PVs that matter for operations, so the list should be kept as short as possible. Right now, DOE summer student Samara Steinfeld is grouping the most critical PVs for each subsystem in the EBD and in the FEE area. You can follow the progress of the NALMS deployment on the NALMS Confluence page.
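The grouping-and-filtering step above can be sketched in a few lines. This is purely illustrative (the PV names and the "critical" flag are made up, and the real NALMS configuration is not a Python script):

```python
# Illustrative sketch: group candidate alarm PVs by subsystem, keeping only
# those flagged as operation-critical so the alarm list stays short.
# PV names and flags below are invented for the example.
candidates = [
    {"pv": "GMD:VAC:GAUGE:01:PRESS", "subsystem": "vacuum", "critical": True},
    {"pv": "GMD:PWR:SUPPLY:01:VOLT", "subsystem": "power", "critical": True},
    {"pv": "GMD:TEMP:SENSOR:07:RBV", "subsystem": "common", "critical": False},
]

alarm_list = {}
for entry in candidates:
    if entry["critical"]:
        alarm_list.setdefault(entry["subsystem"], []).append(entry["pv"])

for subsystem, pvs in sorted(alarm_list.items()):
    print(subsystem, pvs)
```

Filtering at curation time, rather than silencing alarms later, is what keeps the operator-facing list trustworthy.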
Mirror: Nick Waters
Vacuum: Jing Yin, Tong Ju
Motion: Maarten Thomas-Bosum, Zachary L Lentz
PMPS: Margaret Ghaly, Tong Ju
Image: Tong Ju, Janez Govednik
Scientists will run the DREAM mirror checkout experiment and test all PMPS and veto groups at the same time.
TMO had a strange problem back in April: whenever they moved IM4K4 in or out, the stand would vibrate violently, to the point of crashing a nearby turbo pump, which was extremely disruptive to endstation operations.
Stepper motors are known to have problems with resonance and high vibration. In industry, you would typically pick servo motors if extremely smooth motion were a requirement. That isn't a requirement here, but "do not shake the stand when moving" is absolutely a reasonable ask.
All of the PPM imager/power-meter combination units were tuned based on a single instance before it was installed on the beamline. As such, the parameters were optimized for a lab bench in the temporary clean room in B750 rather than for the beamline as installed. In the rush to close out the installation of the L2SI components, this was never revisited.
In general, we expect every instance of a motor to have slightly different tuning needs depending on exactly how the assembly was installed. This tuning won't always be taken to completion, because often there are no problems, and because the most important specifications for the majority of our motors are the accuracy and precision of the end position, along with motion reliability; vibration during the move, as in this case, is not covered by those specifications.
The PPM units were observed to vibrate more than expected in the FEE, RIX, and TMO, culminating in this turbo pump incident. So, we took some time to do the following for every installed/active PPM:
After these adjustments, the PPMs are silky smooth, quiet, and reach their destinations reasonably quickly.
ECS added a set of steps to our SDL curation page to give insight into how we would like other teams to work with us to add a new device to the list. We also plan to write up a process for how we'll go about changing an existing device's status. Please take a look at this process to learn about how you can get a device added to the list, and let us know what you think!
ECS Supported Devices Curation
ECS and ME are working on arranging subject-matter-expert teams to collaborate on our system architectures and on curation of our supported device lists. They will also work on engineering design templates, e.g., how to specify and select mechatronic actuators, or how to arrange a vacuum system to use existing, already-tested interlock logic.
LCLS has never had a single source of truth for beamline configuration (the arrangement and state of components in the beamline). This has led to more than one incident in which equipment was added to the beamline without notification or proper planning, forcing last-minute mitigation. Projects also suffer, since an up-to-date as-built record of the beamline configuration is mythical; in some cases the beamline GUI is the best and only as-built reference.
ECS and ME (together, you could call us LCLS engineering) have been collaborating since early 2022 to develop requirements for a database system to address this issue. We considered a number of options, including using an existing system from the AD. Our requirements aim for a highly collaborative platform with an excellent API, with ease of use as the key to ensuring an accurate as-built record. Essential information includes x, y, z, functional component name, and status (planned, installed, commissioned, etc.), with the ability to add fields as desired. We plan to link … Development started in June and is proceeding. Learn more here.
Antonio Gilardi has joined the ECS Delivery group as the new MFX and UED Point of Contact.
Mitchell Cabral has joined the ECS Platforms Development team as a Control System Integrator, focused on supporting developing projects (primarily L2HE and MEC-U). Mitchell is a recent graduate of CSU, Chico and worked with SLAC for the past year to develop a prototype computer-vision robot for Dr. Diling Zhu in XPP. His hobbies and interests include cooking, the Sacramento Kings, volleyball, rock climbing, and hiking to (and swimming in) large bodies of water with a nice beverage.
Divya Thanasekaran has joined the ECS Delivery group as a Staff Software Engineer and the new CXI Controls and Data Point of Contact. Divya has a master's in Computer Engineering from New York University, and prior to joining SLAC she was the Lead Device Control Software Engineer for the Primary Mirror Control System of the Giant Magellan Telescope. In her time there, she was involved in subsystem-specific resource priorities and scheduling, and carried the software from design inception through design reviews, testing and test readiness reviews, and initial successful system test campaigns. Her background is in embedded systems, C/C++, and control software using EtherCAT. She loves to hike and run, and is currently learning to play tennis.
Christian Tsoi-A-Sue has joined the ECS Delivery group as a Staff Engineer 1; his current focus is providing support for CXI as the SEA. He studied robotics engineering and electrical engineering at UC Santa Cruz and graduated in 2019. His areas of interest are embedded systems/microcontrollers, C, and Python. In his free time he enjoys playing pickleball, watching movies, and playing Pokémon Go with friends.
Josue Zamudio Estrada is a Control and Data Systems Intern on the LCLS Exp. Control Systems Delivery Team. He studied Computer Engineering and recently graduated from UC Santa Cruz. In his free time he likes to skateboard, fish, and spend time with friends.
Lana Jansen-Whealey has joined the ECS Delivery Group as a long-term intern and is excited to continue learning about hutch instrumentation and software interfacing. She recently graduated from Cal Poly, San Luis Obispo with a physics degree and has been working at SLAC since July 12. She prefers to spend her free time visiting national parks, hiking, doing ballet, cooking, and making new friends!
We said goodbye and farewell to Maarten in July. He will be missed.
It should be noted that a huge quantity of our work is done on GitHub.com; all development is tracked there. Jira issues capture a significant body of work as well, but at least as much work is captured in the closure of GitHub issues associated with our various codebases. Unlike Jira, GitHub does not offer a built-in consolidated metric of work done over a past period without a paid subscription. Roughly speaking, over 80 projects have been touched since April 8th, with multiple changes of various sizes.
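That said, a rough consolidated count can be scripted against GitHub's public search API, unauthenticated and subject to rate limits. This is a sketch only; the organization name "pcdshub" is an assumption for illustration:

```python
# Sketch: count issues closed across a GitHub organization since a date via
# the public search API. "pcdshub" is an assumed org name for illustration.
import json
import urllib.parse
import urllib.request


def search_url(org: str, closed_since: str) -> str:
    """Build a GitHub search-API URL for issues closed on/after a date."""
    query = urllib.parse.quote(f"org:{org} is:issue closed:>={closed_since}")
    return f"https://api.github.com/search/issues?q={query}&per_page=1"


def closed_issue_count(org: str, closed_since: str) -> int:
    """Fetch the total_count field from the search response (network needed)."""
    with urllib.request.urlopen(search_url(org, closed_since)) as resp:
        return json.load(resp)["total_count"]


# Network access required for the actual count, e.g.:
#   print(closed_issue_count("pcdshub", "2022-04-08"))
print(search_url("pcdshub", "2022-04-08"))
```

The same pattern works for merged pull requests by swapping the search qualifiers (e.g., `is:pr is:merged`).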