Coming back from the summer shutdown, the XRT HOMS had a couple of issues.

  1. Pitch piezos weren't activating properly.
  2. Gantry differences appeared on M3H and M1H.
  3. Some EPICS PVs were still missing.

Fixes and Notes

Piezos weren't working


This is an outstanding HOMS issue. Deadbands govern when the pitch state machine transitions from stepper to piezo. These deadbands are adjustable from EPICS. Their values are reset to zero following a power-cycle of the PLC.

The values have to be restored manually at this point. I wrote a little script and squirreled it away in the iocBoot directory for the xrt-homs IOC.
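The restore script itself lives with the IOC, but its shape is roughly the sketch below. Everything here is a hypothetical placeholder: the PV names, the deadband values, and the use of the `caput` command line tool.

```python
# Sketch of a deadband-restore script. The PV names and values are
# hypothetical placeholders, not the real xrt-homs deadband PVs.
DEADBANDS = {
    "XRT:M1H:PITCH:STEP_DEADBAND": 0.005,  # placeholder value
    "XRT:M3H:PITCH:STEP_DEADBAND": 0.005,  # placeholder value
}

def restore_commands(deadbands):
    """Build the `caput` invocations that would restore each deadband."""
    return [f"caput {pv} {value}" for pv, value in sorted(deadbands.items())]

for cmd in restore_commands(DEADBANDS):
    print(cmd)  # in real life these would be executed, not printed
```

In practice this runs once after a PLC power-cycle, before handing the mirrors back to operations.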

M1H Stepper Tuning

M1H still wasn't behaving, which was strange because it looked good earlier this summer. I modified the pitch control block to log a more specific error during the coarse move step, which uncovered that SmoothMover was throwing error 0x4B07, a move timeout. The drive's deadband was too large, so it was giving up before the NC was satisfied. I can't change that deadband easily (it requires connecting directly to the drive), so I opted to widen the Target and Position windows in the NC to avoid the error for now. The absolute move blocks in SmoothMover no longer throw errors, so the transition to the piezo succeeds.
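A toy model of that workaround, assuming nothing about the real NC or SmoothMover internals: a drive that stops as soon as it is inside its own (too large) deadband will time out against a narrow NC window, but the same stop counts as success once the window is widened past the drive deadband.

```python
MOVE_TIMEOUT = 0x4B07  # timeout error code reported by SmoothMover

def coarse_move(target, drive_deadband, nc_window):
    """Model a coarse move: the drive stops as soon as it is within its
    own deadband of the target; the NC accepts the move only if the
    final position lands inside its target/position window."""
    final_position = target - drive_deadband  # worst case: drive gives up early
    if abs(final_position - target) <= nc_window:
        return ("ok", final_position)
    return ("error", MOVE_TIMEOUT)

# Narrow NC window: the drive quits before the NC is happy -> 0x4B07.
narrow = coarse_move(target=100.0, drive_deadband=0.5, nc_window=0.1)
# Widened window (the interim fix): the same stop now counts as success.
widened = coarse_move(target=100.0, drive_deadband=0.5, nc_window=1.0)
```

The numbers are made up; the point is only that the NC window must be at least as wide as the drive deadband until the deadband itself can be tightened.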

I am still quite puzzled as to why this cropped up now...

We'll have to tighten the drive deadband at some point. The drives can reach roughly 1 µrad accuracy.

Gantry differences

After the power outage the system came back, but the drives had errors. A drive with an error is not allowed to apply any power, so until the errors are cleared the system can relax mechanically, leading to small gantry differences.

I could automate this recovery a bit and retry clearing errors and recoupling a few times. Maybe in another life. M1H had acquired a nice 0.3 mm of gantry difference with the power off. Fixing this is simple: use our DECOUPLE PV backdoor, reduce the gantry difference by hand, and recouple. *dusts off hands*
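That "maybe in another life" automation could look something like the sketch below. Everything here is hypothetical: the tolerance, the jog step, and the `read`/`jog` callables, which in real life would wrap caget/caput around the DECOUPLE backdoor.

```python
def recover_gantry(read, jog, tol=0.01, step=0.05, max_tries=20):
    """Reduce the gantry difference below `tol` (mm) by repeated jogs.

    `read()` returns the current gantry difference in mm; `jog(delta)`
    moves one gantry axis by `delta` mm. Both are stand-ins for the
    real EPICS access used after decoupling.
    """
    for _ in range(max_tries):
        diff = read()
        if abs(diff) <= tol:
            return True  # good enough: recouple here
        # Move at most `step` mm per iteration, toward zero difference.
        delta = -diff if abs(diff) < step else (-step if diff > 0 else step)
        jog(delta)
    return False  # give up and page a human

# Tiny simulation: an axis sitting 0.3 mm off, as M1H was.
state = {"diff": 0.3}
ok = recover_gantry(lambda: state["diff"],
                    lambda d: state.__setitem__("diff", state["diff"] + d))
```

A real version would also need to clear drive errors between attempts and bail out loudly if the difference grows instead of shrinks.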

I decided it would be worth 10 minutes to accumulate some archiver links, and put them into the HOMS Engineering and Ops Notes page. This makes it easy for us to assess the history of the system.


Missing EPICS PVs

I noticed that the M3H gantry difference wasn't present in the archiver and showed up purple on the screen. No good. Checking the st.cmd, the culprit turned out to be a failing Modbus port setup. A single Modbus read can only span 125 words, so the PVs trying to access the M3H gantry values were denied because those values live at addresses > 125...

The fix is to just open another Float port (float port 2), change the PV address offsets, and go on living your life.
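For context, the constraint can be sketched in a few lines of Python. This is illustrative only, not how the st.cmd is written: one read request covers at most 125 16-bit words from its starting offset, so registers past that window need a second port.

```python
MAX_WORDS = 125  # Modbus read-holding-registers limit per request

def assign_ports(offsets, max_words=MAX_WORDS):
    """Map each register offset to a port index, opening a new port
    whenever an offset falls past the current 125-word window."""
    ports = {}
    for off in sorted(offsets):
        # port 0 covers offsets 0-124, port 1 covers 125-249, and so on
        ports[off] = off // max_words
    return ports

# Made-up offsets: the M3H gantry values sat past word 125, so they
# land on a second float port while everything else stays on the first.
print(assign_ports([10, 50, 130, 140]))
```

The real fix is exactly what the text says: configure float port 2 in the st.cmd and shift the affected PV address offsets onto it.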

M.C. Browne alerted me to this really interesting document produced earlier this year at LCLS. Give it a read!

Documentation and Knowledge Retention in LCLS.PDF


The XRT had several disabled axes and nearly broken chassis that, as a result of rushed commissioning, were causing all kinds of hackery and finagling for ops. To make things better we decided to go through each chassis one by one and fix the outstanding issues, enabling all axes, including bending. This post goes over what was fixed and changed, and why.


  • Enable all axes
    • Fix all chassis
    • Fix all cables
    • Verify motion
  • Verify all drive settings and capture on google drive
  • Improve gantry tuning and control
  • Misc improvements

Enable all the axes

I made a bench-testing procedure while verifying the first chassis myself. I then trained Ivan to verify chassis on the bench and in the field. He worked independently with Danh Du to complete the checkout for M3, and Danh then completed the verification for the M2 installation with May Ling.

With each verified chassis we had a solid reference for finding mistakes in cables; this is how a loose wire on the bender motor cable for M1 was discovered. Other errors in the chassis were caught during bench testing. Sorting out these problems after chassis installation would have been painful (that's what we tried to do originally).

Ivan's Notes:

HOMS Chassis Bench testing Procedure

XRT HOMS M1 work log:

XRT HOMS M3 work log:


Bench test Chassis #1805527 (originally installed in XRT M3)

A dead piezo!

While recommissioning all the XRT HOMS we encountered a piezo failure on M2H. For no apparent reason the piezo died, and we had to scramble to replace it. Actually Corey and Daniele replaced it, while Teddy and I completed verification of all other axes. We don't understand why it failed. Here's the voltage from when it failed:

Improve gantry tuning and control


When moving the axes in gantry mode, a gantry difference would accumulate over time, ultimately leading to a parasitic pitch. The axes would move in sync pretty well, but each drive might give up on its own before actually reaching the final target. This led to a gantry error that would vary between ±10 µm. Not very good for pointing at the 30 nrad level!


Each gantry axis had a large settling window and an extremely short settling time. Tightening the window and lengthening the settling time let the axes settle a bit more, leading to a more accurate final position. Additionally, setting the holding current to something other than zero (0.5 A) lets the axes compensate for any remaining drift and maintain a sharp encoder position. Now the only concern is heat making its way into the rest of the system. We'll see how that turns out!


After some testing we found that the gantry error was maintained to <1 µm over several moves. A significant improvement.
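The tuning above amounts to a stricter in-position check. A toy version with made-up numbers: an axis only counts as settled once it has stayed inside the settling window for the full settling time (here modeled as a number of consecutive samples).

```python
def settled(samples, target, window, settle_count):
    """Return the index at which the axis first counts as in-position:
    `settle_count` consecutive samples within `window` of `target`.
    Returns None if it never settles."""
    run = 0
    for i, pos in enumerate(samples):
        run = run + 1 if abs(pos - target) <= window else 0
        if run >= settle_count:
            return i
    return None

# A wide window with a short settle time accepts this noisy trace almost
# immediately; the tightened settings make the axis wait until it has
# truly landed on the target.
trace = [9.7, 9.9, 10.3, 10.04, 10.02, 10.01, 10.0, 10.0]
loose = settled(trace, target=10.0, window=0.5, settle_count=1)
tight = settled(trace, target=10.0, window=0.05, settle_count=3)
```

The real parameters live in the NC axis settings; the trade-off is exactly the one in the text, accuracy versus time spent settling (plus the holding-current heat).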

Oops, looks like there is some weird drift there... It turns out that when we added guards for position-lag monitoring, they tripped the axes off due to some rough handling of the mirror enclosure. Resetting the axis did the trick:

But this means we need to implement the EPICS interface for resetting such faults easily, and diagnosing the problems.


Miscellaneous Improvements

EtherCAT Sync Groups

What are they?

EtherCAT frames carry a working counter (notice WC anywhere?) that each slave increments as it successfully processes the frame. The idea being, if any of the devices on your EtherCAT network aren't reachable, or there is some other problem, the working counter won't match the expected value when the frame returns to the master. The master then invalidates the entire bus, moving all of the EtherCAT devices to a non-operational state. Usually, if there is a problem with the bus, the slaves downstream of the problem will notice, but the slaves upstream of the problematic one will not, so the master must tell them something is wrong. That's why we have the working counter. Well-written PLC code also pays attention to the working-counter state, and its state machines go to a safe state when the counter indicates a fault.

You can configure your EtherCAT bus to include different slaves in different sync groups. This makes your design robust against functionally separate faults: we can essentially create firewalls in our EtherCAT bus, so if one branch of our topology has trouble, the others remain functional. For the HOMS this means we can assign branch 1 to M1, branch 2 to M2, and branch 3 to M3. If any branch has an issue, only that branch will be affected; the others will continue to run.
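A minimal model of the idea (not TwinCAT's actual API): with one sync unit per branch, each branch has its own expected working counter, and a mismatch invalidates only that branch instead of the whole bus.

```python
def faulted_branches(expected, actual):
    """Compare per-branch expected vs. returned working counters and
    return the set of branches whose process data must be invalidated."""
    return {branch for branch, exp in expected.items()
            if actual.get(branch) != exp}

# Made-up slave counts per mirror branch.
expected = {"M1": 12, "M2": 12, "M3": 12}
actual = {"M1": 12, "M2": 7, "M3": 12}  # M2 branch lost some slaves
# Only M2 is invalidated; M1 and M3 keep running.
bad = faulted_branches(expected, actual)
```

Without sync groups there is effectively a single expected counter for the whole bus, so any one branch failing invalidates everything.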

Application to the HOMS

To set this up you need a star topology, and I designed the HOMS PLC to use one. Each EK1122 provides two branches apart from the default line. Two EK1122s in the XRT config is overkill, but whatever.

Open up the IO tree and look for the SyncUnits item:

Expanding the item shows the sync groups I established:

One for each mirror. Selecting M* shows the devices in the sync group in the left hand pane:

Only devices in <default> can be added to a sync group.

Once this is done the XRT HOMS will be more robust; it would be inexcusable if M1 or M3 took down M2 operations, and vice versa.


