If a PV gives you trouble, a good first step is to check whether the IOC is up and stable, and possibly reboot it. For that you will use the IOC manager, which has a user guide here and is started via the command iocmanager. The guide describes how to start it and how it works. More detailed information for commonly used IOCs can be found below. The iocmanager command allows you to find the IOC in question by using the findPV option. Middle-clicking on edm (or pyDM) screens will show the PV name that has issues.
For a given PV you can also determine which server the IOC is running on; if rebooting the IOC does not help, you can consider rebooting the server as described in the next paragraph.
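The basic flow looks like this; a minimal sketch with a hypothetical PV name (whether findPV is offered on the command line or inside the GUI, check iocmanager's help):

iocmanager                           # start the IOC manager GUI for your hutch
# use the findPV option with the problem PV (e.g. TST:EXAMPLE:PV, a hypothetical name)
# to locate the IOC, then check its status and reboot it from the IOC manager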
The serverStat script has a few different options; serverStat -l will list them. Some of these commands are:
serverStat <machine name>         # will check the power status and ping the relevant interfaces (controls & DAQ as appropriate)
serverStat <machine name> cycle   # will power cycle the server, assuming the device has a working ipmi interface
For power cycling, you will need to be on the same (controls) network as the target machine. psdev has access to all networks; use your powers here responsibly! Be particularly mindful if the machine in question is a recorder machine, as the RAID arrays ideally get extra care. serverStat also takes DAQ device names (aliases) or PVs (only for PVs originating from the same subnet). If ipmi does not work, you can use netconfig view <servername> to find the location information needed for a physical power cycle. Should this location not be correct, please let your controls POC know!
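A typical troubleshooting sequence might look like this (the server name is hypothetical):

serverStat ctl-tst-srv01          # check the power status and ping the interfaces
serverStat ctl-tst-srv01 cycle    # power cycle via ipmi; run this from the controls network
netconfig view ctl-tst-srv01      # if ipmi fails, look up the rack location for a manual power cycle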
We have a convenience script for controls cameras that can be called from the command line: camViewer. The script should be on the path for <hutch>opr accounts if the existing .bashrc follows our common pattern.
which camViewer                                 # test if your environment has it sourced
/reg/g/pcds/engineering_tools/<hutch>/scripts   # source file path; use if not called in .bashrc
camViewer -h                                    # lists available options
-c <cam #/cam name>                             # option to pass the camera name
-m                                              # opens edm expert screen
camViewer -c CAM:TST:FEE:01 -m                  # example to open a certain camera's expert screen
If no option is passed, you will get a list of cameras to choose from. By default, the camera viewer described here will be opened for the requested camera.
If you would like to record your controls camera in the xtc stream along with the DAQ data, contact your PCDS-POC. The IOC will need to be built with support for timestamps so that images can be assigned to events. After that, the camera can be added to the DAQ under the "Camera IOCs" section.
Setting hardware binning and ROIs is described here.
GigE camera deployment instructions are found here.
The guide to troubleshooting the FEE cameras has many points that are valid across most cameras.
Should the viewer not work (e.g. not update), open the edm screen and start with the simple troubleshooting described there. You can reboot the IOC either by using the iocmanager or by adding -r to your camViewer command. camViewer -c <camID> -n will ask the IOC to start acquiring images again: when you put a camera back online, it will often not acquire and needs to be told to. A reboot does the same thing, but with a lot more overhead. If you have a gige object defined in <hutch> python, you can also start the expert screen by using the <gige>.expert_screen() command.
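For reference, the two recovery commands next to each other (the camera name is a placeholder):

camViewer -c <camname> -n   # ask the camera IOC to start acquiring images again
camViewer -c <camname> -r   # reboot the camera IOC; same effect with more overhead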
The configuration GUI for your IPIMB or Wave8 boxes can be started using ipmConfigEpics -b <boxname>. If you don't pass the -b option, you get a list of ipimb and wave8 boxes relevant for your hutch.
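For example (the box name is a placeholder):

ipmConfigEpics               # list the ipimb and wave8 boxes for your hutch
ipmConfigEpics -b <boxname>  # open the configuration GUI for a specific box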
A troubleshooting guide for ipimb boxes can be found here: Troubleshooting for Controls IPIMB and Wave8s.
Each hutch's home screen has a button that will show the health of all gateways. Each hutch's gateway is responsible for sending data from that network out to other networks. At the moment, it will appear white/down in that screen; this is unfortunate, but normal.
Generally, red is not good. Having the Search Post rate higher than the Search Req rate indicates an issue. You may try resetting a gateway, but that WILL cause issues for others using it, so please use with care! It will also not solve a problem where some requests simply fill up the queue. The BSA for ACR is a well-known culprit and we have tried to move that all to its own gateway to at least isolate problems. We are also about to move to newer servers, which will hopefully decrease the frequency of issues here.
Motor screens can be found in various formats at various places in the different hutches. If you are interested in the "expert" screen for most of our motor types, you can use
motor-expert-screen <PVNAME>   # Bash command line
<motor>.expert_screen()        # Hutch python
This script will bring up the right kind of motor screen (old & new IMS as well as Newport screens).
Newport screens are described here, and the current setup for the new XPS-D controllers is described here.
Turn OFF the XPS crate before connecting a motor!
Once everything is connected:
To get them into your hutch python, you either have to:
You can then use <motor>.pmgr.diff() to see the difference between the current and saved configurations. <motor>.pmgr.apply_config() will apply the saved configuration. After applying the config, check that all parameters "made" it by calling pmgr.diff() on the same motor again. (to be checked)
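A minimal sketch of that sequence in hutch python, assuming a motor object named m:

m.pmgr.diff()          # show differences between the live and saved configuration
m.pmgr.apply_config()  # apply the saved configuration
m.pmgr.diff()          # verify that all parameters made it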
This works very similarly to the smart motor procedure, except that the serial number cannot be used to find the configuration. You will need to pass the configuration name to the diff or apply_config functions; if you don't, you will be asked. There is a search option that uses smart string matching when you don't know the config name. It is planned to alleviate this issue by labelling the stages with their config name (HXR). Dumb motor recognition does rely on the parameter manager knowing the serial numbers of the dumb motor controllers. Should you have a new chassis or a newly repaired one, this might not work and PCDS personnel will have to fix this.
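For a dumb motor the same calls take the configuration name; passing it as a string argument is an assumption based on the description above:

m.pmgr.diff("CONFIG_NAME")          # compare against a named configuration (assumed signature)
m.pmgr.apply_config("CONFIG_NAME")  # apply the named configuration (assumed signature)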
When running Navitar motors on the new MForce chassis there are two additional settings in the controller MCode that need to be set. D1 and D2 need to be set to 50. This can be done through the :CMD PV.
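A hedged example using the standard EPICS caput tool; the motor PV prefix is a placeholder and the MCode assignment syntax is an assumption:

caput <MOTOR_PV>:CMD "D1=50"   # set D1 to 50 (assumed MCode syntax)
caput <MOTOR_PV>:CMD "D2=50"   # set D2 to 50 (assumed MCode syntax)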
The new hutch python has extensive documentation at https://pcdshub.github.io/hutch-python/.
Old system
Stubborn Cold Cathodes
Link to the old "Controls User Guide"
LCLS-1 DAQ Tier-1 Troubleshooting