...

My name is Travis Nichols and I am an undergraduate physics major and math minor from California Polytechnic State University - San Luis Obispo, interested in lasers, plasmas, and accelerator physics. I am an experimentalist to my core, and I love being in the lab, pressing buttons, flipping switches, and turning knobs. There is something I find uniquely satisfying about the troubleshooting process of experimental physics, and being at SLAC was a wonderful opportunity for me to gain valuable experience and confirm my passion for this field. I was fortunate enough to be an intern at SLAC in the summer of 2023, where I worked primarily with Robert Ariniello and Spencer Gessner on alignment diagnostics and characterization systems for the lithium Ionizing Bessel laser used in Sector 20 of the linac for Plasma Wakefield Acceleration (PWFA) experiments.

...

Please feel free to reach out to me with questions or if my GUI has any bugs.

Background For My Project

...

The laser beam travels quite a journey between its creation and the lithium oven. As you can imagine, this means the laser passes through dozens of mirrors, beam samplers, lenses, ND filters, and so on. All of these instruments have to be perfectly aligned in order to create our nice plasma. Thankfully, we have installed a number of cameras in the beam path so we can check on our laser at different places. Moreover, we installed motors on most of the optics so that we can adjust them remotely. This is the basis for my GUI, made in PyDM.

Essentially, the GUI grabs the output from a camera and the PVs for the motors on the corresponding instrument, allowing the user to adjust the beam remotely until it is aligned. On top of this, however, I added some extra functionality. There is a current-position finder function that projects the image onto the X and Y axes and then fits a Gaussian over the projected values to locate the maximum coordinate of each. These coordinates are then displayed, and a crosshair is drawn where the computer believes the center of the beam to be. This method of finding the maximum coordinates runs much faster than the Gaussian filter function I found in other places in the SLAC GitHub, and certainly runs faster than simply fitting a 2D Gaussian. Additionally, now that the program knows the position of the beam, given a target position I was able to include a suggested-correction button that calculates how to adjust the motors based on the current beam position. This is accomplished by finding the difference in pixels between the current and target values for the x and y components; the difference is then converted into motor movement using a predetermined conversion factor specific to each camera. After an adjustment is made, the computer recalculates the suggested correction. This function works pretty well, but it only gets the beam approximately aligned, which is why I have still included the manual adjustment buttons with fine, medium, and coarse adjustment scales. The final function worth noting is the ROI setter. This button sets the display of the camera to a desired region of interest, centered on the target position. This is particularly nice when the beam is only 50 pixels across but the camera has a 1200x900 pixel display.
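The projection-and-fit approach and the pixel-to-motor conversion described above can be sketched as follows. This is a minimal illustration, assuming NumPy and SciPy; the function names and the form of the conversion factor are my own and are not the actual GUI code.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, offset):
    """1D Gaussian with a constant background offset."""
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) + offset

def _fit_center(profile):
    """Fit a 1D Gaussian to a projected intensity profile; return its mean."""
    profile = profile.astype(float)
    coords = np.arange(profile.size)
    guess = [profile.max() - profile.min(), float(np.argmax(profile)),
             profile.size / 10.0, float(profile.min())]
    popt, _ = curve_fit(gaussian, coords, profile, p0=guess)
    return popt[1]  # fitted center (mu)

def find_beam_center(image):
    """Project the image onto each axis and fit a Gaussian to each projection."""
    x = _fit_center(image.sum(axis=0))  # collapse rows -> profile over columns (x)
    y = _fit_center(image.sum(axis=1))  # collapse columns -> profile over rows (y)
    return x, y

def suggested_correction(current, target, px_per_step):
    """Convert the pixel offset between current and target beam positions
    into motor movement using a per-camera conversion factor (assumed here
    to be in pixels per motor step)."""
    dx = (target[0] - current[0]) / px_per_step
    dy = (target[1] - current[1]) / px_per_step
    return dx, dy
```

Because each fit is over a 1D profile rather than the full 2D image, this is far cheaper than a 2D Gaussian fit, which matches the speed advantage noted above.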

This widget is then made into a class which a user can instantiate within a larger display class, making it possible to view and align multiple cameras at once; the display below shows the beam transport cameras. This is very useful when aligning a beam, since you can monitor the beam position on downstream cameras while adjusting upstream cameras to ensure the beam has not disappeared into oblivion forever. As of August 2023, the GUI works for the Sector 20 laser room cameras, the transport cameras, and many cameras down in the tunnel, but the GUI is designed to work for any camera as long as you feed it the right PVs from a CSV file.
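A per-camera CSV configuration like the one described might be loaded as sketched below. The column names here are illustrative assumptions, not the actual file schema used by the GUI.

```python
import csv

def load_camera_configs(path):
    """Read one camera configuration per CSV row.
    Column names below are placeholders, not the real schema."""
    cameras = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            cameras.append({
                "name": row["name"],                     # display label
                "image_pv": row["image_pv"],             # camera image PV
                "motor_x_pv": row["motor_x_pv"],         # horizontal motor PV
                "motor_y_pv": row["motor_y_pv"],         # vertical motor PV
                "px_per_step": float(row["px_per_step"]) # per-camera conversion
            })
    return cameras
```

Each returned dictionary could then be passed to the camera-widget class to build one panel per camera in the larger display.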

...

There is a camera installed in the beam path that views the laser as it emerges from the axicon. If the wavefront is nice and flat, we will see the clean bullseye pattern below on the left. But if the incoming beam looks like a saddle or trefoil when it reaches the axicon, we get patterns like those on the right. We can solve these phase issues with a deformable mirror. The deformable mirror is made of a somewhat flexible glass and has a bunch of tiny pistons, allowing us to change the shape of the mirror surface to impart a corrective phase on an aberrated wavefront.

...

In the path of the beam, after the axicon, we have placed a camera on a movable rail. This allows us to record the intensity of the Bessel beam at several different positions. As of August 2023, we have only acquired one data set of 10 positions, with 20 images taken at each position. Below is an example of the analysis process. The raw image is shown on the left. The image is scanned for a maximum, and a smaller ROI is set around the located bullseye, shown in the second image. After this, a two-dimensional, zeroth-order Bessel function of the first kind, squared, is fit over the data, as shown in the third image. The amplitude of each fit Bessel is stored and plotted after each image is analyzed.
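The fit described above, a squared zeroth-order Bessel function of the first kind in two dimensions, could be sketched like this. This is a simplified illustration, assuming NumPy and SciPy; the parameterization (center, radial scale, offset) and function names are my own assumptions, not the actual analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import j0

def bessel2d(coords, amp, x0, y0, scale, offset):
    """2D intensity model: amp * J0(scale * r)^2 + offset,
    where r is the radial distance from the pattern center (x0, y0)."""
    x, y = coords
    r = np.sqrt((x - x0) ** 2 + (y - y0) ** 2)
    return amp * j0(scale * r) ** 2 + offset

def fit_bessel(image, p0):
    """Fit the squared-J0 model to an ROI image; returns the fitted
    parameters [amp, x0, y0, scale, offset]."""
    yy, xx = np.indices(image.shape)
    coords = (xx.ravel(), yy.ravel())
    popt, _ = curve_fit(bessel2d, coords, image.ravel().astype(float), p0=p0)
    return popt
```

Storing `popt[0]` (the fitted amplitude) for each image at each rail position would give the amplitude-versus-position plot described above.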

...