
Panel
Open a terminal at pslogin or psana, and type:
Code Block
newrel ana-current xpptutorial
cd xpptutorial
ls -lla
less .sit_release
sit_setup


Panel
Try:
Code Block
pyxtcreader /reg/d/psdm/xpp/xppi0310/xtc/e81-r0098-s0* | less

Try the same with different verbosity levels:

Code Block

pyxtcreader -v /reg/d/psdm/xpp/xppi0310/xtc/e81-r0098-s0* | less
pyxtcreader -vv /reg/d/psdm/xpp/xppi0310/xtc/e81-r0098-s0* | less

xtcscanner

Code Block
xtcscanner -h
usage: xtcscanner [options] xtc-files ...

options:
  -h, --help            show this help message and exit
  -n NDATAGRAMS, --ndatagrams=NDATAGRAMS
  -v, --verbose
  -l L1_OFFSET, --l1-offset=L1_OFFSET
  -e, --epics


Panel
Try something else:
Code Block
addpkg XtcExplorer
scons
xtcexplorer /reg/d/psdm/xpp/xppi0310/xtc/e81-r0098-s0*

Now you have a local version of the XtcExplorer package in your directory. That allows you to edit the
source code and customize the analysis modules in the XtcExplorer/src directory.

Exercise for later:
Edit XtcExplorer/src/pyana_ipimb.py to make a loglog plot of channel1 vs channel0.
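
For reference, the plotting call itself would look something like the sketch below (the channel arrays here are made up; in the exercise they would be the IPIMB channel values accumulated by the module):

Code Block

import numpy as np
import matplotlib.pyplot as plt

# stand-in data: in pyana_ipimb.py these would be the accumulated
# channel0 and channel1 readings from the IPIMB detector
channel0 = np.random.random(500) * 1e-3
channel1 = 0.8 * channel0 + np.random.random(500) * 1e-5

# log-log scatter plot of channel1 vs channel0
plt.loglog(channel0, channel1, 'o')
plt.xlabel('channel0')
plt.ylabel('channel1')
plt.show()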


For some more useful analysis examples, in the following we'll stick to writing customized pyana modules and running pyana from the command line.


Extracting the data with pyana, some examples

Outline of a pyana module

Like the other frameworks, pyana is an executable that loops through the XTC file and calls all
requested user modules at certain transitions. All the analysts need to do is to fill in the
relevant functions in their user analysis module:
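
The module code itself is collapsed in this page view. As a rough sketch only (beginjob, begincalibcycle and event are the method names used in the examples below; the endjob method, the exact signatures and the way configuration parameters reach the constructor are assumptions, not taken from this page), a user module looks something like this:

Code Block

import numpy as np

class my_module (object) :
    """Sketch of a pyana user analysis module (not a verified template)."""

    def __init__ ( self, threshold = "0" ) :
        # parameters from the module's section in pyana.cfg (assumed mechanism);
        # they typically arrive as strings and need to be converted
        self.threshold = float(threshold)

    def beginjob ( self, evt, env ) :
        # called once at the start of the job: fetch configuration objects,
        # set up counters and empty lists for the quantities to accumulate
        self.values = []

    def begincalibcycle ( self, evt, env ) :
        # called at the start of each calibration cycle: fetch ControlPV values here
        pass

    def event ( self, evt, env ) :
        # called for every event: fetch detector data from evt and accumulate it
        pass

    def endjob ( self, evt, env ) :
        # called at the end of the job: save arrays, make summary plots
        pass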

Panel

For the following two examples, check out the latest version of the pyana_examples package:

Code Block

addpkg pyana_examples HEAD
scons

Point detector delay scan

Panel

Open an editor and save the following in a file named pyana.cfg:

Code Block

[pyana]

modules pyana_examples.xppt_delayscan

[pyana_examples.xppt_delayscan]
controlpv = fs2:ramp_angsft_target
ipimb_norm = XppSb3Ipm-1|Ipimb-0
ipimb_sig = XppSb3Pim-1|Ipimb-0
threshold = 0
outputfile = point_scan_delay.npy

Run pyana (start with 200 events):

Code Block

pyana -n 200 /reg/d/psdm/XPP/xppi0310/xtc/e81-r0098-s0*

Highlighting of some code snippets from xppt_delayscan.py:

  • Fetching the ControlPV information:
    The ControlPV is available from the env object, and since it only changes at the beginning
    of each calibration cycle, the begincalibcycle function is the appropriate place to get it.
    The ControlConfig object may contain several pvControl and pvMonitor objects. In this case
    there is only one, but make sure the name matches anyway.
  • Fetching the IPIMB and PhaseCavity information:
    All the other information that we need is available through the evt object, and the
    event member function is the place to get it:
    - Use "XppSb3Ipm-1|Ipimb-0" (a.k.a. IPM3), the sum of all channels, for normalization and filtering.
    - Use "XppSb3Pim-1|Ipimb-0" (a.k.a. PIM3) channel 1 as the signal.
    - Get the phase cavity.
    - Compute the delay time and fill the histograms.


Image peak finding

Here is a collection of useful algorithms for image analysis: http://docs.scipy.org/doc/scipy/reference/ndimage.html

This particular example is done with a CSPad image where only a single section is available. For a more typical CSPad module, see the next section.

  • For each event, fetch the CsPad information, and get the image array:
    Code Block
    
       def event( self, evt, env ) :
    
            elements = evt.getCsPadQuads(self.source, env)
            image = elements[0].data().reshape(185, 388)
    
  • Select a region of interest. If none is given (optional module parameter), set the RoI to be the whole image:
    Code Block
    
            # Region of Interest (RoI)
            if self.roi is None: 
                self.roi = [ 0, image.shape[1], 0, image.shape[0] ] # [x1,x2,y1,y2]
    
            print "ROI   [x1, x2, y1, y2] = ", self.roi
    
  • Using only the RoI subset of the image, compute the center-of-mass using one of the scipy.ndimage algorithms. Add to it the position of the RoI to get the CMS in global pixel coordinates:
    Code Block
    
            roi_array = image[self.roi[2]:self.roi[3],self.roi[0]:self.roi[1]]
            cms = scipy.ndimage.measurements.center_of_mass(roi_array)
            print "Center-of-mass of the ROI: (x, y) = (%.2f, %.2f)" %(self.roi[0]+cms[1],self.roi[2]+cms[0])
    
  • Here's an example of how you can make an interactive plot and select the Region of Interest with the mouse. We plot the image in two axes (subplots on the canvas). The first always shows the full image. In the second axes, you can select a rectangular region in "Zoom" mode (click on the Toolbar's Zoom button). The selected region will be drawn on top of the full image on the left, while the right plot will zoom into the selected region:
    Code Block
    
           fig = plt.figure(1,figsize=(16,5))
            axes1 = fig.add_subplot(121)
            axes2 = fig.add_subplot(122)
    
            axim1 = axes1.imshow(image)
            axes1.set_title("Full image")
    
            axim2 = axes2.imshow(roi_array, extent=(self.roi[0],self.roi[1],self.roi[3],self.roi[2]))
            axes2.set_title("Region of Interest")
            
            # rectangular ROI selector
            rect = UpdatingRect([0, 0], 0, 0, facecolor='None', edgecolor='red', picker=10)
            rect.set_bounds(*axes2.viewLim.bounds)
            axes1.add_patch(rect)
            
            # Connect for changing the view limits
            axes2.callbacks.connect('xlim_changed', rect)
            axes2.callbacks.connect('ylim_changed', rect)
    
  • To compute the center-of-mass of the selected region, revert back to non-zoom mode (hit the 'zoom' button again) and click on the rectangle. The rectangle is connected to the 'onpick' function, which updates self.roi and computes the center-of-mass:
    Code Block
    
            def onpick(event):
                xrange = axes2.get_xbound()
                yrange = axes2.get_ybound()
                self.roi = [ xrange[0], xrange[1], yrange[0], yrange[1] ]
    
                roi_array = image[self.roi[2]:self.roi[3],self.roi[0]:self.roi[1]]
                cms = scipy.ndimage.measurements.center_of_mass(roi_array)
                print "Center-of-mass of the ROI: (x, y) = (%.2f, %.2f)" %(self.roi[0]+cms[1],self.roi[2]+cms[0])
    
            fig.canvas.mpl_connect('pick_event', onpick)
    
            plt.draw()
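
The snippets above only compute a center-of-mass. For actual peak finding, the scipy.ndimage package linked at the top of this section has the needed pieces; a minimal sketch (the threshold and the 5-pixel neighbourhood size are arbitrary illustrative choices, not values from this tutorial) could look like this:

Code Block

import numpy as np
import scipy.ndimage as ndimage

def find_peaks(image, threshold, size=5):
    """Return (row, col) positions of local maxima above threshold."""
    # a pixel is a local maximum if it equals the maximum of its neighbourhood
    local_max = (image == ndimage.maximum_filter(image, size=size))
    # suppress noise by requiring the peak to be above the threshold
    peaks = local_max & (image > threshold)
    # label the connected peak regions and return one position per peak
    labels, npeaks = ndimage.label(peaks)
    return ndimage.measurements.center_of_mass(image, labels, range(1, npeaks + 1))

# example, using the single CSPad section image from above:
# print find_peaks(image, threshold = image.mean() + 5*image.std())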
    

CSPad images and tile arrangements

CSPad data structure

CSPad data in xtc is a list of elements. In pyana get the list from the evt (event) object (notice the need for the env (environment) object too!):

Code Block

elements = evt.getCsPadQuads(self.source, env)

elements here is a python list of ElementV1 or ElementV2 (or later versions) objects, each representing one quadrant. The list is not ordered, so to know which quadrant you have, you have to check with element.quad(). To store a local array of the whole CSPad detector, you can do the following.

  • In beginjob, find out from the configuration object what part of the CSPad was in use (sometimes sections are missing):
    Code Block
    
    def beginjob ( self, evt, env ) : 
    
        config = env.getConfig(xtc.TypeId.Type.Id_CspadConfig, self.source)
        if not config:
            print '*** cspad config object is missing ***'
            return
    
        quads = range(4)
    
        # memorize this list of sections for later 
        self.sections = map(config.sections, quads)
    
    
  • In each event, get the current CSPad data:
    Code Block
    
    def event(self, evt, env): 
        elements = evt.getCsPadQuads(self.source,env)
    
        pixel_array = np.zeros((4,8,185,388), dtype="uint16")
        
        for element in elements:
            data = element.data() # the 3-dimensional data array (list of 2d sections)
            quad = element.quad() # current quadrant number (integer value)
    
            # if any sections are missing, insert zeros
            if len( data ) < 8 :
                zsec = np.zeros( (185,388), dtype=data.dtype)
                for i in range (8) :
                    if i not in self.sections[quad] :
                        data = np.insert( data, i, zsec, axis=0 )
    
            pixel_array[quad] = data
    
    

What we have so far gives you a 4d numpy array of all pixels. If you want to store it in e.g. a numpy file, you can reshape it down to 2 dimensions (this is the format of the official pedestal files made by the translator):

Code Block

pixels = pixel_array.reshape(1480,388)
np.save("pixel_pedestal_file.npy", pixels )
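
To use such a pedestal file later (for example to subtract darks, as in the xtcexplorer exercise below), it can be loaded back and reshaped to match the per-event array. A small sketch, assuming the file name from the example above:

Code Block

import numpy as np

# the (4,8,185,388) array filled in event() above; a dummy here so the sketch runs
pixel_array = np.zeros((4,8,185,388), dtype="uint16")

# load the pedestal file written above and restore the (quad, section, row, col) shape
pedestals = np.load("pixel_pedestal_file.npy").reshape(4, 8, 185, 388)

# subtract the pedestals (convert to float first so values can go negative)
corrected = pixel_array.astype(np.float64) - pedestals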

CSPad tile arrangement

To get a rough picture of the full detector, here's an example of how XtcExplorer/src/cspad.py does it:

  • For each Quadrant (cspad_layout.txt):
    Code Block
    
        def get_quad_image( self, data3d, qn) :
            """get_quad_image
            Get an image for this quad (qn)
    
            @param data3d           3d data array (row vs. col vs. section)
            @param qn               quad number
            """    
            pairs = []
            for i in range (8) :
    
                # 1) insert gap between asics in the 2x1
                asics = np.hsplit( data3d[i], 2)
                gap = np.zeros( (185,3), dtype=data3d.dtype )
                # gap should be 3 pixels wide
                pair = np.hstack( (asics[0], gap, asics[1]) )
    
                # all sections are originally 185 (rows) x 388 (columns) 
                # Re-orient each section in the quad
                if i==0 or i==1 :
                    pair = pair[:,::-1].T   # reverse columns, switch columns to rows. 
                if i==4 or i==5 :
                    pair = pair[::-1,:].T   # reverse rows, switch rows to columns
                pairs.append( pair )
    
                if self.small_angle_tilt :
                    pair = scipy.ndimage.interpolation.rotate(pair,self.tilt_array[qn][i])
    
            # make the array for this quadrant
            quadrant = np.zeros( (850, 850), dtype=data3d.dtype )
    
            # insert the 2x1 sections according to the section centers
            for sec in range (8):
                nrows, ncols = pairs[sec].shape
    
                # colp,rowp are where the top-left corner of a section should be placed
                rowp = 850 - self.sec_offset[0] - (self.section_centers[0][qn][sec] + nrows/2)
                colp = 850 - self.sec_offset[1] - (self.section_centers[1][qn][sec] + ncols/2)
                
                quadrant[rowp:rowp+nrows, colp:colp+ncols] = pairs[sec][0:nrows,0:ncols]
    
            return quadrant
    

Then combine all four quadrant images into the full detector image:

Code Block

        self.image = np.zeros((2*850+100, 2*850+100 ), dtype="float64")
        for quad in xrange (4):

            quad_image = self.get_quad_image( self.pixels[quad], quad )
            self.qimages[quad] = quad_image

            if quad>0:
                # reorient the quad_image as needed
                quad_image = np.rot90( quad_image, 4-quad)

            qoff_x = self.quad_offset[0,quad]
            qoff_y = self.quad_offset[1,quad]
            self.image[qoff_x:qoff_x+850, qoff_y:qoff_y+850]=quad_image

        return self.image

Fine tuning

Notice that the code snippets above make use of some predefined quantities which are read in from "calibration files". These files contain calibrated numerical values for the individual sections' and quads' rotations and shifts. All of these files are located in the experiment's 'calib' folder, but they are not generated automatically. XtcExplorer currently has a local version which is not correct, but which is close enough to display a reasonable image. For how the files are read in, take a look at the read_alignment function in XtcExplorer/src/cspad.py.

For how to find the correct constants for each experiment, look at the CSPad alignment page.

Panel

Try some plotting of CSPad data using xtcexplorer. Launch the explorer and load xpp48712 run 66 (a dark run):

Code Block


xtcexplorer /reg/d/psdm/XPP/xpp48712/xtc/e153-r0066-s00-c00.xtc
  • Look through a couple of events, then "Quit Pyana" and edit the configuration file. Add an output file name, switch to "NoDisplay", and run 100 events to collect an average of dark images.
  • With darks collected, load another file from the same experiment: run 141. Edit the pyana configuration file to use the file you just generated to subtract darks. Run the explorer in "SlideShow" mode again.
  • Change the color scale of the plot by left and right clicking on the colorbar.

Saving data arrays

saving numpy arrays to numpy file

Code Block

import numpy as np

myarray = np.arange(0,100)
np.save( "output_file_name.npy", myarray) 
np.savetxt( "output_file_name.txt", myarray) 

np.save handles arrays of any dimensionality, while np.savetxt only works for arrays of at most 2 dimensions. Both store only one array per file.
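
If you need to store several arrays (of any dimensionality) in one file, numpy's savez can be used instead. A small sketch (not from the original tutorial):

Code Block

import numpy as np

a = np.arange(0,100)
b = np.arange(0,100).reshape(10,10)

# store several arrays in a single .npz file, each under its own name
np.savez("output_file_name.npz", first=a, second=b)

# read them back by name
stored = np.load("output_file_name.npz")
print stored['first'].shape, stored['second'].shape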

saving to MatLab file

Code Block

import numpy as np
import scipy.io 

N_array = np.arange(0,100)
x_array = np.random.random(100)
y_array = np.random.random(100)
scipy.io.savemat( "output_file_name.mat", mdict={'N': N_array, 'x' : x_array, 'y' : y_array } )
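
Reading the .mat file back into Python works the same way with scipy.io.loadmat (sketch; the keys are the names given in mdict above):

Code Block

import scipy.io

contents = scipy.io.loadmat("output_file_name.mat")
print contents['N'].shape, contents['x'].shape, contents['y'].shape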

saving to HDF5 file

This example writes the arrays collected by the delay scan module above to an HDF5 file:

Code Block


import h5py

ofile = h5py.File("output_file_name.h5",'w')
group = ofile.create_group("MyGroup")
group.create_dataset('delaytime',data=np.array(self.h_delaytime))
group.create_dataset('rawsignal',data=np.array(self.h_ipm_rsig))
group.create_dataset('normsignal',data=np.array(self.h_ipm_nsig))
ofile.close()

For more examples, see How to access HDF5 data from Python and http://code.google.com/p/h5py/
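
Reading the HDF5 file back follows the same pattern; a minimal sketch using the group and dataset names from the example above:

Code Block

import h5py

infile = h5py.File("output_file_name.h5",'r')
group = infile["MyGroup"]

# datasets behave much like numpy arrays; [...] reads the full array into memory
delaytime = group['delaytime'][...]
rawsignal = group['rawsignal'][...]
normsignal = group['normsignal'][...]
print delaytime.shape, rawsignal.shape, normsignal.shape

infile.close()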

Interactive analysis with IPython

The version available in our offline release system is
IPython 0.9.1 ("An enhanced Interactive Python"),
so this is the one I've been using in these examples.
It is not a whole lot more than a plain python shell.

However, the latest IPython has loads of new and interesting features...

http://ipython.org/

Panel
Loading your arrays into (I)Python and plotting interactively:
Code Block

[ofte@psana0106 xpptutorial]$ ipython
Python 2.4.3 (#1, Nov  3 2010, 12:52:40)
Type "copyright", "credits" or "license" for more information.

IPython 0.9.1 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object'. ?object also works, ?? prints more.

In [1]: from numpy import *

In [2]: from matplotlib.pyplot import *

In [3]: ipm3 = load('point_scan_delay.npy')

In [4]: ipm3.shape
Out[4]: (200, 3)

In [5]: ion()

In [6]: delay = ipm3[:,0]

In [7]: ipmraw = ipm3[:,1]

In [8]: ipmnorm = ipm3[:,2]

In [9]: plot(delay,ipmnorm,'ro')
Out[9]: [<matplotlib.lines.Line2D object at 0x59c4c10>]

In [10]: draw()

In [11]:

Plotting with MatPlotLib

Matplotlib:

  • The plotting can be done directly in the pyana module, but be aware that you need to disable plotting for the
    module to run successfully in a batch job.
    Code Block
    
    import matplotlib.pyplot as plt 
    
    plt.plot(array)
    plt.show()
    
  • Or you can load arrays from a file and plot them interactively in IPython. The same ('recommended') syntax as above can be used; or, if you use 'import *', you don't need to prepend the commands with the package name, which is handy when plotting interactively:
    Code Block
    
    from matplotlib.pyplot import *
    
    ion()
    plot(array)
    draw()
    

Related useful tools and links

Non-interactive batch analysis

Pyana jobs are designed to do batch analysis, but matplotlib plotting does not go well with this. If you want your job to produce graphics, make sure to use a matplotlib backend that writes the graphics directly to file, e.g. png files.
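
A minimal sketch of what that looks like (the Agg backend renders to file only, and must be selected before pyplot is imported):

Code Block

import matplotlib
matplotlib.use('Agg')            # file-only backend, safe for batch jobs
import matplotlib.pyplot as plt
import numpy as np

plt.plot(np.arange(100), np.arange(100)**2)
plt.savefig("myplot.png")        # write the figure directly to a png file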

Multiprocessing

Pyana can make use of multiple cores. On the command line, add the option '-p N' where N is the number of cores to use.

Extra care needs to be taken when plotting. Also, output files need to be made with the pyana mkfile command. The output will be merged at the end of the job, but may not be in order. So if you need events to be written to a file in chronological order, you're better off using single core processing.

