...

  • little variation
  • dead neurons
  • still, a definite difference at 4000; it looks like enough to discriminate

A linear classifier seems to get 99.5% accuracy using these codewords.
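As a rough illustration of that check (not the actual pipeline; codewords, is_lasing, and the array shapes below are made-up stand-ins), a linear probe on the codewords is a few lines of sklearn:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Stand-in data: swap in the real codewords and lasing/no-lasing labels.
    rng = np.random.default_rng(0)
    codewords = rng.normal(size=(2000, 128))   # (n_shots, code_dim)
    is_lasing = rng.integers(0, 2, size=2000)  # 0 = no lasing, 1 = lasing

    X_train, X_test, y_train, y_test = train_test_split(
        codewords, is_lasing, test_size=0.2, random_state=0)

    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, y_train)
    print("held-out accuracy: %.3f" % clf.score(X_test, y_test))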

Finding the Reference

We are not interested in a good classifier per se, though; our metric is whether we can find the right reference for a lasing shot using Euclidean distance in the codeword space.
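The lookup itself is just an argmin over Euclidean distances. A minimal sketch, assuming the codewords are already stacked into numpy arrays (the names here are hypothetical):

    import numpy as np

    def nearest_reference(lasing_code, nolasing_codes):
        # lasing_code: (code_dim,); nolasing_codes: (n_refs, code_dim).
        # Returns the index of the no-lasing codeword closest in Euclidean
        # distance to the given lasing codeword.
        dists = np.linalg.norm(nolasing_codes - lasing_code, axis=1)
        return int(np.argmin(dists))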

Below we take a lasing image and find the no-lasing image with the closest codeword. I looked through about five of these, and I think they all looked similar. There is a lot of variation among lasing shots, so these images are relatively similar, but definitely not lined up horizontally.

[Image: example lasing shot alongside the no-lasing shot with the closest codeword]

Things to note 

  • We could use data augmentation to get a better match: for each no-lasing shot, slide it around horizontally, and maybe a little vertically if need be; one of the shifted copies would definitely be a better match in this case (a rough sketch follows this list).
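A minimal sketch of that augmentation, with guessed shift ranges and step; each shifted copy would still need to go through the encoder to get its codeword:

    import numpy as np

    def shifted_variants(img, max_dx=20, max_dy=2, step=4):
        # Yield ((dy, dx), shifted_image) for horizontal shifts up to max_dx
        # and small vertical shifts up to max_dy. Encode each variant and keep
        # whichever codeword lands closest to the lasing shot's codeword.
        # Note: np.roll wraps around the edge; padding/cropping may be
        # preferable for real detector images.
        for dy in range(-max_dy, max_dy + 1):
            for dx in range(-max_dx, max_dx + 1, step):
                yield (dy, dx), np.roll(np.roll(img, dx, axis=1), dy, axis=0)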

Crazy Ideas

  • You really want a codeword embedding that puts the fingers into a few separate features that are orthogonal to the lasing signal; maybe there is a way to guide the learning to make this so? (A crude penalty along these lines is sketched after this list.)
  • Seeing how sparse our representation is, I think we need PSImageNet - a photon science ImageNet: what if we took all the detector images we have - large, small, timetool, cspad - with ~1000 labels of what they are, and trained a classifier? That would be a model that could do a lot for transfer learning.
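On the first idea, one crude way to nudge designated code dimensions to stay orthogonal to lasing is an auxiliary penalty on the batch covariance between those dimensions and the lasing label. A toy PyTorch sketch; finger_dims, the squared-covariance form, and the weight lam are all assumptions, not anything tried here:

    import torch

    def decorrelation_penalty(codes, is_lasing, finger_dims):
        # codes: (batch, code_dim); is_lasing: (batch,) floats in {0, 1};
        # finger_dims: code dimensions we want to reserve for finger position.
        f = codes[:, finger_dims]
        f = f - f.mean(dim=0, keepdim=True)
        y = is_lasing - is_lasing.mean()
        cov = (f * y.unsqueeze(1)).mean(dim=0)  # covariance per finger dim
        return (cov ** 2).sum()

    # Added to whatever reconstruction loss trains the embedding:
    # loss = recon_loss + lam * decorrelation_penalty(codes, y, finger_dims)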