...

However, with the yag images, there is very little difference between nobeam and beam:


At first I thought we would not be able to do much with the yag screen codewords without more preprocessing. This may be the case for the classification problem of whether or not the beam is present - I think the yag images are too faint for what vgg16 expects (it was trained on the imagenet color images) - but for the regression problem of finding the box around the beam, assuming it is there, it actually does better on yag than vcc.

 

...

Preprocessing

This problem seems harder than the localization for lasing fingers in amo86815. There is more variety in the signal we are trying to find. This leads to trying different kinds of signal processing pre-filtering of the images. Even then, the vgg16 codewords don't seem that homogeneous on the vcc.

Of the 239 samples, 163 of the vcc have a labeled box. Below is a plot where we grab what is inside each box and plot it all in a grid - this is with the background subtraction for file 4. The plot on the left is before, and on the right is after, reducing the 480 x 640 vcc images to (224, 224) for vgg16. We used scipy.misc.imresize with 'lanczos' to reduce (this calls PIL). Here there is no preprocessing other than what the image size reduction does.
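The reduction step can be sketched as below. Since scipy.misc.imresize was removed from recent scipy releases, this goes through PIL directly, which is what imresize did under the hood; the function name is ours:

```python
import numpy as np
from PIL import Image

def lanczos_reduce(img, size=(224, 224)):
    # float32 'F' mode, matching the imresize(..., mode='F') call in the text
    pil = Image.fromarray(img.astype(np.float32), mode='F')
    # PIL's resize takes (width, height), numpy shapes are (rows, cols)
    return np.asarray(pil.resize((size[1], size[0]), Image.LANCZOS))

small = lanczos_reduce(np.random.rand(480, 640).astype(np.float32))
print(small.shape)  # (224, 224)
```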

  

Here are the 159 samples of the yag with a box - here we are using 'lanczos' to reduce from the much larger size of 1040 x 1392 to (224, 224). It is interesting to note how the colorbar changes - the range no longer goes up to 320 - I think the 320 values were isolated pixels that get washed out? Or else there is something else I don't understand - we are doing nothing more than scipy.misc.imresize(img, (224,224), interp='lanczos', mode='F'), but img is np.uint16 after careful background subtraction (going through float32, thresholding at 0 before converting back).
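The background subtraction described above - go through float32, threshold at 0, convert back to uint16 - can be sketched as (the function name is ours):

```python
import numpy as np

def subtract_background(img_u16, bkg_u16):
    # work in float32 so the subtraction is allowed to go negative
    diff = img_u16.astype(np.float32) - bkg_u16.astype(np.float32)
    np.clip(diff, 0.0, None, out=diff)   # threshold at 0
    return diff.astype(np.uint16)        # back to uint16

img = np.array([[10, 5], [0, 300]], dtype=np.uint16)
bkg = np.array([[3, 9], [2, 100]], dtype=np.uint16)
print(subtract_background(img, bkg))  # [[7, 0], [0, 200]]
```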

...

For the yag, a 1% overlap is 86%:

...
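The inter/union (intersection-over-union) overlap accuracy used throughout these results can be computed as below; the (r0, r1, c0, c1) box convention here is an assumption:

```python
import numpy as np

def inter_over_union(a, b):
    # boxes as (r0, r1, c0, c1); overlap is zero if the boxes are disjoint
    inter_r = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    inter_c = max(0, min(a[3], b[3]) - max(a[2], b[2]))
    inter = inter_r * inter_c
    area = lambda bx: (bx[1] - bx[0]) * (bx[3] - bx[2])
    return inter / float(area(a) + area(b) - inter)

def overlap_accuracy(pred, truth, th):
    # fraction of predicted boxes whose inter/union with the label exceeds th
    return np.mean([inter_over_union(p, t) > th for p, t in zip(pred, truth)])

print(inter_over_union((0, 10, 0, 10), (5, 15, 0, 10)))  # 50/150 ~ 0.333
```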

36 different runs were carried out, varying each of the following:

  • Pre-processing algorithm, one of
    • none
      • just 'lanczos' reduction
    • denoise-log
      • 3 pt median filter
      • log(1+img)
      • 'lanczos' reduction
      • multiply by scale factor
    • denoise-max-log
      • 3 pt median filter
      • 3 x 3 sum
      • 3 pt median filter
      • 'max_reduce' (save largest pixel value over square)
      • 'lanczos' reduction (to get final (224,224) size)
      • log(1+img)
      • scale up
  • files, one of
    • just 1,2
    • 1,2,4
  • Do and Don't subtract background for file 4
  • Do and Don't filter out some of the 8192 features with variance <= 0.01 before doing regression
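The variance filter in the last bullet can be sketched in pure numpy (a minimal stand-in for something like sklearn's VarianceThreshold; names are ours):

```python
import numpy as np

def filter_low_variance(X, thresh=0.01):
    # X: (n_samples, n_features) codeword matrix, e.g. (239, 8192);
    # drop features whose variance over the samples is <= thresh
    keep = X.var(axis=0) > thresh
    return X[:, keep], keep

np.random.seed(0)
X = np.random.rand(50, 8)
X[:, 3] = 7.0                    # a constant, zero-variance feature
Xf, keep = filter_low_variance(X)
print(Xf.shape)                  # the constant column is dropped
```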

Below is a table of all these results

 

Code Block
nm=yag eb_alg_none_f1_f2-regress.h5 inter/union accuracies:  th=0.50 acc=0.41 th=0.20 acc=0.78 th=0.01 acc=0.86
nm=vcc eb_alg_none_f1_f2-regress.h5 inter/union accuracies:  th=0.50 acc=0.14 th=0.20 acc=0.38 th=0.01 acc=0.65
nm=yag eb_alg_denoise-log_f1_f2-regress.h5 inter/union accuracies:  th=0.50 acc=0.66 th=0.20 acc=0.87 th=0.01 acc=0.90
nm=vcc eb_alg_denoise-log_f1_f2-regress.h5 inter/union accuracies:  th=0.50 acc=0.38 th=0.20 acc=0.61 th=0.01 acc=0.72
nm=yag eb_alg_denoise-max-log_f1_f2-regress.h5 inter/union accuracies:  th=0.50 acc=0.46 th=0.20 acc=0.77 th=0.01 acc=0.88
nm=vcc eb_alg_denoise-max-log_f1_f2-regress.h5 inter/union accuracies:  th=0.50 acc=0.41 th=0.20 acc=0.60 th=0.01 acc=0.76
nm=yag eb_alg_none_f1_f2_f4-regress.h5 inter/union accuracies:  th=0.50 acc=0.34 th=0.20 acc=0.68 th=0.01 acc=0.82
nm=vcc eb_alg_none_f1_f2_f4-regress.h5 inter/union accuracies:  th=0.50 acc=0.11 th=0.20 acc=0.34 th=0.01 acc=0.61
nm=yag eb_alg_denoise-log_f1_f2_f4-regress.h5 inter/union accuracies:  th=0.50 acc=0.47 th=0.20 acc=0.75 th=0.01 acc=0.89
nm=vcc eb_alg_denoise-log_f1_f2_f4-regress.h5 inter/union accuracies:  th=0.50 acc=0.26 th=0.20 acc=0.42 th=0.01 acc=0.56
nm=yag eb_alg_denoise-max-log_f1_f2_f4-regress.h5 inter/union accuracies:  th=0.50 acc=0.34 th=0.20 acc=0.62 th=0.01 acc=0.81
nm=vcc eb_alg_denoise-max-log_f1_f2_f4-regress.h5 inter/union accuracies:  th=0.50 acc=0.28 th=0.20 acc=0.39 th=0.01 acc=0.48
nm=yag eb_subbkg_alg_none_f1_f2_f4-regress.h5 inter/union accuracies:  th=0.50 acc=0.38 th=0.20 acc=0.73 th=0.01 acc=0.86
nm=vcc eb_subbkg_alg_none_f1_f2_f4-regress.h5 inter/union accuracies:  th=0.50 acc=0.08 th=0.20 acc=0.28 th=0.01 acc=0.55
nm=yag eb_subbkg_alg_denoise-log_f1_f2_f4-regress.h5 inter/union accuracies:  th=0.50 acc=0.56 th=0.20 acc=0.81 th=0.01 acc=0.92
nm=vcc eb_subbkg_alg_denoise-log_f1_f2_f4-regress.h5 inter/union accuracies:  th=0.50 acc=0.23 th=0.20 acc=0.46 th=0.01 acc=0.65
nm=yag eb_subbkg_alg_denoise-max-log_f1_f2_f4-regress.h5 inter/union accuracies:  th=0.50 acc=0.27 th=0.20 acc=0.64 th=0.01 acc=0.84
nm=vcc eb_subbkg_alg_denoise-max-log_f1_f2_f4-regress.h5 inter/union accuracies:  th=0.50 acc=0.26 th=0.20 acc=0.48 th=0.01 acc=0.63
nm=yag eb_varthresh_alg_none_f1_f2-regress.h5 inter/union accuracies:  th=0.50 acc=0.41 th=0.20 acc=0.79 th=0.01 acc=0.86
nm=vcc eb_varthresh_alg_none_f1_f2-regress.h5 inter/union accuracies:  th=0.50 acc=0.10 th=0.20 acc=0.38 th=0.01 acc=0.63
nm=yag eb_varthresh_alg_denoise-log_f1_f2-regress.h5 inter/union accuracies:  th=0.50 acc=0.67 th=0.20 acc=0.87 th=0.01 acc=0.90
nm=vcc eb_varthresh_alg_denoise-log_f1_f2-regress.h5 inter/union accuracies:  th=0.50 acc=0.37 th=0.20 acc=0.58 th=0.01 acc=0.71
nm=yag eb_varthresh_alg_denoise-max-log_f1_f2-regress.h5 inter/union accuracies:  th=0.50 acc=0.45 th=0.20 acc=0.77 th=0.01 acc=0.88
nm=vcc eb_varthresh_alg_denoise-max-log_f1_f2-regress.h5 inter/union accuracies:  th=0.50 acc=0.40 th=0.20 acc=0.59 th=0.01 acc=0.76
nm=yag eb_varthresh_alg_none_f1_f2_f4-regress.h5 inter/union accuracies:  th=0.50 acc=0.30 th=0.20 acc=0.67 th=0.01 acc=0.82
nm=vcc eb_varthresh_alg_none_f1_f2_f4-regress.h5 inter/union accuracies:  th=0.50 acc=0.10 th=0.20 acc=0.33 th=0.01 acc=0.60
nm=yag eb_varthresh_alg_denoise-log_f1_f2_f4-regress.h5 inter/union accuracies:  th=0.50 acc=0.47 th=0.20 acc=0.75 th=0.01 acc=0.89
nm=vcc eb_varthresh_alg_denoise-log_f1_f2_f4-regress.h5 inter/union accuracies:  th=0.50 acc=0.24 th=0.20 acc=0.42 th=0.01 acc=0.57
nm=yag eb_varthresh_alg_denoise-max-log_f1_f2_f4-regress.h5 inter/union accuracies:  th=0.50 acc=0.34 th=0.20 acc=0.62 th=0.01 acc=0.81
nm=vcc eb_varthresh_alg_denoise-max-log_f1_f2_f4-regress.h5 inter/union accuracies:  th=0.50 acc=0.28 th=0.20 acc=0.39 th=0.01 acc=0.48
nm=yag eb_varthresh_subbkg_alg_none_f1_f2_f4-regress.h5 inter/union accuracies:  th=0.50 acc=0.35 th=0.20 acc=0.72 th=0.01 acc=0.86
nm=vcc eb_varthresh_subbkg_alg_none_f1_f2_f4-regress.h5 inter/union accuracies:  th=0.50 acc=0.07 th=0.20 acc=0.25 th=0.01 acc=0.53
nm=yag eb_varthresh_subbkg_alg_denoise-log_f1_f2_f4-regress.h5 inter/union accuracies:  th=0.50 acc=0.54 th=0.20 acc=0.79 th=0.01 acc=0.92
nm=vcc eb_varthresh_subbkg_alg_denoise-log_f1_f2_f4-regress.h5 inter/union accuracies:  th=0.50 acc=0.22 th=0.20 acc=0.44 th=0.01 acc=0.64
nm=yag eb_varthresh_subbkg_alg_denoise-max-log_f1_f2_f4-regress.h5 inter/union accuracies:  th=0.50 acc=0.27 th=0.20 acc=0.65 th=0.01 acc=0.84
nm=vcc eb_varthresh_subbkg_alg_denoise-max-log_f1_f2_f4-regress.h5 inter/union accuracies:  th=0.50 acc=0.26 th=0.20 acc=0.47 th=0.01 acc=0.62

 

Best Result - YAG

The best 1% overlap for the YAG is 92%

...

Best Results 

The best results have been obtained using some signal preprocessing developed by Abdullah Ahmed. The de-noising is roughly:

(figure: de-noising steps)

Best Result - VCC

The best 1% overlap for the VCC is 76%. 

  • It is over files 1,2
  • used denoise-log
  • adding file 4, with subbkg, reduced acc to 63%
  • adding file 4, without subbkg reduced acc to 48%


Third Pass 

Here we did better de-noising and compared to a signal processing approach. The de-noising is:

  • vcc: threshold at 255
  • opencv medianBlur
    • yag: 5pt
    • vcc: 7pt
  • opencv GaussianBlur
    • yag: 55 x 55
    • vcc: 15 x 15
  • yag: threshold at 1.5 (where < 1.5, set to 1.5) 
  • lanczos reduction
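The yag branch of this pipeline can be sketched with scipy.ndimage equivalents standing in for the opencv calls (the Gaussian sigma here is an assumption - opencv derives it from the kernel size when sigma=0 - and the final lanczos reduction to (224, 224) would follow this):

```python
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

def denoise_yag(img):
    out = median_filter(img.astype(np.float32), size=5)  # ~ cv2.medianBlur, 5pt
    out = gaussian_filter(out, sigma=8.6)                # ~ cv2.GaussianBlur 55x55 (sigma assumed)
    np.clip(out, 1.5, None, out=out)                     # where < 1.5, set to 1.5
    return out

den = denoise_yag(np.random.rand(64, 64).astype(np.float32))
print(den.min() >= 1.5)  # True
```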

After doing the de-noising, and before the reduction, we find the maximum value in the image and call it a hit if it is in the labeled box. This signal processing solution performs quite well. Over files 1,2,4 and doing the background subtraction for file 4, it does:

...
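The max-pixel hit test described above can be sketched as follows; the (r0, r1, c0, c1) box convention is an assumption:

```python
import numpy as np

def max_pixel_hit(img, box):
    # call it a hit if the brightest pixel after de-noising falls inside the labeled box
    r, c = np.unravel_index(np.argmax(img), img.shape)
    r0, r1, c0, c1 = box
    return (r0 <= r < r1) and (c0 <= c < c1)

img = np.zeros((100, 100), dtype=np.float32)
img[40, 60] = 5.0
print(max_pixel_hit(img, (30, 50, 50, 70)))  # True
```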