...
Code Block

```
In [3]: import scipy.io as sio

In [4]: labeledimg1 = sio.loadmat('labeledimg1.mat')

In [8]: vccImg = labeledimg1['vccImg']

In [18]: vccBox = labeledimg1['vccbox']

# vccImg and vccBox show up as 1 x 110 arrays of 'object'; they are the
# images and labels for 110 samples.
# As Siqi says, a box entry is empty if no beam is present. Here we count
# the non-empty boxes, i.e. the samples with a beam:

In [23]: len([bx for bx in vccBox[0,:] if len(bx)>0])
Out[23]: 80

# The first entry with a box is 4, so you can plot like:

In [24]: %pylab

In [26]: imshow(vccImg[0,4])

In [27]: bx = vccBox[0,4]

In [31]: ymin,ymax,xmin,xmax = bx[0,:]

In [32]: plot([xmin,xmin,xmax,xmax,xmin],[ymin,ymax,ymax,ymin,ymin], 'w')
```
In which case I see the vcc image with the box drawn over the beam.
Data
- Between the two files, there are 142 samples.
- Each sample has a yag image, a vcc image, and a box for each.
- If there is a non-empty box for yag, there is a non-empty box for vcc, and vice versa.
- vcc values are in [0, 255], and the boxed beam can get quite bright.
- yag values go over 1000, I think, but the boxed beam is always dim, with values only up to about 14.
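The bookkeeping behind these counts can be written as two small helpers. This is a sketch: the 1 x N object-array layout follows the loadmat session above, and the pairing check just restates the yag/vcc claim in code.

```python
import numpy as np

def count_beam_samples(box_array):
    # box_array is a 1 x N object array from loadmat; an empty
    # entry means no beam was labeled in that sample.
    return sum(1 for bx in box_array[0, :] if len(bx) > 0)

def boxes_paired(yag_boxes, vcc_boxes):
    # Check the claim above: a sample has a non-empty yag box
    # exactly when it has a non-empty vcc box.
    return all((len(y) > 0) == (len(v) > 0)
               for y, v in zip(yag_boxes[0, :], vcc_boxes[0, :]))
```

On the real data this would be e.g. `count_beam_samples(labeledimg1['vccbox'])`, which gave 80 above.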
First Pass
We have to fit the 480 x 640 vcc images and the 1040 x 1392 yag images into 224 x 224 x 3 RGB images.
I thresholded yag at 255, then made a grayscale image for each, using a scipy imresize option.
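That preprocessing step might look like the following sketch. Since scipy's imresize has been removed in newer scipy versions, this stands in a plain nearest-neighbor downsample in numpy; the 255 threshold only matters for the yag images.

```python
import numpy as np

def to_vgg_input(img, size=224):
    g = np.minimum(img.astype(float), 255.0)       # threshold (yag goes over 1000)
    rows = np.arange(size) * img.shape[0] // size  # nearest-neighbor
    cols = np.arange(size) * img.shape[1] // size  # sampling grid
    g = g[np.ix_(rows, cols)]
    return np.stack([g, g, g], axis=-1)            # grayscale -> size x size x 3
```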
I generated codewords for the yag and vcc. The vcc, which has a bright beam, shows a lot of structure:
These are plotted with a very large aspect ratio; the bottom rows are the 'nobeam' images.
However, with the yag images there is very little difference between nobeam and beam:
I suspect we will not be able to do much with these codewords without more preprocessing of the yag images. I think they are too faint for what VGG16 expects, since it was trained on the ImageNet color images.
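One cheap preprocessing option, an idea I have not tried yet rather than anything tested, would be a percentile contrast stretch, mapping the dim yag beam range onto the full [0, 255] scale that VGG16 saw during ImageNet training:

```python
import numpy as np

def stretch(img, lo_pct=1.0, hi_pct=99.0):
    # Map the [lo_pct, hi_pct] percentile range of the image onto
    # [0, 255], clipping outliers; a faint beam (values up to ~14)
    # then uses the full dynamic range.
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    out = (img.astype(float) - lo) / max(hi - lo, 1e-9)
    return np.clip(out, 0.0, 1.0) * 255.0
```

This would slot in before the resize step above, applied to the raw yag image.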