Investigating transfer learning: what can we do with a fully trained ImageNet model?
- Preparing the data: VGGNet takes color images with [0-255] values; the small XTCAV images are grayscale, also [0-255], so the single channel has to be replicated across three channels.
- VGGNet subtracts a per-channel mean (computed over ImageNet) from each input image.
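A minimal sketch of this preprocessing in numpy, assuming the standard per-channel means published with the original (Caffe) VGG model; the function name and input shape are illustrative:

```python
import numpy as np

# Per-channel means from the original Caffe VGG release (BGR order).
VGG_MEAN = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def preprocess_xtcav(gray):
    """Replicate a grayscale [0-255] image (H, W) across 3 channels
    and subtract VGG's per-channel mean."""
    rgb = np.repeat(gray[..., np.newaxis].astype(np.float32), 3, axis=-1)
    return rgb - VGG_MEAN

# Toy grayscale image standing in for a small XTCAV frame.
img = np.random.randint(0, 256, (224, 224)).astype(np.uint8)
x = preprocess_xtcav(img)
print(x.shape)  # (224, 224, 3)
```

Note the BGR channel ordering: if the downstream model expects RGB means, the constants would be reversed.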
- The codewords are 4096-dimensional (fc-layer activations), but they look quite sparse, with not much variation between the lasing/no-lasing classes.
- Still, it looks like the features discern something: the class-0 and class-1 mean codewords are ~16 apart, versus roughly 0.3-1 for random subsets of the data.
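The class-mean comparison above can be sketched as follows; the toy 4096-d codewords and the injected class offset are stand-ins for the real fc-layer features, and the permutation baseline mirrors the "random subsets" check:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_separation(codes, labels):
    """Euclidean distance between the per-class mean codewords."""
    m0 = codes[labels == 0].mean(axis=0)
    m1 = codes[labels == 1].mean(axis=0)
    return np.linalg.norm(m0 - m1)

# Toy stand-in for 4096-d codewords with a small real class offset.
codes = rng.normal(0.0, 1.0, (200, 4096))
labels = rng.integers(0, 2, 200)
codes[labels == 1] += 0.25  # injected lasing/no-lasing separation

real = mean_separation(codes, labels)
# Null baseline: the same statistic under random label permutations,
# analogous to comparing means of random subsets.
null = [mean_separation(codes, rng.permutation(labels)) for _ in range(10)]
print(real, np.mean(null))
```

If the real separation sits well above the permutation baseline, the codewords carry class information even before any fine-tuning.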