
Investigating transfer learning: what can we do with a fully trained ImageNet model?

  1. Preparing the data: VGGNet takes color images with [0-255] values; the small XTCAV set is grayscale, [0-255], so the single channel has to be replicated to three.
  2. VGGNet subtracts the per-channel mean from its input.
  3. The codewords are 4096-dimensional but look quite sparse, with not much variation between the lasing/no-lasing classes.
  4. Still, it looks like the network discerns the classes: the codeword means for class 0 and class 1 are about 16 apart, versus roughly 0.3 to 1 for random subsets.
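The steps above can be sketched in NumPy. This is a minimal illustration, not the actual pipeline: the per-channel means are the commonly cited VGG ImageNet values (RGB order), and the 4096-dim codewords are stubbed with random vectors where real fc-layer features would go, so the function and variable names here are assumptions.

```python
import numpy as np

# Commonly cited VGG ImageNet per-channel means, RGB order (an assumption;
# check the exact values shipped with the model weights you use).
VGG_MEAN = np.array([123.68, 116.779, 103.939])

def preprocess(gray_img):
    """Replicate a grayscale [0-255] image to 3 channels and subtract
    the per-channel mean, matching VGGNet's expected input (step 1-2)."""
    rgb = np.repeat(gray_img[..., None], 3, axis=-1).astype(np.float64)
    return rgb - VGG_MEAN

def class_mean_separation(codewords, labels):
    """Euclidean distance between the mean codewords of the two classes
    (step 4): large for true labels, small for random relabelings."""
    m0 = codewords[labels == 0].mean(axis=0)
    m1 = codewords[labels == 1].mean(axis=0)
    return np.linalg.norm(m0 - m1)

# Stub: random 4096-dim codewords stand in for real fc-layer features.
rng = np.random.default_rng(0)
codes = rng.normal(size=(100, 4096))
labels = rng.integers(0, 2, size=100)

true_sep = class_mean_separation(codes, labels)
random_sep = class_mean_separation(codes, rng.permutation(labels))
print(true_sep, random_sep)
```

With real features the separation for the true lasing/no-lasing split should stand out against shuffled labels, which is the comparison item 4 describes.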
