Deep Learning

Transfer Learning

집빈지노 · 2021-08-17 18:12

How to use pre-trained models

- When integrating the whole pre-trained model, or a portion of it, into a new model,

  a) the weights of the pre-trained model can be frozen so that they are not updated as the new model is trained, or

  b) the weights may be updated during training of the new model, typically with a lower learning rate
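The two options above can be sketched with Keras (an assumption; the post does not name a framework). `weights=None` is used here only to skip the ImageNet download; in practice you would pass `weights="imagenet"`:

```python
import tensorflow as tf

# Load a pre-trained backbone without its classification head.
# (weights="imagenet" in practice; None here only to avoid the download)
base = tf.keras.applications.VGG16(weights=None, include_top=False,
                                   input_shape=(224, 224, 3))

# (a) Freeze: the backbone's weights are not updated while the new head trains.
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # new task-specific head
])

# (b) Fine-tune: unfreeze the backbone and continue training the whole model
# with a much lower learning rate, so the pre-trained weights change gently.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="categorical_crossentropy")
```

A common recipe combines both: train the new head first with the backbone frozen (a), then unfreeze and fine-tune end to end (b).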

 

- Summary of usage patterns

  a) Classifier: The pre-trained model is used directly to classify new images.

  b) Standalone Feature Extractor: The pre-trained model, or some portion of the model, is used to pre-process images and extract relevant features.

  c) Integrated Feature Extractor: The pre-trained model, or some portion of the model, is integrated into a new model, but layers of the pre-trained model are frozen during training.

  d) Weight Initialization: The pre-trained model, or some portion of the model, is integrated into a new model, and the layers of the pre-trained model are trained in concert with the new model.
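As an illustration of pattern (b), a standalone feature extractor can be sketched in Keras (framework choice and the random batch are assumptions for the example): the backbone turns each image into a fixed-length feature vector, which can then feed a separate classifier such as an SVM.

```python
import numpy as np
import tensorflow as tf

# Headless backbone with global average pooling: one feature vector per image.
# (weights="imagenet" in practice; None here only to avoid the download)
backbone = tf.keras.applications.VGG16(weights=None, include_top=False,
                                       pooling="avg", input_shape=(224, 224, 3))

# Placeholder batch of 4 images standing in for a real dataset.
images = np.random.rand(4, 224, 224, 3).astype("float32")

features = backbone.predict(images, verbose=0)
print(features.shape)  # (4, 512): a 512-dim vector per image
```

Patterns (c) and (d) reuse the same backbone but wire it into a new model, differing only in whether its layers are frozen or trained in concert with the new layers.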

 

Models for Transfer Learning

Three popular models for image classification:

  • VGG (e.g. VGG16 or VGG19) : consistent and repeating structures
  • GoogLeNet (e.g. InceptionV3) : inception modules
  • Residual Network (e.g. ResNet50) : residual modules
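All three families ship with `tf.keras.applications` (assuming Keras; `weights=None` again only avoids the ImageNet download), which makes their sizes easy to compare:

```python
import tensorflow as tf

# Instantiate each backbone with its default 1000-class ImageNet head.
models = {
    "VGG16": tf.keras.applications.VGG16(weights=None),
    "InceptionV3": tf.keras.applications.InceptionV3(weights=None),
    "ResNet50": tf.keras.applications.ResNet50(weights=None),
}

for name, m in models.items():
    print(f"{name}: {m.count_params():,} parameters")
```

Note that the default input sizes differ (224x224 for VGG16 and ResNet50, 299x299 for InceptionV3), so each model's matching `preprocess_input` function should be used on the images.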

 
