[NLP][Paper Review] Distilling the Knowledge in a Neural Network



Paper link: https://arxiv.org/abs/1503.02531

From the abstract: "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome…"
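Since the post itself is only a preview, the following is a minimal sketch (not from the post) of the temperature-based distillation loss the paper proposes: a student is trained on temperature-softened teacher probabilities in addition to the usual hard-label cross-entropy. The function name and the values of T and alpha here are illustrative assumptions, not settings from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: soften both teacher and student logits with temperature T.
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    # KL divergence between the softened distributions, scaled by T^2
    # so its gradient magnitude matches the hard-label term (as noted in the paper).
    soft_loss = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
    # Standard cross-entropy with the ground-truth hard labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    # Weighted combination of the two objectives.
    return alpha * soft_loss + (1 - alpha) * hard_loss
```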

