[논문 리뷰] Finetuned Language Models Are Zero-Shot Learners

In this post, I cover the paper Finetuned Language Models Are Zero-Shot Learners, which proposes FLAN, an instruction-tuned model built by fine-tuning an existing LLM on instructions (natural-language task descriptions). A link to the original paper is given below.


Original paper: Finetuned Language Models Are Zero-Shot Learners