[Paper Review] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

In this post, I will review the paper that introduced BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. The paper presents a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
