  • Machine Learning Paper Reviews (Mostly NLP)

Paper review (5)

Detour for Domain-Specific Tasks Avoiding Adaptive Pre-training KALA: Knowledge-Augmented Language Model Adaptation Minki Kang, Jinheon Baek, Sung Ju Hwang 4 Aug 2022 2023. 2. 7.
LUKE: Language Understanding with Knowledge-Based Embeddings LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention Ikuya Yamada, Akari Asai, et al. 2 Oct 2020 Abstract As the title suggests, this paper proposes LUKE, deep contextualized entity representations based on a bidirectional transformer. LUKE has achieved state-of-the-art results on five well-known entity-related tasks, such as NER (Named Entity Recognition) and.. 2023. 2. 4.
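The entity-aware self-attention named in that title replaces the single query matrix of standard self-attention with four query matrices, chosen by whether the attending token and the attended token are words or entities (word-to-word, word-to-entity, entity-to-word, entity-to-entity), while keys and values stay shared. Below is a minimal single-head PyTorch sketch of that idea; the module name, shapes, and toy usage are illustrative assumptions, not LUKE's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EntityAwareSelfAttention(nn.Module):
    """Single-head sketch of LUKE-style entity-aware self-attention.

    One shared key/value projection; the query projection for a score
    (i, j) is picked by the (word/entity) types of tokens i and j.
    """

    def __init__(self, hidden_size: int):
        super().__init__()
        self.key = nn.Linear(hidden_size, hidden_size)
        self.value = nn.Linear(hidden_size, hidden_size)
        # one query matrix per (attending type, attended type) pair
        self.q_w2w = nn.Linear(hidden_size, hidden_size)
        self.q_w2e = nn.Linear(hidden_size, hidden_size)
        self.q_e2w = nn.Linear(hidden_size, hidden_size)
        self.q_e2e = nn.Linear(hidden_size, hidden_size)
        self.scale = hidden_size ** -0.5

    def forward(self, hidden: torch.Tensor, is_entity: torch.Tensor):
        # hidden: (seq_len, hidden_size); is_entity: (seq_len,) bool
        k = self.key(hidden)    # shared keys
        v = self.value(hidden)  # shared values

        # queries to use when the attended token j is a word / an entity
        q_to_word = torch.where(
            is_entity.unsqueeze(-1), self.q_e2w(hidden), self.q_w2w(hidden))
        q_to_ent = torch.where(
            is_entity.unsqueeze(-1), self.q_e2e(hidden), self.q_w2e(hidden))

        scores_word = q_to_word @ k.T * self.scale
        scores_ent = q_to_ent @ k.T * self.scale
        # per column j, keep the score computed with the matching query type
        scores = torch.where(is_entity.unsqueeze(0), scores_ent, scores_word)

        return F.softmax(scores, dim=-1) @ v

# toy usage: four word tokens followed by two entity tokens
attn = EntityAwareSelfAttention(hidden_size=64)
out = attn(torch.randn(6, 64),
           torch.tensor([0, 0, 0, 0, 1, 1], dtype=torch.bool))
```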
Robustly Optimized BERT Approach RoBERTa: A Robustly Optimized BERT Pretraining Approach Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA 26 Jul 2019 Abstract RoBERTa (A Robustly Optimized BERT Approach) is a replication study of BERT that improves its performance by tuning hyperparameters and enlarging the training data. The paper points out the training objectives (MLM, NSP) which .. 2023. 1. 26.
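For context on the MLM objective mentioned in that excerpt: BERT corrupts about 15% of input tokens (80% replaced with [MASK], 10% with a random token, 10% left unchanged) and predicts the originals, while RoBERTa re-draws this mask on the fly each epoch ("dynamic masking") and drops NSP. A hedged sketch of that corruption step, assuming a hypothetical helper mask_tokens and omitting special-token handling for brevity:

```python
import torch

def mask_tokens(input_ids: torch.Tensor, vocab_size: int,
                mask_token_id: int, mlm_prob: float = 0.15):
    """BERT-style MLM corruption; calling this per epoch gives RoBERTa's
    dynamic masking. Returns (corrupted input_ids, labels)."""
    labels = input_ids.clone()
    input_ids = input_ids.clone()

    # choose ~15% of positions as prediction targets
    masked = torch.bernoulli(torch.full(input_ids.shape, mlm_prob)).bool()
    labels[~masked] = -100  # PyTorch cross-entropy ignore index

    # 80% of targets -> [MASK]
    replace = torch.bernoulli(torch.full(input_ids.shape, 0.8)).bool() & masked
    input_ids[replace] = mask_token_id

    # 10% of targets -> a random token (half of the remaining 20%)
    rand = (torch.bernoulli(torch.full(input_ids.shape, 0.5)).bool()
            & masked & ~replace)
    input_ids[rand] = torch.randint(vocab_size, (int(rand.sum()),))

    # the final 10% keep their original token
    return input_ids, labels
```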
A Paradigm Shift for Non-English (Korean) Language Processing KR-BERT: A Small-Scale Korean-Specific Language Model Sangah Lee, Hansol Jang, et al. 11 Aug 2020 Abstract Bidirectional Encoder Representations from Transformers (BERT), the world's dominant machine learning model for language processing, has also spawned various task-specific models over its history. This paper presents a language model also derived from BERT, called KR-.. 2023. 1. 17.