  • Machine Learning Paper Reviews (Mostly NLP)

All Posts (22)

Robustly Optimized BERT Approach | RoBERTa: A Robustly Optimized BERT Pretraining Approach. Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA. 26 Jul 2019. Abstract: RoBERTa (A Robustly Optimized BERT Approach) is a replication study of BERT that improves its performance by modifying hyperparameters and enlarging the training data. The paper points out that the training objectives (MLM, NSP) which .. 2023. 1. 26.
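As a quick illustration of the MLM objective this review mentions, here is a minimal Python sketch of BERT-style masking together with RoBERTa's dynamic re-masking (re-sampling the mask pattern each epoch instead of fixing it once in preprocessing). The 15% / 80-10-10 scheme comes from BERT; the toy vocabulary and sentence are assumptions for the example.

```python
# Minimal sketch of the masked-language-modeling (MLM) objective, which
# RoBERTa keeps while dropping next-sentence prediction (NSP).
# The toy vocabulary and sentence are illustrative assumptions.
import random

MASK = "[MASK]"
VOCAB = ["the", "cat", "sat", "on", "mat", "dog", "ran"]

def mask_tokens(tokens, mask_prob=0.15, seed=None):
    rng = random.Random(seed)
    inputs, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            labels[i] = tok                    # model must predict the original
            r = rng.random()
            if r < 0.8:
                inputs[i] = MASK               # 80%: replace with [MASK]
            elif r < 0.9:
                inputs[i] = rng.choice(VOCAB)  # 10%: random token
            # remaining 10%: keep the token unchanged
    return inputs, labels

# RoBERTa's "dynamic masking": a fresh mask pattern on every pass over the data.
sentence = "the cat sat on the mat".split()
for epoch in range(2):
    print(mask_tokens(sentence, seed=epoch))
```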
A Paradigm Shift for Non-English (Korean) Language Processing | KR-BERT: A Small-Scale Korean-Specific Language Model. Sangah Lee, Hansol Jang, et al. 11 Aug 2020. Abstract: The world's dominant machine learning model for language processing, the Bidirectional Encoder Representations from Transformers (BERT), has also spawned various task-specific models over its history. This paper presents a language model, also derived from BERT, called KR-.. 2023. 1. 17.
Attention is All You Need | Attention Is All You Need. Ashish Vaswani, Noam Shazeer, et al. Dec 6, 2017. Abstract: The paper proposes a unique machine learning architecture for natural language processing called the Transformer. This architecture is based solely on the attention mechanism, dispensing with earlier network architectures such as the RNN (Recurrent Neural Network) and CNN (Convolutional Neural Network). Despite the difference, .. 2023. 1. 5.
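Since this excerpt centers on the attention mechanism, here is a minimal NumPy sketch of the paper's scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k))V. The array sizes and the random toy inputs are illustrative assumptions.

```python
# Minimal sketch of scaled dot-product attention from the Transformer paper.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                        # weighted sum of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 queries, dimension 8
K = rng.normal(size=(6, 8))   # 6 keys
V = rng.normal(size=(6, 8))   # 6 values
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```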
Google: The Most Resilient System for the Most Intricately Flawed Web Artifact | The PageRank Citation Ranking: Bringing Order to the Web [BP]. Sergey Brin and Larry Page. Google search engine. January 29, 1998. Some might think the expression "the most intricately flawed web artifact" is too offensive. However, this is the expression the paper uses to describe what the "web structure" is. The world's best-known search engine, Google, has made this artifact.. 2022. 10. 2.
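As an illustration of the ranking idea behind this paper, here is a minimal power-iteration sketch of PageRank: rank flows along out-links, damped by a factor d (0.85 is the conventional choice). The four-page toy graph is an assumption for the example.

```python
# Minimal power-iteration sketch of PageRank.
# links[i] lists the pages that page i points to; the toy graph is illustrative.
import numpy as np

def pagerank(links, d=0.85, iters=50):
    n = len(links)
    r = np.full(n, 1.0 / n)                 # start from a uniform rank vector
    for _ in range(iters):
        new = np.full(n, (1.0 - d) / n)     # damping / "random surfer" term
        for i, outs in enumerate(links):
            if outs:
                for j in outs:
                    new[j] += d * r[i] / len(outs)  # split rank over out-links
            else:
                new += d * r[i] / n         # dangling page: spread rank evenly
        r = new
    return r

print(pagerank([[1, 2], [2], [0], [0, 2]]))
```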