
BERT vs OpenAI GPT vs ELMo

https://dongnian.icu/llm_interview_note/#/01.%E5%A4%A7%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B%E5%9F%BA%E7%A1%80/NLP%E9%9D%A2%E8%AF%95%E9%A2%98/NLP%E9%9D%A2%E8%AF%95%E9%A2%98
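The linked page compares the three pretraining approaches. As a minimal illustration (a sketch, not taken from the linked page): ELMo uses bidirectional LSTMs with shallowly concatenated features, while BERT and GPT are both Transformers whose key split is the attention mask — GPT is a decoder where each token attends only to earlier positions (causal), whereas BERT is an encoder where every token attends to the whole sequence.

```python
import numpy as np

def attention_mask(seq_len: int, causal: bool) -> np.ndarray:
    """Return a boolean attention mask of shape (seq_len, seq_len).

    causal=True  -> GPT-style decoder: token i attends only to positions <= i.
    causal=False -> BERT-style encoder: every token attends to every position.
    """
    if causal:
        # Lower-triangular mask: future positions are hidden.
        return np.tril(np.ones((seq_len, seq_len), dtype=bool))
    # Full visibility in both directions.
    return np.ones((seq_len, seq_len), dtype=bool)

gpt_mask = attention_mask(4, causal=True)   # 10 visible pairs out of 16
bert_mask = attention_mask(4, causal=False)  # all 16 pairs visible
```

This masking difference is what makes GPT a natural left-to-right generator and BERT a natural bidirectional feature extractor (trained instead with masked-token prediction).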

