# Summary v2

* NN<br>
  * Perceptron & Two-Layer NN<br>
  * hidden layer & activation function & softmax<br>
  * details<br>
    * regularization & dropout<br>
    * initialization<br>
    * batch normalization<br>
  * backpropagation (sketch below)<br>
  * vanishing & exploding gradients<br>
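
  A minimal NumPy sketch of the items above, assuming nothing beyond the outline: a two-layer network with a ReLU hidden layer and softmax output, one backpropagation pass, and an SGD step. All sizes, data, and the learning rate are made up for illustration.

  ```python
  import numpy as np

  rng = np.random.default_rng(0)

  # Toy data: 4 samples, 3 features, 2 classes (one-hot labels).
  X = rng.normal(size=(4, 3))
  Y = np.eye(2)[[0, 1, 1, 0]]

  # Two-layer NN: input -> hidden (ReLU) -> softmax output.
  W1, b1 = rng.normal(scale=0.1, size=(3, 5)), np.zeros(5)
  W2, b2 = rng.normal(scale=0.1, size=(5, 2)), np.zeros(2)

  def softmax(z):
      z = z - z.max(axis=1, keepdims=True)          # numerical stability
      e = np.exp(z)
      return e / e.sum(axis=1, keepdims=True)

  # Forward pass.
  H = np.maximum(0, X @ W1 + b1)                    # hidden layer + ReLU
  P = softmax(H @ W2 + b2)                          # class probabilities
  loss = -np.mean(np.sum(Y * np.log(P), axis=1))    # cross-entropy

  # Backpropagation: apply the chain rule layer by layer.
  dZ2 = (P - Y) / len(X)                            # gradient at the softmax logits
  dW2, db2 = H.T @ dZ2, dZ2.sum(axis=0)
  dZ1 = (dZ2 @ W2.T) * (H > 0)                      # ReLU passes gradient only where H > 0
  dW1, db1 = X.T @ dZ1, dZ1.sum(axis=0)

  # One SGD step.
  lr = 0.1
  W1 -= lr * dW1; b1 -= lr * db1
  W2 -= lr * dW2; b2 -= lr * db2
  print(f"loss = {loss:.4f}")
  ```

  Regularization, dropout, careful initialization, and batch normalization all modify pieces of this loop (the loss, the forward pass, or how W1/W2 start out); vanishing/exploding gradients appear when many such layers are chained.
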
* Basic CNN<br>
  * convolution<br>
  * different types of kernel<br>
    * identity<br>
    * smoothing<br>
    * sharpening<br>
    * edge<br>
  * padding & stride (sketch below)<br>
  * layers<br>
    * convolution<br>
    * max/average pooling<br>
    * fully connected<br>
    * Non-linearity and ReLU Layer<br>
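
  A rough sketch of 2D cross-correlation (what deep-learning libraries call convolution) with zero padding and stride, applied with the classic 3x3 kernels named above. The kernel values are the standard textbook ones; the input image is made up.

  ```python
  import numpy as np

  def corr2d(img, kernel, padding=0, stride=1):
      """2D cross-correlation of a single-channel image with a kernel."""
      img = np.pad(img, padding)                    # zero padding on every side
      kh, kw = kernel.shape
      oh = (img.shape[0] - kh) // stride + 1        # output height
      ow = (img.shape[1] - kw) // stride + 1        # output width
      out = np.zeros((oh, ow))
      for i in range(oh):
          for j in range(ow):
              patch = img[i * stride:i * stride + kh, j * stride:j * stride + kw]
              out[i, j] = (patch * kernel).sum()    # elementwise product, then sum
      return out

  # Classic 3x3 kernels.
  identity  = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]])
  smoothing = np.ones((3, 3)) / 9                   # box blur
  sharpen   = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])
  edge      = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]])

  img = np.arange(36, dtype=float).reshape(6, 6)
  print(corr2d(img, edge, padding=1, stride=1).shape)   # (6, 6): padding keeps the size
  print(corr2d(img, edge, padding=0, stride=2).shape)   # (2, 2): stride shrinks the map
  ```

  A convolution layer learns these kernel values instead of fixing them; pooling and fully connected layers then reduce and classify the resulting feature maps.
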
* Advanced CNN<br>
  * LeNet<br>
  * AlexNet<br>
  * VGG<br>
  * NiN<br>
  * Multi-Branch Networks (GoogLeNet & Inception)<br>
  * Residual Networks (ResNet) and ResNeXt (sketch below)<br>
  * Densely Connected Networks (DenseNet)<br>
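
  The networks above differ mainly in how a block's output is combined with its input. A toy sketch of the ResNet idea, using small dense layers instead of convolutions to keep it short; all sizes are arbitrary.

  ```python
  import numpy as np

  rng = np.random.default_rng(0)
  d = 8                                          # feature dimension (arbitrary)
  W1 = rng.normal(scale=0.1, size=(d, d))
  W2 = rng.normal(scale=0.1, size=(d, d))

  def residual_block(x):
      """y = ReLU(x + F(x)): the identity shortcut lets gradients skip the block."""
      f = np.maximum(0, x @ W1)                  # first transform + ReLU
      f = f @ W2                                 # second transform (the residual F)
      return np.maximum(0, x + f)                # add the shortcut, then ReLU

  x = rng.normal(size=(2, d))
  print(residual_block(x).shape)                 # (2, 8): output shape matches the input

  # DenseNet-style blocks concatenate instead of adding, so features accumulate:
  # np.concatenate([x, f], axis=-1)
  ```
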
* Basic RNN<br>
  * Seq Model<br>
    * AR (autoregressive)<br>
    * Markov<br>
  * Text to Seq<br>
    * tokenization<br>
    * vocabulary<br>
  * Language Model<br>
    * Markov Model & N-grams<br>
    * Word Frequency<br>
    * Laplace Smoothing<br>
    * Perplexity (sketch below)<br>
    * Partitioning Sequences (random sampling & sequential partitioning)<br>
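
    A small sketch of a bigram language model with Laplace (add-one) smoothing and the perplexity it assigns to held-out text. The tiny corpus and test sentences are made up for illustration.

    ```python
    import math
    from collections import Counter

    corpus = "the cat sat on the mat the cat ate".split()
    vocab = sorted(set(corpus))
    V = len(vocab)

    # Bigram and unigram counts (a first-order Markov model over words).
    bigrams = Counter(zip(corpus, corpus[1:]))
    unigrams = Counter(corpus)

    def p_laplace(w, prev):
        """P(w | prev) with add-one smoothing, so unseen bigrams keep nonzero mass."""
        return (bigrams[(prev, w)] + 1) / (unigrams[prev] + V)

    def perplexity(tokens):
        """exp of the average negative log-likelihood under the bigram model."""
        nll = [-math.log(p_laplace(w, prev)) for prev, w in zip(tokens, tokens[1:])]
        return math.exp(sum(nll) / len(nll))

    print(perplexity("the cat sat".split()))    # lower: these bigrams were seen
    print(perplexity("mat ate on".split()))     # higher: unseen bigrams
    ```
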
  * RNN<br>
    * recurrence formula (sketch below)<br>
    * depth & time (input/output patterns)<br>
      * one to one<br>
      * one to many<br>
      * many to one<br>
      * many to many<br>
      * seq to seq (many to one)<br>
      * seq to seq (many to one + one to many)<br>
    * without hidden state<br>
    * with hidden state<br>
  * backpropagation through time<br>
    * truncated backpropagation through time<br>
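
  A minimal sketch of the vanilla RNN recurrence with a hidden state, unrolled over a short sequence; all dimensions are arbitrary. Truncated backpropagation through time simply stops the backward pass after a fixed number of these steps instead of unrolling the full sequence.

  ```python
  import numpy as np

  rng = np.random.default_rng(0)
  input_dim, hidden_dim, seq_len = 4, 6, 5       # arbitrary sizes

  W_xh = rng.normal(scale=0.1, size=(input_dim, hidden_dim))
  W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
  b_h = np.zeros(hidden_dim)

  xs = rng.normal(size=(seq_len, input_dim))     # one input vector per time step
  h = np.zeros(hidden_dim)                       # initial hidden state

  hidden_states = []
  for x_t in xs:
      # Recurrence formula: h_t = tanh(x_t W_xh + h_{t-1} W_hh + b_h)
      h = np.tanh(x_t @ W_xh + h @ W_hh + b_h)
      hidden_states.append(h)

  print(np.stack(hidden_states).shape)           # (5, 6): one hidden state per step
  ```

  The "time" axis is this loop; "depth" is how many such layers are stacked. The one-to-many / many-to-one / many-to-many patterns differ only in which steps receive inputs and which emit outputs.
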
* Advanced RNN<br>
  * Gated Recurrent Units (GRU; sketch below)<br>
  * Long Short-Term Memory (LSTM)<br>
  * Bidirectional Recurrent Neural Networks (BRNN)<br>
  * Encoder-Decoder Architecture<br>
  * Sequence-to-Sequence Learning (Seq2Seq)<br>
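
  A sketch of a single GRU step following the standard update-gate / reset-gate equations; an LSTM adds a separate cell state and a third gate but follows the same pattern. Sizes and the small random initialization are arbitrary.

  ```python
  import numpy as np

  rng = np.random.default_rng(0)
  d_in, d_h = 4, 6                                        # arbitrary sizes

  def randw(*shape):
      return rng.normal(scale=0.1, size=shape)            # small random init (illustrative)

  W_xz, W_hz, b_z = randw(d_in, d_h), randw(d_h, d_h), np.zeros(d_h)   # update gate
  W_xr, W_hr, b_r = randw(d_in, d_h), randw(d_h, d_h), np.zeros(d_h)   # reset gate
  W_xh, W_hh, b_h = randw(d_in, d_h), randw(d_h, d_h), np.zeros(d_h)   # candidate state

  def sigmoid(a):
      return 1 / (1 + np.exp(-a))

  def gru_step(x, h):
      z = sigmoid(x @ W_xz + h @ W_hz + b_z)              # update gate: keep old vs. new
      r = sigmoid(x @ W_xr + h @ W_hr + b_r)              # reset gate: how much history to use
      h_tilde = np.tanh(x @ W_xh + (r * h) @ W_hh + b_h)  # candidate hidden state
      return z * h + (1 - z) * h_tilde                    # interpolate old state and candidate

  h = np.zeros(d_h)
  for x in rng.normal(size=(5, d_in)):                    # run over a length-5 sequence
      h = gru_step(x, h)
  print(h.shape)                                          # (6,)
  ```

  A bidirectional RNN runs one such recurrence left-to-right and another right-to-left and concatenates the states; an encoder-decoder (Seq2Seq) pair uses one recurrence to read the source sequence and a second one to generate the target.
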
* Attention Mechanism and Transformers<br>
  * Queries, Keys, and Values<br>
  * Attention Is All You Need<br>
    * Attention and Kernel<br>
    * Attention Scoring Function<br>
    * The Bahdanau Attention<br>
  * Multi-Head Attention<br>
  * Self-Attention (sketch below)<br>
  * The Transformer Architecture<br>
  * Transformers for NLP<br>
  * Transformers for Vision & Multimodal<br>
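
  A sketch of scaled dot-product attention over queries, keys, and values. Self-attention is the special case where Q, K, and V are projections of the same sequence; multi-head attention runs several of these in parallel on split feature dimensions and concatenates the results. Shapes and weights here are arbitrary.

  ```python
  import numpy as np

  def softmax(z, axis=-1):
      z = z - z.max(axis=axis, keepdims=True)
      e = np.exp(z)
      return e / e.sum(axis=axis, keepdims=True)

  def attention(Q, K, V):
      """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
      d = Q.shape[-1]
      scores = Q @ K.T / np.sqrt(d)        # attention scoring function
      weights = softmax(scores, axis=-1)   # each query's weights over all keys
      return weights @ V                   # weighted sum of the values

  rng = np.random.default_rng(0)
  seq_len, d = 5, 8                        # arbitrary sizes
  X = rng.normal(size=(seq_len, d))

  # Self-attention: queries, keys, and values all come from the same sequence X.
  Wq, Wk, Wv = (rng.normal(scale=0.1, size=(d, d)) for _ in range(3))
  print(attention(X @ Wq, X @ Wk, X @ Wv).shape)   # (5, 8)
  ```

  The Transformer stacks this (as multi-head self-attention) with position-wise feed-forward layers, residual connections, and layer normalization.
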
* LLM Pretraining<br>
  * Word Embedding (word2vec; sketch below)<br>
  * Approximate Training<br>
  * Word Embedding with Global Vectors (GloVe)<br>
  * Encoder (BERT)<br>
  * Decoder (GPT & XLNet & LLaMA)<br>
  * Encoder-Decoder (BART & T5)<br>
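
  A sketch of the skip-gram (word2vec) idea: every word gets a center embedding and a context embedding, and P(context | center) is a softmax over dot products. The toy vocabulary and sizes are made up; approximate training (negative sampling, hierarchical softmax) exists because this full softmax over a real vocabulary is expensive.

  ```python
  import numpy as np

  rng = np.random.default_rng(0)
  vocab = ["deep", "learning", "is", "fun", "hard"]
  V, d = len(vocab), 8                               # vocabulary size, embedding dim

  center_emb = rng.normal(scale=0.1, size=(V, d))    # embeddings used as the center word
  context_emb = rng.normal(scale=0.1, size=(V, d))   # embeddings used as the context word

  def p_context_given_center(center_id):
      """Skip-gram: softmax over dot products with every context embedding."""
      scores = context_emb @ center_emb[center_id]   # (V,) dot products
      e = np.exp(scores - scores.max())
      return e / e.sum()

  probs = p_context_given_center(vocab.index("deep"))
  print(dict(zip(vocab, probs.round(3))))
  # Training raises the probability of words that actually co-occur with "deep"
  # in a context window; the embeddings are the by-product that gets reused.
  ```
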

