Hand-Tear

If you also have trouble with hand-tearing code (writing it from scratch by hand), we provide some simple hand-tearing code examples and hope they help you! 👻

1. ScaledDotProductAttention function (SDPA_I.py)

  • We abbreviate the function as SDPA.

$$ Attention(Q,K,V) = softmax\left( \frac{QK^{T}}{\sqrt{d_{k}}} \right)V $$
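A minimal NumPy sketch of the formula above. The array shapes and the optional mask argument are illustrative assumptions, not necessarily the interface used in SDPA_I.py.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Q, K: (seq_len, d_k); V: (seq_len, d_v). mask is a hypothetical boolean array."""
    d_k = Q.shape[-1]
    # Similarity scores, scaled by sqrt(d_k) to keep the softmax well-behaved
    scores = Q @ K.T / np.sqrt(d_k)
    if mask is not None:
        scores = np.where(mask, scores, -1e9)  # block masked positions
    # Row-wise softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Weighted sum of the values
    return weights @ V
```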

2. MultiHeadAttention function (MHA.py)

  • We abbreviate the function as MHA.
  • First, split the single head into multiple heads.
  • Second, apply attention within each head, just as in the single-head case.
  • Third, concatenate the heads and obtain the final output through an affine transformation, as sketched below.
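A hedged PyTorch sketch of the three steps above: project into several heads, attend per head, then merge with an affine (linear) output projection. The dimension names and module layout are assumptions, not the exact contents of MHA.py.

```python
import torch
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model, num_heads):
        super().__init__()
        assert d_model % num_heads == 0
        self.num_heads = num_heads
        self.d_head = d_model // num_heads
        self.w_q = nn.Linear(d_model, d_model)
        self.w_k = nn.Linear(d_model, d_model)
        self.w_v = nn.Linear(d_model, d_model)
        self.w_o = nn.Linear(d_model, d_model)  # final affine transformation

    def forward(self, x):
        batch, seq_len, d_model = x.shape
        # 1) project and split into heads: (batch, heads, seq_len, d_head)
        def split(t):
            return t.view(batch, seq_len, self.num_heads, self.d_head).transpose(1, 2)
        q, k, v = split(self.w_q(x)), split(self.w_k(x)), split(self.w_v(x))
        # 2) scaled dot-product attention inside each head
        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5
        out = scores.softmax(dim=-1) @ v
        # 3) concatenate heads and apply the output projection
        out = out.transpose(1, 2).reshape(batch, seq_len, d_model)
        return self.w_o(out)
```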

3. Stochastic gradient descent (SGD_I.py)

  • We abbreviate the method as SGD; a sketch follows below.
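A minimal NumPy sketch of an SGD loop that fits a linear model one random sample at a time. The toy data, squared-error loss, and learning rate are illustrative assumptions, not taken from SGD_I.py.

```python
import numpy as np

def sgd(X, y, lr=0.01, epochs=100):
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(epochs):
        for i in np.random.permutation(n):   # one randomly chosen sample per step
            pred = X[i] @ w + b
            err = pred - y[i]
            w -= lr * err * X[i]             # gradient of 0.5*(pred - y)^2 w.r.t. w
            b -= lr * err                    # gradient w.r.t. b
    return w, b

# Usage: fit y = 2*x on toy data
X = np.random.rand(50, 1)
y = 2 * X[:, 0]
w, b = sgd(X, y)
```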

4. Backward propagation (BP_I.py)

  • We abbreviate the method as BP.
  • Gradient descent is used to update all of the weights, so that the output of the forward propagation fits the target more closely; see the sketch below.
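A hedged sketch of backward propagation for a tiny one-hidden-layer network: gradients are computed by the chain rule and the weights are updated with gradient descent. The layer sizes, sigmoid activations, and toy data are assumptions, not necessarily what BP_I.py uses.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.random((8, 3))                  # toy inputs
y = rng.random((8, 1))                  # toy targets
W1, W2 = rng.random((3, 4)), rng.random((4, 1))

lr = 0.5
for _ in range(1000):
    # forward pass
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # backward pass: propagate the error through the chain rule
    d_out = (out - y) * out * (1 - out)
    d_h = d_out @ W2.T * h * (1 - h)
    # gradient-descent updates
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h
```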

5. k-means example (k-means.py)
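A minimal NumPy sketch of Lloyd's k-means algorithm: alternately assign points to their nearest centroid and move each centroid to the mean of its cluster. The cluster count and iteration limit are illustrative, not necessarily what k-means.py uses.

```python
import numpy as np

def kmeans(X, k=3, n_iters=100):
    rng = np.random.default_rng(0)
    centers = X[rng.choice(len(X), k, replace=False)]   # random initial centroids
    for _ in range(n_iters):
        # assign each point to its nearest centroid
        dists = np.linalg.norm(X[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its assigned points
        new_centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):            # stop when converged
            break
        centers = new_centers
    return centers, labels
```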

6. RNN example (RNN.py)

  • We abbreviate recurrent neural network as RNN.
  • Defines an RNN model that contains one RNN layer and one output layer.
  • generate_data function: generates random data to simulate training data.
  • train function:
    • Collects the loss value for each epoch.
    • Trains the model and returns the list of loss values (see the sketch below).
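A hedged PyTorch sketch mirroring the description above: an RNN layer plus a linear output layer, a generate_data helper that produces random sequences, and a train function that records the loss for each epoch. The shapes, loss function, and hyperparameters are assumptions, not the exact contents of RNN.py.

```python
import torch
import torch.nn as nn

class SimpleRNN(nn.Module):
    def __init__(self, input_size=1, hidden_size=16, output_size=1):
        super().__init__()
        self.rnn = nn.RNN(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)   # output layer

    def forward(self, x):
        out, _ = self.rnn(x)           # (batch, seq_len, hidden)
        return self.fc(out[:, -1])     # predict from the last time step

def generate_data(n_samples=64, seq_len=10):
    # random sequences as stand-in training data
    x = torch.randn(n_samples, seq_len, 1)
    y = x.sum(dim=1)                   # toy target: sum of each sequence
    return x, y

def train(model, epochs=50, lr=0.01):
    x, y = generate_data()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    losses = []
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
        losses.append(loss.item())     # collect the loss for each epoch
    return losses

losses = train(SimpleRNN())
```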
