Commit

Update notes
Jonas1312 committed Sep 19, 2024
1 parent 8e5d554 commit 43ff391
Showing 2 changed files with 6 additions and 4 deletions.
6 changes: 4 additions & 2 deletions base/science-tech-maths/machine-learning/machine-learning.md
@@ -45,7 +45,7 @@ A machine learning algorithm is an algorithm that is able to learn patterns from
- [Generative models vs discriminative models](#generative-models-vs-discriminative-models)
- [Ensemble methods](#ensemble-methods)
- [Class imbalance](#class-imbalance)
-- [Hyperparameter Optimization](#hyperparameter-optimization)
+- [Hyperparameter Tuning and Optimization](#hyperparameter-tuning-and-optimization)
- [Gradient descent](#gradient-descent)
- [Momentum](#momentum)
- [Adaptive learning rates](#adaptive-learning-rates)
@@ -524,7 +524,9 @@ Weighted loss functions vs weighted sampling?

Many papers use the term long-tail learning to refer to class imbalance in multi-class classification tasks, so you can find a lot of relevant research under this keyword.

-## Hyperparameter Optimization
+## Hyperparameter Tuning and Optimization

+<https://developers.google.com/machine-learning/guides/deep-learning-tuning-playbook>

- Babysitting: trial and error
- Grid Search: exhaustive search over a grid of hyperparameters
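Grid search from the list above can be sketched in a few lines of plain Python. The loss function and hyperparameter names here are toy stand-ins for an actual train-and-validate step:

```python
from itertools import product

# Toy "validation loss" standing in for training a model and
# evaluating it on a held-out set (hypothetical example).
def validation_loss(lr, batch_size):
    return (lr - 0.1) ** 2 + 0.001 * abs(batch_size - 64)

grid = {
    "lr": [0.001, 0.01, 0.1, 1.0],
    "batch_size": [16, 32, 64, 128],
}

# Exhaustively evaluate every combination on the grid,
# keeping the configuration with the lowest loss.
best_loss, best_config = None, None
for lr, bs in product(grid["lr"], grid["batch_size"]):
    loss = validation_loss(lr, bs)
    if best_loss is None or loss < best_loss:
        best_loss, best_config = loss, {"lr": lr, "batch_size": bs}

print(best_config)  # {'lr': 0.1, 'batch_size': 64}
```

In practice the same loop is what `sklearn.model_selection.GridSearchCV` runs for you, with cross-validation replacing the single loss evaluation.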
4 changes: 2 additions & 2 deletions base/science-tech-maths/machine-learning/metrics/metrics.md
@@ -15,8 +15,8 @@

## Precision-Recall curve vs ROC curve

-- Precision: how often the classifier is correct when it predicts positive $PRE = \frac{TP}{TP+FP}$ "Of all the apples I picked from the basket, how many are actually good?"
-- Recall: how often the classifier is correct for all positive instances $REC = \frac{TP}{TP+FN}$ "Of all the good apples available, how many did I actually pick?"
+- Precision: $PRE = \frac{\text{Relevant retrieved instances}}{\text{All retrieved instances}} = \frac{TP}{TP+FP}$ "Of all predicted positive, how many are actually positive?"
+- Recall: $REC = \frac{\text{Relevant retrieved instances}}{\text{All relevant instances}} = \frac{TP}{TP+FN}$ "Of all real positive cases, how many did we predict as positive?"
- [The Precision-Recall Plot Is More Informative than the ROC Plot When Evaluating Binary Classifiers on Imbalanced Datasets](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0118432)
- Indeed, ROC is useful when evaluating general-purpose classification, while AUPRC is the superior method when classifying rare events.
- <https://towardsdatascience.com/why-you-should-stop-using-the-roc-curve-a46a9adc728>
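The two formulas above map directly to counts over the positive class; a minimal sketch (the helper name and sample labels are illustrative):

```python
def precision_recall(y_true, y_pred):
    # Counts for the positive class (label 1)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0  # TP / all retrieved
    recall = tp / (tp + fn) if tp + fn else 0.0     # TP / all relevant
    return precision, recall

# 5 real positives; 4 predicted positive, 3 of them correct
y_true = [1, 1, 1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1, 0, 0]
print(precision_recall(y_true, y_pred))  # (0.75, 0.6)
```

Note the asymmetry that makes the PR curve sensitive to imbalance: neither formula ever looks at the true negatives, whereas the ROC curve's false-positive rate does.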
