Commit fe7aa95 (1 parent: 9c8d31a): Added supervised learning assignment 1.

Showing 7 changed files with 133 additions and 0 deletions.
`...chs/OMSCS/CS7641-Machine-Learning/Supervised_Learning/7.Comp_learning_theory.md` (47 additions, 0 deletions)

# Computational learning theory

We've *run* a learner.

[Mondrian Composition](https://upload.wikimedia.org/wikipedia/commons/a/a4/Piet_Mondriaan%2C_1930_-_Mondrian_Composition_II_in_Red%2C_Blue%2C_and_Yellow.jpg)
[Colored Voronoi Diagram](http://upload.wikimedia.org/wikipedia/commons/2/20/Coloured_Voronoi_2D.svg) nearest-neighbor (1-NN) regions

## Learning theory

* Define learning problems
* Show that specific algorithms work
* Show that some problems are fundamentally hard.

## Resources in machine learning

Theory of computing analyzes how algorithms use resources: time and space.

What resources matter in computational learning theory?

Time, space, and **data/samples**.

## Defining inductive learning

1. Probability of successful training
2. Number of examples to train on
3. Complexity of the hypothesis class
4. Accuracy to which the target concept is approximated
5. Manner in which training examples are presented
6. Manner in which training examples are selected
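
Items 1-4 above are tied together by the standard PAC sample-complexity bound for a finite hypothesis class and a consistent learner. A minimal sketch; the function name and the example numbers are illustrative, not from the notes:

```python
import math

def pac_sample_bound(h_size, epsilon, delta):
    """Samples sufficient for a consistent learner over a finite
    hypothesis class H to be, with probability 1 - delta, within
    error epsilon: m >= (1/epsilon) * (ln|H| + ln(1/delta))."""
    return math.ceil((math.log(h_size) + math.log(1 / delta)) / epsilon)

# e.g. conjunctions over 10 boolean attributes (|H| = 3^10),
# 10% error, 95% confidence (illustrative numbers)
m = pac_sample_bound(3 ** 10, epsilon=0.1, delta=0.05)
```

Note how the bound grows only logarithmically in |H| and 1/delta, but linearly in 1/epsilon.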
## Selecting training examples

Learner / Teacher

1. Learner asks questions of the teacher: what is C(x)?
2. Teacher gives examples to help the learner.
    1. Teacher chooses x, tells C(x)
3. Fixed distribution
    1. x chosen from D by nature
4. Evil, worst-case distribution.

## Teaching via 20 questions
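
When the learner can pose yes/no questions, 20 questions is just binary search: each answer halves the candidate set, so about log2 |H| questions suffice. A minimal sketch (`twenty_questions` and the oracle are illustrative names, not from the lecture):

```python
import math

def questions_needed(h_size):
    """Halving the hypothesis set each question: ceil(log2 |H|)."""
    return math.ceil(math.log2(h_size))

def twenty_questions(hypotheses, oracle):
    """Binary search over a sorted candidate list; oracle(x) answers
    the yes/no question 'is the secret <= x?'."""
    lo, hi = 0, len(hypotheses) - 1
    asked = 0
    while lo < hi:
        mid = (lo + hi) // 2
        asked += 1
        if oracle(hypotheses[mid]):   # secret <= hypotheses[mid]?
            hi = mid
        else:
            lo = mid + 1
    return hypotheses[lo], asked

secret = 42
guess, asked = twenty_questions(list(range(100)), lambda x: secret <= x)
```

Here 100 candidates are identified in ceil(log2 100) = 7 questions.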
## Reconstructing hypothesis

* Show what's irrelevant
* Show what's relevant

`docs/techs/OMSCS/CS7641-Machine-Learning/Supervised_Learning/9.VC_dimensions.md` (3 additions, 0 deletions)

# VC Dimensions

[Vapnik–Chervonenkis (VC) dimension](https://en.wikipedia.org/wiki/Vapnik%E2%80%93Chervonenkis_dimension)
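
The VC dimension is the size of the largest point set a hypothesis class can shatter (realize every labeling of). A brute-force sketch for intervals [a, b] on the real line, an assumed toy class for illustration: intervals shatter any 2 points but never 3, since the pattern +, -, + is unrealizable, so their VC dimension is 2.

```python
from itertools import product

def intervals_can_shatter(points):
    """Can hypotheses h_{a,b}(x) = (a <= x <= b) realize every
    labeling of `points`? For this class it suffices to try
    endpoints drawn from the points plus two outside sentinels."""
    cands = sorted(points)
    endpoints = [cands[0] - 1] + cands + [cands[-1] + 1]
    for labels in product([False, True], repeat=len(points)):
        realizable = any(
            all((a <= x <= b) == want for x, want in zip(points, labels))
            for a in endpoints for b in endpoints
        )
        if not realizable:
            return False
    return True

# Intervals shatter two points but not three: VC dimension = 2.
```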

`...MSCS/CS7641-Machine-Learning/Unsupervised_Learning/1.randomized_optimization.md` (65 additions, 0 deletions)

# Randomized optimization

Input space $X$.

Objective function (fitness function) $f: X \to \mathbb{R}$.

Goal: find
$$
x^* \in X, \quad f(x^*) = \max_{x \in X} f(x)
$$
Find the best.

* Route finding
* Root finding
* Neural networks: $x$ is the weights; minimize the error.

## Optimization approaches

Generate & test: small input spaces, complex functions.

Calculus: the function has a derivative.

Newton's method: the function has a derivative; iteratively improve -> a single optimum.

**What if these assumptions don't hold?**

Big input space, complex function, no derivative (or one that is hard to find), possibly many local optima.
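
As a concrete instance of the calculus assumption, a sketch of Newton's method for optimization: iterate $x \leftarrow x - f'(x)/f''(x)$ toward a stationary point. The quadratic example is hypothetical:

```python
def newton_optimize(df, d2f, x0, steps=20):
    """Newton's method for a stationary point f'(x) = 0:
    iterate x <- x - f'(x) / f''(x)."""
    x = x0
    for _ in range(steps):
        x = x - df(x) / d2f(x)
    return x

# f(x) = (x - 3)^2 has a single optimum at x = 3; for a quadratic,
# one Newton step lands exactly on it.
x_star = newton_optimize(lambda x: 2 * (x - 3), lambda x: 2.0, x0=0.0)
```

With many local optima, this converges only to whichever stationary point is nearby, which is exactly why randomized methods are needed next.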
## Hill climbing

![UL1_random_hill_climbing.png](../images/UL1_random_hill_climbing.png)
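
The figure shows random-restart hill climbing: plain hill climbing escapes no local optimum, so restart it from random points and keep the best. A minimal sketch on a hypothetical discrete fitness with one local and one global optimum:

```python
import random

FITNESS = [0, 1, 2, 3, 2, 1, 0, 5, 9, 5]   # local optimum at 3, global at 8
f = FITNESS.__getitem__

def hill_climb(f, x, lo, hi):
    """Move to the best neighbor until no neighbor improves f."""
    while True:
        nbrs = [n for n in (x - 1, x + 1) if lo <= n <= hi]
        best = max(nbrs, key=f)
        if f(best) <= f(x):
            return x
        x = best

def random_restart_hill_climb(f, lo, hi, restarts=20, seed=0):
    """Randomized hill climbing: climb from random starts, keep the best."""
    rng = random.Random(seed)
    climbs = [hill_climb(f, rng.randint(lo, hi), lo, hi) for _ in range(restarts)]
    return max(climbs, key=f)
```

Starting at 0 the climb gets stuck at the local optimum 3; restarts give the global optimum 8 a chance.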
## Simulated Annealing

![UL1_simulated_annealing.png](../images/UL1_simulated_annealing.png)

The acceptance rule follows Metropolis-Hastings.
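
Simulated annealing always accepts uphill moves and accepts a worse neighbor with Metropolis-Hastings probability $e^{(f(x') - f(x))/T}$, with temperature $T$ decaying over time: high $T$ behaves like a random walk (exploration), low $T$ like hill climbing (exploitation). A sketch; the fitness and schedule parameters are illustrative:

```python
import math
import random

def simulated_annealing(f, x, lo, hi, t=10.0, decay=0.95, steps=500, seed=0):
    """Accept uphill moves always; downhill moves with probability
    exp((f(x') - f(x)) / T), geometrically cooling T each step."""
    rng = random.Random(seed)
    best = x
    for _ in range(steps):
        nxt = min(max(x + rng.choice((-1, 1)), lo), hi)   # random neighbor
        delta = f(nxt) - f(x)
        if delta >= 0 or rng.random() < math.exp(delta / t):
            x = nxt
        if f(x) > f(best):
            best = x
        t = max(t * decay, 1e-6)
    return best

fitness = [0, 1, 2, 3, 2, 1, 0, 5, 9, 5]
best = simulated_annealing(fitness.__getitem__, x=0, lo=0, hi=9)
```

As $T \to 0$ no downhill move is ever accepted and the process reduces to hill climbing.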
## Genetic algorithms
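
The notes leave this section as a heading; a minimal GA sketch under assumed choices (truncation selection, one-point crossover, bit-flip mutation) on the toy one-max fitness. All names and parameters are illustrative:

```python
import random

def one_max(bits):
    """Toy fitness: number of 1s in the bitstring."""
    return sum(bits)

def genetic_algorithm(fitness, n_bits=16, pop_size=20, gens=60,
                      p_mut=0.05, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
            children.append(child)
        pop = parents + children                  # elitism: parents survive
    return max(pop, key=fitness)

best = genetic_algorithm(one_max)
```

Crossover is the distinctive ingredient: it combines substructures from two parents, something hill climbing and annealing cannot do.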
## Summary

Two problems:

1. These algorithms don't remember information. How can we capture history?

2. Simulated annealing uses the Boltzmann distribution. How can we capture the probability distribution?

e.g.

* Tabu search

## MIMIC

* Only points, no structure
* Unclear probability distribution.

Reference: **MIMIC: Finding Optima by Estimating Probability Densities.**

* Directly model the probability distribution
* Successively refine the model.

## A probability model

### MIMIC: a probability model
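
A sketch of the model from the referenced paper: a uniform distribution over the inputs whose fitness is at least a threshold $\theta$,

$$
p^{\theta}(x) =
\begin{cases}
\dfrac{1}{Z_{\theta}} & \text{if } f(x) \ge \theta \\
0 & \text{otherwise}
\end{cases}
$$

where $Z_{\theta}$ is a normalizing constant. As $\theta$ rises toward $\max_x f(x)$, $p^{\theta}$ concentrates on the optima; MIMIC repeatedly estimates and samples these distributions for increasing $\theta$.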
Empty file.
Binary file added: `docs/techs/OMSCS/CS7641-Machine-Learning/images/UL1_random_hill_climbing.png` (+496 KB)
Binary file added: `docs/techs/OMSCS/CS7641-Machine-Learning/images/UL1_simulated_annealing.png` (+381 KB)