Dropout layer #194

Merged: 34 commits, Feb 21, 2025

Commits
e40883b
First stab at dropout; conflict with base type TODO
milancurcic Jan 22, 2025
37aa7a5
Partial dropout integration
milancurcic Jan 23, 2025
820b081
Test uninitialized dropout layer
milancurcic Jan 23, 2025
75ef184
Test dropout state that follows an input layer
milancurcic Jan 23, 2025
796ae74
Enable forward pass for dropout; backward pass TODO
milancurcic Jan 23, 2025
b04d447
Version bump and add dropout to the features table
milancurcic Jan 23, 2025
544b23a
Add dropout to CMake
milancurcic Jan 23, 2025
56dbd52
Enable preprocessing in fpm.toml (needed with recent versions of fpm)
milancurcic Jan 24, 2025
3b5cc27
Small change in scale implementation
milancurcic Jan 24, 2025
703f802
Integration of backward pass for dropout
milancurcic Jan 24, 2025
1dfe6b3
Reduce tolerance in conv2d convergence tests
milancurcic Feb 6, 2025
59cc7e1
Fix bug in dropout scaling
milancurcic Feb 6, 2025
c984b15
disable dropout in inference mode (net % predict); TODO enable in net…
milancurcic Feb 6, 2025
e9772a0
Set dropout's training mode to true in net % train(); add tests
milancurcic Feb 6, 2025
5ae7e9d
WIP dropout tests
milancurcic Feb 16, 2025
d323175
Merge main
milancurcic Feb 17, 2025
0934f7f
Dropout layers always in training mode; except when predict is called, when …
milancurcic Feb 17, 2025
0f64044
Update the layers table
milancurcic Feb 17, 2025
53b9663
Resolve merge conflicts
milancurcic Feb 18, 2025
aa19f69
Ensure the actual dropout rate == requested dropout rate in most cases
milancurcic Feb 18, 2025
a99d800
Accumulate the gradient in dropout % backward and flush in network % …
milancurcic Feb 20, 2025
ea0012a
Guard against bad dropout rate
milancurcic Feb 20, 2025
0350c7d
Connect the backward pass; expand tests
milancurcic Feb 20, 2025
183e82f
Expand tests
milancurcic Feb 20, 2025
6c07cd7
Use the reference scaling in dropout; don't accumulate gradients beca…
milancurcic Feb 20, 2025
a904c6e
Add dropout to MNIST example; small model changes
milancurcic Feb 20, 2025
35671dd
Add reference
milancurcic Feb 20, 2025
31ebd69
Update print_info dropout
Feb 21, 2025
1cd9e2c
Update print_info
Feb 21, 2025
8961f75
Compute scale once in dropout constructor
milancurcic Feb 21, 2025
ee7fdc9
dropout % backward() doesn't need input from the previous layer
milancurcic Feb 21, 2025
be06f51
Merge pull request #3 from jvdp1/dropout
milancurcic Feb 21, 2025
a542e7c
Merge branch 'dropout' of github.com:milancurcic/neural-fortran into …
milancurcic Feb 21, 2025
a272634
Timing info of dropout
milancurcic Feb 21, 2025
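
Read in sequence, the commit messages sketch the design: the forward pass draws a keep-mask and rescales surviving activations by 1 / (1 - rate) (the "reference" inverted-dropout scaling, computed once in the constructor per 8961f75), the backward pass reuses the saved mask and needs no input from the previous layer (ee7fdc9), the constructor guards against bad rates (ea0012a), and the layer is a pass-through in inference mode (c984b15). A minimal Fortran sketch of that behavior follows; the type and procedure names here are illustrative assumptions, not the PR's exact interfaces.

! A minimal sketch of inverted ("reference") dropout, inferred from the
! commit messages above. Names and interfaces are illustrative, not the
! exact neural-fortran implementation.
module dropout_sketch
  implicit none
  private
  public :: dropout_layer

  type :: dropout_layer
    real :: rate                  ! requested dropout rate, 0 <= rate < 1
    real :: scale                 ! 1 / (1 - rate), computed once at construction
    logical :: training = .true.  ! inference mode makes the layer a pass-through
    real, allocatable :: mask(:)  ! keep-mask (0s and 1s) saved by the forward pass
  contains
    procedure :: forward
    procedure :: backward
  end type dropout_layer

  interface dropout_layer
    module procedure dropout_layer_cons
  end interface dropout_layer

contains

  function dropout_layer_cons(rate) result(res)
    real, intent(in) :: rate
    type(dropout_layer) :: res
    ! Guard against a bad dropout rate, as in commit ea0012a.
    if (rate < 0 .or. rate >= 1) error stop 'dropout rate must be in [0, 1)'
    res % rate = rate
    res % scale = 1 / (1 - rate)
  end function dropout_layer_cons

  subroutine forward(self, input, output)
    class(dropout_layer), intent(in out) :: self
    real, intent(in) :: input(:)
    real, intent(out) :: output(size(input))
    real :: r(size(input))
    if (.not. self % training) then
      output = input  ! inference: dropout is disabled
      return
    end if
    if (.not. allocated(self % mask)) allocate(self % mask(size(input)))
    call random_number(r)
    ! Keep each unit with probability 1 - rate; rescale survivors so the
    ! expected activation matches the inference-time pass-through.
    self % mask = merge(1., 0., r > self % rate)
    output = input * self % mask * self % scale
  end subroutine forward

  subroutine backward(self, gradient, gradient_prev)
    ! Only the mask saved by the forward pass is needed; no input from
    ! the previous layer (cf. commit ee7fdc9).
    class(dropout_layer), intent(in) :: self
    real, intent(in) :: gradient(:)
    real, intent(out) :: gradient_prev(size(gradient))
    gradient_prev = gradient * self % mask * self % scale
  end subroutine backward

end module dropout_sketch

One difference worth noting: this sketch samples each unit independently, so the realized dropout rate matches the requested rate only in expectation; commit aa19f69 tightens that so the actual rate equals the requested rate in most cases.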
Version bump and add dropout to the features table
milancurcic committed Jan 23, 2025
commit b04d44725a329158b24ebe4363302583308dc77b
1 change: 1 addition & 0 deletions README.md
@@ -31,6 +31,7 @@ Read the paper [here](https://arxiv.org/abs/1902.06714).
 |------------|------------------|------------------------|----------------------|--------------|---------------|
 | Input | `input` | n/a | 1, 3 | n/a | n/a |
 | Dense (fully-connected) | `dense` | `input1d`, `flatten` | 1 | ✅ | ✅ |
+| Dropout | `dropout` | Any | 1 | ✅ | ✅ |
 | Convolutional (2-d) | `conv2d` | `input3d`, `conv2d`, `maxpool2d`, `reshape` | 3 | ✅ | ✅(*) |
 | Max-pooling (2-d) | `maxpool2d` | `input3d`, `conv2d`, `maxpool2d`, `reshape` | 3 | ✅ | ✅ |
 | Flatten | `flatten` | `input3d`, `conv2d`, `maxpool2d`, `reshape` | 1 | ✅ | ✅ |
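
The new row says the layer can follow any other layer with a rank-1 output. As a usage illustration only, loosely patterned on the MNIST example touched in a904c6e (the layer sizes and rate here are hypothetical), a model might wire it in like this:

program mnist_with_dropout
  ! Hypothetical wiring of the new layer; sizes and rate are made up.
  use nf, only: dense, dropout, input, network
  implicit none
  type(network) :: net
  net = network([ &
    input(784), &    ! flattened 28 x 28 image
    dense(64), &     ! hidden layer
    dropout(0.2), &  ! drops 20% of activations while training
    dense(10) &      ! one output per class
  ])
  ! net % train(...) enables dropout and net % predict(...) disables it,
  ! per commits e9772a0 and c984b15.
end program mnist_with_dropout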
4 changes: 2 additions & 2 deletions fpm.toml
@@ -1,6 +1,6 @@
 name = "neural-fortran"
-version = "0.18.0"
+version = "0.19.0"
 license = "MIT"
 author = "Milan Curcic"
 maintainer = "milancurcic@hey.com"
-copyright = "Copyright 2018-2024, neural-fortran contributors"
+copyright = "Copyright 2018-2025, neural-fortran contributors"