Merge pull request #1985 from skleinbo/patch-1
deprecations.jl: depwarn -> Base.depwarn
CarloLucibello authored Jun 2, 2022
2 parents b6b3569 + 65adbf4 commit a162245
Showing 2 changed files with 4 additions and 4 deletions.
6 changes: 3 additions & 3 deletions docs/src/training/optimisers.md
@@ -21,17 +21,17 @@ grads = gradient(() -> loss(x, y), θ)
We want to update each parameter, using the gradient, in order to improve (reduce) the loss. Here's one way to do that:

```julia
-using Flux.Optimise: update!
-
η = 0.1 # Learning Rate
for p in (W, b)
-  update!(p, η * grads[p])
+  p .-= η * grads[p]
end
```
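For readers following along outside the docs, here is a self-contained sketch of the manual step shown above (the toy model, `loss`, and data are illustrative assumptions, not part of this diff):

```julia
using Flux  # re-exports Zygote's `gradient`

# A toy linear model and squared-error loss (illustrative only).
W = rand(2, 5)
b = rand(2)
predict(x) = W * x .+ b
loss(x, y) = sum((predict(x) .- y) .^ 2)

x, y = rand(5), rand(2)
θ = Flux.params(W, b)                  # implicit-parameter style used by these docs
grads = gradient(() -> loss(x, y), θ)  # as in the hunk header above

η = 0.1  # learning rate
for p in (W, b)
  p .-= η * grads[p]                   # plain in-place gradient-descent step
end
```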

Running this will alter the parameters `W` and `b` and our loss should go down. Flux provides a more general way to do optimiser updates like this.

```julia
+using Flux: update!
+
opt = Descent(0.1) # Gradient descent with learning rate 0.1

for p in (W, b)
  update!(opt, p, grads[p])
end
```
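The same loop works with other optimisers from Flux; as a hedged sketch (reusing `W`, `b`, and `grads` from above, and assuming the `ADAM` optimiser available in Flux 0.13):

```julia
using Flux: update!, ADAM

opt = ADAM()  # Adam with Flux's default learning rate

for p in (W, b)
  update!(opt, p, grads[p])  # the optimiser computes the step from grads[p] and applies it in place
end
```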
2 changes: 1 addition & 1 deletion src/deprecations.jl
@@ -35,7 +35,7 @@ end
Zeros(args...) = Zeros() # was used both Dense(10, 2, initb = Zeros) and Dense(rand(2,10), Zeros())

 function Optimise.update!(x::AbstractArray, x̄)
-  depwarn("`Flux.Optimise.update!(x, x̄)` was not used internally and has been removed. Please write `x .-= x̄` instead.", :update!)
+  Base.depwarn("`Flux.Optimise.update!(x, x̄)` was not used internally and has been removed. Please write `x .-= x̄` instead.", :update!)
   x .-= x̄
 end
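For context, a minimal sketch of what a caller sees after this change (the array names `x` and `x̄` are illustrative, and the behaviour assumes Flux with this commit applied):

```julia
using Flux

x = rand(3)
x̄ = rand(3)

# Deprecated path: now warns via Base.depwarn (visibility is controlled by
# Julia's --depwarn flag), then still performs the in-place update.
Flux.Optimise.update!(x, x̄)

# Preferred replacement, as the warning message suggests:
x .-= x̄
```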
