Maybe it's worth comparing this idea to Sherlogs.jl, which also creates "FakeFloat"s (called Sherlog16, Sherlog32, Sherlog64) to log arithmetic results. For Sherlogs the information about whether an operation was an add/sub/mul etc. is thrown away, but the result is logged (which includes a conversion to Float16, as 64/32-bit bitpattern histograms are just too large; see the sketch after the REPL session below). I get fairly similar results:
```julia
julia> using GFlops, Sherlogs, ShallowWaters

julia> run_model(Float32, Ndays=5);
Starting ShallowWaters on Wed, 14 Apr 2021 11:27:21 without output.
60% Integration done in 0.05s.

julia> run_model(Sherlog32, Ndays=5);
Starting ShallowWaters on Wed, 14 Apr 2021 11:28:33 without output.
60% Integration done in 20.0s.

julia> lb = get_logbook();

julia> sum(lb)
219769755
```
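For reference, here is a minimal sketch of the result-logging approach described above. The names `LoggedFloat`, `log_result`, and `BITPATTERN_LOG` are made up for illustration and are not Sherlogs' actual internals: every arithmetic result is converted to Float16 and tallied in a 2^16-entry bitpattern histogram, while the kind of operation is discarded.

```julia
# Histogram with one bin per Float16 bitpattern (2^16 of them).
const BITPATTERN_LOG = zeros(Int, 2^16)

struct LoggedFloat <: AbstractFloat
    x::Float32
end

# Convert the result to Float16 and tally its bitpattern (1-based index),
# then return the unmodified Float32 result.
function log_result(x::Float32)
    BITPATTERN_LOG[Int(reinterpret(UInt16, Float16(x))) + 1] += 1
    return x
end

# Log the result of every binary arithmetic op; which op it was is thrown away.
for op in (:+, :-, :*, :/)
    @eval Base.$op(a::LoggedFloat, b::LoggedFloat) = LoggedFloat(log_result($op(a.x, b.x)))
end
```

The per-result Float16 conversion in `log_result` is exactly the kind of overhead mentioned below, which a pure counter would avoid.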
So there are 219769755 flops done in Float32. Compared to the count reported by GFlops, there are somehow 40k operations missing here (not counting Float64 ops, as they are part of the model setup and not counted by Sherlogs). But maybe more importantly, note how the runtimes are fairly comparable, even though Sherlogs includes a fairly costly Float32 -> Float16 conversion on every result. So I'm just saying that I support the idea of having some FakeFloats to count the flops. Sure, it would only work with type-generic functions, but it might be considerably faster (the compile time also seems to be largely reduced).
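To make that concrete, here is a minimal sketch of a flop-counting FakeFloat, assuming a single global counter. `CountedFloat` and `FLOP_COUNT` are hypothetical names, not necessarily what the Discourse prototype linked below does:

```julia
# Global operation counter, incremented on every arithmetic op.
const FLOP_COUNT = Ref(0)

struct CountedFloat <: AbstractFloat
    x::Float32
end

# Minimal conversion/promotion so type-generic code can mix in literals.
Base.Float32(a::CountedFloat) = a.x
CountedFloat(x::Real) = CountedFloat(Float32(x))
Base.promote_rule(::Type{CountedFloat}, ::Type{T}) where {T<:Real} = CountedFloat

# Count every binary arithmetic op, then forward the computation to Float32.
for op in (:+, :-, :*, :/)
    @eval function Base.$op(a::CountedFloat, b::CountedFloat)
        FLOP_COUNT[] += 1
        CountedFloat($op(a.x, b.x))
    end
end
```

With a type-generic model, something like `run_model(CountedFloat, Ndays=5)` would then bump `FLOP_COUNT` once per arithmetic operation, with no Float16 conversion or logging on the hot path; a real implementation would also need to forward unary ops like `sqrt` and the rest of the `AbstractFloat` interface.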
Just wondering whether it would be worth implementing the idea prototyped in:
https://discourse.julialang.org/t/vector-matrix-vector-multiplication/57087/15?u=ffevotte
In any case, this issue is here to avoid forgetting about this topic...