Making the package more maintainable #900
Comments
Agreed with all of these.
I can help out a bit. For 1 (GPU support for NNODE): should users provide the model, initial conditions, and parameters as GPU arrays in order to avoid an error? Also, I initially thought GPU already works with NNODE and wanted to confirm it; I was working on #866, which implemented a custom broadcast. Is that still needed? For 2: will that fix using autodiff with NNODE?
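For context, a minimal sketch of the "everything already on GPU" setup being discussed, using the standard Lux device API (the `NNODE`-specific behavior is what the thread is debating, so the comment at the end is a description of the proposal, not of current code; `gdev` is a local name):

```julia
using Lux, Random, ComponentArrays
using LuxCUDA  # loads the CUDA backend so gpu_device() returns a CUDA device

rng = Random.default_rng()
chain = Chain(Dense(1 => 16, tanh), Dense(16 => 1))
ps, st = Lux.setup(rng, chain)

gdev = gpu_device()
ps_gpu = ComponentArray(ps) |> gdev  # move parameters to the GPU up front

# Proposal in this thread: NNODE should assume ps_gpu (and the u0/p inside
# the ODEProblem) are already on the device, and throw an error on a mixed
# CPU/GPU state, rather than silently `adapt`-ing everything on each call.
```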
BNNODE, ahmc_bayesian_pinn_ode, ahmc_bayesian_pinn_pde, BPINNsolution, BayesianPINN: these are the Bayesian ones, right?
yes
I can help with this. Shifting them would be relatively fast and easy. Meanwhile, I can also try to analyse and fix the other requirements.
I did an initial round of cleanup in #882, but there's a lot of unwanted code that should be purged, and most of the handling should be forwarded to Lux.

- We currently use `adapt` to copy anything that is not on GPU over to the GPU on every call to the function. IMO this should be completely removed, and if the user calls a model which is only partially on GPU, it should be an error (similar to Lux).
- `Phi`/`ODEPhi` need to be rewritten as a Lux layer; that un-blocks all current shortcomings with nested AD.
- Use `@closure` to avoid boxing.
- Move the Bayesian solvers into a separate subpackage (`BayesianNeuralPDE`)? I am pretty sure the number of users for those is quite small, but those packages add a significant load time.
- Take an `rng` instead of relying on the global RNG.

P.S. Just because I am opening this issue doesn't mean I am taking it upon me to do this 😓
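To illustrate the boxing point above: `@closure` from FastClosures.jl captures the referenced variables in a `let` block, so Julia does not have to allocate a `Core.Box` for captures that are also assigned in the enclosing scope. A generic sketch (the function and variable names here are illustrative, not code from this package):

```julia
using FastClosures

function make_loss(phi, p)
    # A plain closure over `phi` and `p` can be boxed (type-unstable,
    # allocating) if either variable is ever reassigned in this scope.
    # @closure snapshots them by value via a generated `let` block.
    loss = @closure t -> sum(abs2, phi(t, p))
    return loss
end
```

The same pattern applies wherever the solvers build loss or residual closures over training state inside a function body.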