
SDE PINN solver #897

Open · wants to merge 11 commits into master
Conversation

AstitvaAggarwal
Contributor

Checklist

  • Appropriate tests were added
  • Any code changes were done in a way that does not break public API
  • All documentation related to code changes was updated
  • The new code follows the
    contributor guidelines, in particular the SciML Style Guide and
    COLPRAC.
  • Any new documentation only uses public API


@AstitvaAggarwal
Contributor Author

AstitvaAggarwal commented Dec 3, 2024

As promised, this solver will be completed. I got busy with another project, so I had to put this on hold. Initial reviews on this would be great. The polynomial chaos expansion for SPDEs will be done later.

@ChrisRackauckas
Member

That looks to be on the right track.

@AstitvaAggarwal
Contributor Author

AstitvaAggarwal commented Dec 9, 2024

Would sub_batching, as added here, be a good idea? For sub_batch=2 I get
[plot: solution for sub_batch=2]
and for sub_batch=1 I get
[plot: solution for sub_batch=1]
The sampled z_i values for these plots are different from the ones used for training, and I use 3 terms of the KKL expansion. The problem being solved is the one mentioned in #531. Further, using sol.mean_fit and sol.timepoints obtained from the ensemble mean and timepoints, for sub_batch=1 I get
[plot: mean fit for sub_batch=1]
and for sub_batch=2 I get
[plot: mean fit for sub_batch=2]
sub_batch=2 seems to perform better (200 samples taken), but I think I may be comparing the two cases incorrectly (I'm going by the mean_fit plots).
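For context, a minimal sketch of the kind of truncated expansion the comment above refers to, assuming the standard Karhunen-Loève expansion of Brownian motion on [0, 1] with 3 terms (the PR's actual expansion, basis, and identifiers may differ; this is illustrative only):

```python
import numpy as np

def kl_brownian(t, z):
    """Truncated Karhunen-Loeve expansion of Brownian motion on [0, 1].

    W(t) ~= sum_i z_i * sqrt(2) * sin((i - 1/2) * pi * t) / ((i - 1/2) * pi)
    where the z_i are i.i.d. standard normal samples. Truncating the sum
    at a few terms is what makes the PINN loss tractable.
    """
    t = np.asarray(t, dtype=float)
    terms = [
        z_i * np.sqrt(2.0) * np.sin((i - 0.5) * np.pi * t) / ((i - 0.5) * np.pi)
        for i, z_i in enumerate(z, start=1)
    ]
    return sum(terms)

rng = np.random.default_rng(0)
z = rng.standard_normal(3)        # 3 expansion terms, as in the comment above
t = np.linspace(0.0, 1.0, 101)
path = kl_brownian(t, z)          # one sampled approximate Brownian path
```

Each fresh draw of `z` gives a new approximate sample path, which is why plots made with z_i values different from the training samples can legitimately differ.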

@AstitvaAggarwal
Contributor Author

AstitvaAggarwal commented Dec 10, 2024

So I tried solving again for sub_batch = 1, 2, 5; the mean_fit error increases as the sub-batch size increases. The MSE training loss converges smoothly for a lower number of sub-batches. This could be due to a lack of the z_i distribution's information being reflected in the dataset (the ideal case involves a large number of z_i samples). I suspect a NN with probabilistic weights might be better suited to this problem (input t, output u), where we have a random loss function (the number of z_i must be chosen beforehand in the KKL loss approximation). This would be similar to my BPINN solvers, with the exception that we have a stochastic objective. But a doubt arises: would the optimization become too difficult then, or is there any workaround for this?
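The sub-batching being discussed can be sketched as a Monte Carlo estimate of a stochastic PINN loss. This is a hypothetical illustration, not the PR's implementation: `residual`, `mc_sde_residual_loss`, and `sub_batch` here are assumed names, and the toy residual stands in for the actual SDE physics:

```python
import numpy as np

def mc_sde_residual_loss(residual, params, t_grid, sub_batch, rng):
    """Monte Carlo estimate of a PINN loss for an SDE.

    `residual(params, t, z)` is a hypothetical per-sample residual in which
    the noise enters through KL-expansion coefficients z; `sub_batch` is the
    number of z draws averaged per loss evaluation, mirroring the sub_batch
    option discussed above.
    """
    loss = 0.0
    for _ in range(sub_batch):
        z = rng.standard_normal(3)          # one realization of the z_i
        r = residual(params, t_grid, z)     # residual at each time point
        loss += np.mean(r ** 2)
    return loss / sub_batch                 # average over the sub-batch

# Toy residual just to exercise the function (not the PR's physics):
toy = lambda p, t, z: p * t + z.sum()
rng = np.random.default_rng(1)
val = mc_sde_residual_loss(toy, 0.5, np.linspace(0, 1, 11), sub_batch=5, rng=rng)
```

A larger `sub_batch` lowers the variance of the loss estimate at proportionally higher cost per optimizer step, which is consistent with the suggestion below that on the order of 100 samples may be needed for smooth convergence.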

@ChrisRackauckas
Member

I would think you'd need sample sizes of at least like 100 to be able to smoothly converge?

@AstitvaAggarwal
Contributor Author

AstitvaAggarwal commented Dec 22, 2024

Ohh yeah. I'll set up some tests for this. I had tried something like n=10 locally, but it was too slow.

@AstitvaAggarwal
Contributor Author

AstitvaAggarwal commented Dec 28, 2024

I tried with sub_batch = 2, 5, and 10, but these were no better than the solution for sub_batch = 1. I've added the sub_batch = 250 tests; I don't understand why they fail. (Locally, a sub_batch = 100 solve call takes ~3 hrs of runtime and fails the tests.) In case I'm testing incorrectly or some code can be sped up, do let me know. Thanks.
