
Retain Gradients for Input Samples During Explainability Methods #1346

Open
PietroMc opened this issue Sep 13, 2024 · 0 comments
Hello Captum team,

I have a question regarding the retention of input sample gradients when using the explainability methods provided by Captum. Specifically, I would like to know if it's possible to retain the gradients of input tensors after running Captum's explainability methods.

Here are the details of my use case:

I am using PyTorch's retain_grad() method or setting requires_grad=True on input tensors to retain their gradients.

My goal is to understand whether the attribution map generated by Captum preserves the gradients of the input tensors, or whether these gradients are cleared or otherwise discarded during the computation.

Could you provide some insight into how Captum handles the gradients of input samples, and whether it is possible to ensure they are retained throughout the explainability process? I would like to use these gradients during backpropagation to update the weights while training the network.
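
For concreteness, here is a minimal sketch of the workflow I have in mind (the toy model, data, and optimizer below are placeholders, and `Saliency` stands in for any of the gradient-based attribution methods):

```python
import torch
import torch.nn.functional as F
from captum.attr import Saliency

# Placeholder model, optimizer, and data, just to illustrate the question
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

inputs = torch.randn(4, 10, requires_grad=True)  # input gradients should be retained
targets = torch.tensor([0, 1, 0, 1])

# 1) Explainability pass with Captum
saliency = Saliency(model)
attributions = saliency.attribute(inputs, target=targets)

# 2) Training pass: does the attribution call above leave the autograd state
#    of `inputs` untouched, so that the usual backward/step still works?
optimizer.zero_grad()
loss = F.cross_entropy(model(inputs), targets)
loss.backward()
optimizer.step()

print(inputs.grad is not None)  # are the input gradients still available here?
```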

Thank you for your support!
Best regards
