gradient-based saliency maps to support different activations #6057

Open
wyli opened this issue Feb 23, 2023 · 0 comments · May be fixed by #7070
wyli (Contributor) commented Feb 23, 2023

# Excerpt from MONAI's gradient-based saliency module (monai.visualize.gradient_based);
# `replace_modules_temp` comes from monai.networks.utils.
class GuidedBackpropGrad(VanillaGrad):

    def __call__(self, x: torch.Tensor, index: torch.Tensor | int | None = None, **kwargs) -> torch.Tensor:
        # temporarily swap every "relu" submodule for a grad-hooked replacement,
        # then run the vanilla-gradient saliency computation
        with replace_modules_temp(self.model, "relu", _GradReLU(), strict_match=False):
            return super().__call__(x, index, **kwargs)

I think the method looks for "relu" layers and replaces them with a grad-hooked customised version. Presumably this does not work with other activation functions. This is a feature request to support different activation functions, for instance SiLU.

Originally posted by @trinhdhk in #6012 (reply in thread)
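One possible direction, as a rough sketch rather than MONAI's API: wrap an arbitrary activation type (e.g. torch.nn.SiLU) in a module whose backward pass clamps negative incoming gradients to zero, which is the guided-backprop rule, and swap instances of that type into the model. The names _GuidedActivation and replace_activations below are hypothetical illustrations, not existing MONAI code; MONAI's replace_modules_temp context manager could play the role of the (here permanent) replacement.

from typing import Type

import torch
import torch.nn as nn


class _GuidedActivation(nn.Module):
    """Hypothetical wrapper: the forward pass is unchanged, but gradients flowing
    back through the activation are clamped to be non-negative (guided backprop)."""

    def __init__(self, act: nn.Module) -> None:
        super().__init__()
        self.act = act

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.act(x)
        if out.requires_grad:
            # zero out negative incoming gradients before they propagate further
            out.register_hook(lambda grad: torch.clamp(grad, min=0.0))
        return out


def replace_activations(model: nn.Module, act_type: Type[nn.Module]) -> None:
    """Recursively replace every submodule of type `act_type` with the wrapper.
    (Unlike MONAI's replace_modules_temp, this mutation is not reverted.)"""
    for name, child in model.named_children():
        if isinstance(child, act_type):
            setattr(model, name, _GuidedActivation(child))
        else:
            replace_activations(child, act_type)


# usage sketch with a SiLU-based toy model
model = nn.Sequential(nn.Linear(8, 8), nn.SiLU(), nn.Linear(8, 1))
replace_activations(model, nn.SiLU)
x = torch.randn(1, 8, requires_grad=True)
model(x).sum().backward()
saliency = x.grad  # guided gradients with respect to the input

A fuller version would mirror replace_modules_temp so the swap is reverted after the saliency call, and could additionally mask gradients by the sign of the activation input, as the original guided-backprop formulation does for ReLU.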

JupiLogy linked a pull request Sep 30, 2023 that will close this issue