Multi-instance learning heat map attention maps and self-attention matrices #1512
relyativist asked this question in Q&A
Hi, thanks for the great tutorial and discussion guides!
Many thanks in advance!
Answered by relyativist on Sep 8, 2023
Replies: 1 comment, 1 reply
Hi, the 2nd point might be related to Project-MONAI/MONAI#5152.
Thanks for the reply and the link to the previous discussion. However, it's not clear how the attention weights could be obtained just by setting no_head=False, since that still runs the transformer encoding and the head, and False is already the default. More likely, we can output the weights by printing a at the attention step:
https://github.com/Project-MONAI/MONAI/blob/ca90628ffbadf37031b06d2506c7938c0a29f25b/monai/networks/nets/milmodel.py#L222
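An alternative to adding a print statement inside milmodel.py is to capture the attention scores with a forward hook, which needs no changes to the library source. The sketch below is a minimal example, assuming mil_mode="att" so that model.attention exists (the small scoring MLP whose output a is softmaxed over the instance dimension in calc_head); the input shapes and class count are illustrative, not an official MONAI recipe.

```python
import torch
from monai.networks.nets import MILModel

# Minimal sketch (assumptions flagged in the lead-in): capture the MIL
# attention scores with a forward hook instead of editing milmodel.py.
# Assumes mil_mode="att", where model.attention is the scoring MLP whose
# raw output "a" is softmaxed over the instance dimension in calc_head.
model = MILModel(num_classes=2, mil_mode="att", pretrained=False)
model.eval()

captured = {}

def save_attention(module, args, output):
    # output shape: (batch, num_instances, 1) -- raw scores before softmax
    captured["a"] = output.detach()

hook = model.attention.register_forward_hook(save_attention)

# Dummy bag: 1 bag of 4 instances, 3x224x224 each (shapes are illustrative)
x = torch.randn(1, 4, 3, 224, 224)
with torch.no_grad():
    logits = model(x)
hook.remove()

# Apply the same softmax as calc_head to get per-instance attention weights
weights = torch.softmax(captured["a"], dim=1)
print(weights.squeeze(-1))  # one weight per instance in the bag
```

Because the hook fires on model.attention itself, the captured tensor should be the same a computed at the linked line, so softmaxing it over dim=1 reproduces the per-instance weights the model uses to pool the bag.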