Multi-instance learning heat map attention maps and self-attention matrices #1512

Answered by relyativist
relyativist asked this question in Q&A

Thanks for the reply and the link to the previous discussion. However, it's not clear how the attention weights could be obtained just by setting no_head=False, since with no_head=False the model simply performs the transformer encoding as usual, and False is already the default. More likely, we can output the weights by printing `a` at the attention step:
https://github.com/Project-MONAI/MONAI/blob/ca90628ffbadf37031b06d2506c7938c0a29f25b/monai/networks/nets/milmodel.py#L222
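As a minimal sketch (not an official MONAI API), the weights can also be captured without editing milmodel.py by registering a forward hook on the attention sub-module. This assumes mil_mode="att", so the model owns a sub-module named `attention` as in the linked source; the input shapes below are illustrative:

```python
import torch
from monai.networks.nets import MILModel

# Assumes mil_mode="att", so the model owns an `attention` sub-module
# (see the linked milmodel.py). pretrained=False avoids a weight download.
model = MILModel(num_classes=2, mil_mode="att", pretrained=False)
model.eval()

captured = {}

def grab_attention(module, inputs, output):
    # `output` is the raw attention logit tensor `a`; calc_head applies
    # softmax over the instance dimension to get per-instance weights.
    captured["a"] = torch.softmax(output, dim=1).detach()

handle = model.attention.register_forward_hook(grab_attention)

# One bag of 4 instances (tiles), each 3 x 64 x 64 -- shapes are illustrative.
x = torch.rand(1, 4, 3, 64, 64)
with torch.no_grad():
    logits = model(x)  # default no_head=False, so the attention head runs
handle.remove()

print(captured["a"].shape)  # expected (1, 4, 1): one weight per tile
```

Reshaping these per-tile weights back onto the tile grid should then be enough to render an attention heat map for the bag.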
