
Why use a single head and no positional embedding in the Attentive module? #7

Open
dzpdzpdzp opened this issue Nov 5, 2018 · 2 comments

Comments

@dzpdzpdzp

Could the author explain why the Attentive module uses only a single attention head and no positional encoding? Also, the FFN that follows it has its hidden-layer size set to the same dimension as the word embeddings.

One more question about the cross-attention:
U_i = AttentiveModule(U, R, R), R_l = AttentiveModule(R, U, U)
What is the idea behind the design of these formulas?

I accidentally deleted my previous question just now, which is a bit embarrassing.
Could the author please clarify? Thanks.
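
For readers following this thread, here is a minimal NumPy sketch of how such a single-head AttentiveModule and the two cross-attention calls could look. It only reflects the description in the question (scaled dot-product attention without positional encoding, plus an FFN whose hidden size equals the embedding dimension d); the function name `attentive_module` and all shapes are illustrative assumptions, not the repository's actual code.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def layer_norm(x, eps=1e-6):
    mu = x.mean(axis=-1, keepdims=True)
    sigma = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sigma + eps)

def attentive_module(Q, K, V, W1, b1, W2, b2):
    """Single-head scaled dot-product attention followed by an FFN.

    Q: (len_q, d), K and V: (len_k, d). No positional encoding is added,
    and the FFN hidden size equals d, matching the question's observation.
    """
    d = Q.shape[-1]
    # single attention head: no projection into multiple subspaces
    scores = Q @ K.T / np.sqrt(d)                   # (len_q, len_k)
    att = softmax(scores, axis=-1) @ V              # (len_q, d)
    x = layer_norm(Q + att)                         # residual + layer norm
    ffn = np.maximum(0.0, x @ W1 + b1) @ W2 + b2    # hidden size == d
    return layer_norm(x + ffn)

# hypothetical shapes: utterance U is (n_u, d), response R is (n_r, d)
d, n_u, n_r = 4, 5, 3
rng = np.random.default_rng(0)
U, R = rng.normal(size=(n_u, d)), rng.normal(size=(n_r, d))
W1, b1 = rng.normal(size=(d, d)), np.zeros(d)
W2, b2 = rng.normal(size=(d, d)), np.zeros(d)

# the two cross-attention calls from the question:
U_cross = attentive_module(U, R, R, W1, b1, W2, b2)  # U attends over R
R_cross = attentive_module(R, U, U, W1, b1, W2, b2)  # R attends over U
```

With query = U and key/value = R, each utterance position gathers information from the response, and the symmetric call lets the response attend over the utterance; that pairing is what makes the two formulas "cross" attention.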

@xyzhou-puck
Collaborator

xyzhou-puck commented Nov 5, 2018 via email

@dzpdzpdzp
Author

Thank you very much for your reply.
