Hi, I'm currently trying to perform retrieval tasks on the SCIDOCS benchmark using your embedding models TS-Aspire and OT-Aspire. The code you provide indicates that the TS-Aspire model outputs both a document-level CLS representation and sentence representations. Which representation should I use for document-level retrieval as done in your paper? Also, between L2 distance and cosine similarity, which is the optimal way to score document retrieval? It seems that your model was trained with an L2 loss. In addition, when using OT-Aspire for scientific paper retrieval, should I use the Wasserstein distance to reproduce the results reported in your paper? Finally, is the multi-task-trained Aspire (OT + TS) not uploaded to Hugging Face?

Thank you.
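For context, this is roughly what I'm doing at the moment. It's only a minimal sketch: I'm assuming the checkpoint name below, that it loads with the standard `transformers` `AutoModel` API, and I'm approximating your contextual sentence representations by encoding sentences independently with mean pooling, which I know may not match your implementation.

```python
# Minimal sketch, not the authors' evaluation code.
# (a) Take the [CLS] output as a document-level embedding and compare papers
#     with L2 distance vs. cosine similarity.
# (b) Compute a simplified Wasserstein distance over per-sentence embeddings
#     for an OT-style comparison (via the POT library).
# Checkpoint name and sentence aggregation are assumptions on my side.
import numpy as np
import ot  # POT: pip install pot
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "allenai/aspire-contextualsentence-singlem-compsci"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()


def cls_embedding(title: str, abstract: str) -> torch.Tensor:
    """Document-level embedding: the [CLS] token of 'title [SEP] abstract'."""
    text = title + tokenizer.sep_token + abstract
    enc = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    return out.last_hidden_state[:, 0, :].squeeze(0)


def sentence_embeddings(sentences: list[str]) -> torch.Tensor:
    """Per-sentence embeddings via mean pooling over token states.
    (Only an approximation: Aspire's sentence representations are contextual,
    pooled from the full-document encoding.)"""
    enc = tokenizer(sentences, padding=True, truncation=True, max_length=128,
                    return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    mask = enc["attention_mask"].unsqueeze(-1).float()
    return (out.last_hidden_state * mask).sum(1) / mask.sum(1)  # [n_sents, dim]


# Document-level score: L2 distance (smaller = closer) vs. cosine similarity.
q_doc = cls_embedding("Query title", "Query abstract ...")
c_doc = cls_embedding("Candidate title", "Candidate abstract ...")
print("L2:", torch.dist(q_doc, c_doc, p=2).item())
print("cosine:", F.cosine_similarity(q_doc.unsqueeze(0), c_doc.unsqueeze(0)).item())

# OT-style score: Wasserstein distance between the two sentence-embedding sets,
# with uniform weights and a pairwise L2 cost matrix.
q_sents = sentence_embeddings(["Query sentence one.", "Query sentence two."])
c_sents = sentence_embeddings(["Candidate sentence one.", "Candidate sentence two."])
cost = torch.cdist(q_sents, c_sents, p=2).numpy()
a = np.full(q_sents.shape[0], 1.0 / q_sents.shape[0])
b = np.full(c_sents.shape[0], 1.0 / c_sents.shape[0])
print("Wasserstein:", ot.emd2(a, b, cost))
```

If any of this differs from how the paper's SCIDOCS numbers were produced (which representation, which distance, and how the transport cost is set up for OT-Aspire), a pointer to the right settings would be much appreciated.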