This repository has been archived by the owner on Dec 16, 2022. It is now read-only.
Evaluation metrics for T5? #5325
Unanswered
KaiserWhoLearns asked this question in Q&A
Replies: 1 comment
Yeah, there's no good reason those should be hardcoded. We should do something similar to what we've done in CopyNet: https://github.com/allenai/allennlp-models/blob/db0e21ae116e5d241252e4482bb8b9514645b979/allennlp_models/generation/models/copynet_seq2seq.py#L88-L89. Would you like to make a PR?
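The pattern the reply points to is dependency injection: the model accepts its metric objects through the constructor rather than instantiating them internally, so callers can swap in whatever suits their task. A minimal sketch of that pattern in plain Python — the `Metric`, `ExactMatch`, and `Seq2SeqModel` classes here are illustrative stand-ins, not AllenNLP's actual API:

```python
from typing import Dict, List, Optional


class Metric:
    """Illustrative stand-in for a metric interface."""

    def __call__(self, predictions: List[str], targets: List[str]) -> None:
        raise NotImplementedError

    def get_metric(self, reset: bool = False) -> Dict[str, float]:
        raise NotImplementedError


class ExactMatch(Metric):
    """Toy metric: fraction of predictions equal to their targets."""

    def __init__(self) -> None:
        self.correct = 0
        self.total = 0

    def __call__(self, predictions: List[str], targets: List[str]) -> None:
        for pred, target in zip(predictions, targets):
            self.correct += int(pred == target)
            self.total += 1

    def get_metric(self, reset: bool = False) -> Dict[str, float]:
        value = self.correct / self.total if self.total else 0.0
        if reset:
            self.correct = self.total = 0
        return {"exact_match": value}


class Seq2SeqModel:
    """Metrics are injected, not hardcoded, so callers can override them."""

    def __init__(self, metrics: Optional[List[Metric]] = None) -> None:
        # A sensible default for the common case; callers replace it freely.
        self._metrics = metrics if metrics is not None else [ExactMatch()]

    def update_metrics(self, predictions: List[str], targets: List[str]) -> None:
        for metric in self._metrics:
            metric(predictions, targets)

    def get_metrics(self, reset: bool = False) -> Dict[str, float]:
        result: Dict[str, float] = {}
        for metric in self._metrics:
            result.update(metric.get_metric(reset))
        return result


# Usage: pass whichever metrics fit the task instead of editing the model.
model = Seq2SeqModel(metrics=[ExactMatch()])
model.update_metrics(["a", "b"], ["a", "c"])
```

In AllenNLP, a constructor argument like this is also configurable from the JSON/Jsonnet config, which is why the linked CopyNet lines take this shape.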
For the piece of T5 code I am looking at, the only evaluation metrics seem to be hardcoded to ROUGE and BLEU, which belong to summarization tasks. What I am currently doing is copying the file and editing it on my end. I am pretty new to AllenNLP, so I wonder whether there is a trick that avoids changing much code. What is the most convenient way to customize the metrics?
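Short of changing the library, a lighter-weight workaround than copying the whole file is to subclass the model and replace the hardcoded metric objects in the subclass's constructor. A hedged sketch of the idea — every class and attribute name here is hypothetical, not the real allennlp-models T5 API:

```python
from typing import Dict, List


class BleuStub:
    """Hypothetical stand-in for a hardcoded summarization metric."""

    def get_metric(self, reset: bool = False) -> Dict[str, float]:
        return {"BLEU": 0.0}


class AccuracyStub:
    """Hypothetical task-specific metric the user wants instead."""

    def get_metric(self, reset: bool = False) -> Dict[str, float]:
        return {"accuracy": 1.0}


class T5Like:
    """Hypothetical base model that hardcodes its metrics internally."""

    def __init__(self) -> None:
        self._metrics: List = [BleuStub()]  # hardcoded, as described above

    def get_metrics(self, reset: bool = False) -> Dict[str, float]:
        out: Dict[str, float] = {}
        for metric in self._metrics:
            out.update(metric.get_metric(reset))
        return out


class CustomT5(T5Like):
    """Subclass that swaps the hardcoded metrics for its own."""

    def __init__(self) -> None:
        super().__init__()
        # Replace the metric list instead of copying and editing the file.
        self._metrics = [AccuracyStub()]
```

This only works cleanly when the metrics live in an instance attribute the subclass can reach; if they are buried deeper, the constructor-injection fix suggested in the reply is the proper long-term solution.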