Can you provide the code, or more detail on how you zero-shot evaluate the Arxiv dataset?
I cannot get good results when trying arxiv summarization. I guess it is because I don't know the right prompt, or because the model size is not 7B?
Thanks for your interest in our work! In our paper, the only results we give on arxiv are language modeling perplexity numbers for small models; we do not evaluate LongLLaMA on the arxiv summarization downstream task. Note that our model is not instruction tuned, which means it cannot really do zero-shot summarization. You could try few-shot summarization (though I am not quite sure a 3B model could really do that), or prompt engineering to match the format of your target document. Also, please stay tuned for the upcoming instruction-tuned models, which will definitely be able to do some summarization!
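If you want to experiment with few-shot prompting in the meantime, here is a minimal sketch using the public `syzymon/long_llama_3b` checkpoint via Hugging Face transformers. The prompt template and generation settings are just illustrations, not our evaluation code:

```python
import torch
from transformers import LlamaTokenizer, AutoModelForCausalLM

tokenizer = LlamaTokenizer.from_pretrained("syzymon/long_llama_3b")
model = AutoModelForCausalLM.from_pretrained(
    "syzymon/long_llama_3b",
    torch_dtype=torch.float32,
    trust_remote_code=True,  # the checkpoint ships custom modeling code
)

# Few-shot prompt: (article, summary) demonstration pairs followed by the
# target article. The exact template here is a guess -- the base model is
# not instruction tuned, so it can only continue whatever pattern the
# prompt establishes.
target_article = "..."  # the arxiv document you want summarized
prompt = (
    "Article: <example article 1>\nSummary: <example summary 1>\n\n"
    "Article: <example article 2>\nSummary: <example summary 2>\n\n"
    f"Article: {target_article}\nSummary:"
)

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens (the model's continuation).
summary = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(summary)
```

As for perplexity: it is just exponentiated cross-entropy, so a generic recipe (a sketch, not our exact evaluation setup) looks like this:

```python
import math

text = "..."  # an arxiv document, or a chunk that fits in the context
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    # Passing input_ids as labels makes the model return the shifted
    # cross-entropy loss, as in standard HF causal LMs.
    out = model(input_ids=enc["input_ids"], labels=enc["input_ids"])
perplexity = math.exp(out.loss.item())
print(f"perplexity: {perplexity:.2f}")
```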