Replies: 4 comments 17 replies
-
Please give GLM4 a try. Especially for German users, this is the best option IMHO.
-
I cannot confirm this. For a document that gemma processed fine, glm4 came up with this correspondent:
-
Out of curiosity: why did you pick such an old version of llama compared to the other models? I would be interested to see the results of llama3.1:8b, especially compared to gemma2:9b, since according to Meta it surpasses that model in benchmarks. Maybe you can have a look if you find the time :)
-
@mamema are you letting it suggest tags, or are you using existing tags only? If the latter, which models do better at not hallucinating new tags? Also, are you using the default Ollama context window size?
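On the context window point: Ollama defaults to a fairly small context window unless it is raised explicitly, which can silently truncate longer documents. One way to raise it (a sketch; the model name and window size here are just examples) is a custom Modelfile:

```
FROM gemma2:9b
PARAMETER num_ctx 8192
```

Then build and use the variant with `ollama create gemma2-9b-8k -f Modelfile`.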
-
I was trying to optimize the outcome of tagging and title assignment, so I thought about what I could do and ran some tests.
In addition to the default prompt, I had three major requirements: use the year of the document as a tag, don't use my name in tags or correspondents, and use my language for tags.
Your prompt may not need those requirements, but for me this was a great way to differentiate the following list of models.
Every model got the same prompt to work with:
- Mistral7B
- llama2:7b
- Mistral-NeMo
- phi4
- gemma2:9b
For me, the winner is "gemma2:9b"; your mileage may vary.
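The three extra requirements above can simply be appended to the default prompt before it is sent to the model. A minimal sketch of that (the prompt wording and function name here are hypothetical, not taken from any actual tool):

```python
# Hypothetical sketch: append user-specific requirements to a default
# tagging prompt. The wording below is illustrative only.

DEFAULT_PROMPT = "Suggest a title, tags, and a correspondent for this document."

EXTRA_REQUIREMENTS = [
    "Use the year of the document as one of the tags.",
    "Do not use my name in tags or in the correspondent.",
    "Write all tags in the document's language.",
]


def build_prompt(default_prompt: str, requirements: list[str]) -> str:
    """Combine the default prompt with a bulleted list of requirements."""
    lines = [default_prompt, "Additional requirements:"]
    lines += [f"- {r}" for r in requirements]
    return "\n".join(lines)


prompt = build_prompt(DEFAULT_PROMPT, EXTRA_REQUIREMENTS)
print(prompt)
```

Since every model gets the same combined prompt, differences in the output come down to the model itself, which is what makes this a usable comparison.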