Ollama model support #142
-
I've used Mistral:7b and Mistral NeMo, which seemed to give better results than Llama3.2, and Qwen2.5, which was somewhere in between. They all still create new tags even when instructed not to. I haven't tried it with OpenAI because, well, privacy. I'm guessing it would also ignore the instruction to only use the provided tags, although maybe not as egregiously.
-
You can use any model listed on Ollama.com, and also models from Hugging Face.
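If you want to sanity-check that a freshly pulled model actually responds before pointing the project at it, a minimal Python sketch against Ollama's local HTTP API looks like this (assuming the default endpoint at http://localhost:11434; the model name and prompt are just placeholders):

```python
import requests

# Any model pulled with `ollama pull` can be addressed by its name here,
# including Hugging Face GGUF models pulled via a "hf.co/<user>/<repo>" reference.
MODEL = "mistral:7b"  # placeholder -- replace with the model you pulled

response = requests.post(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    json={
        "model": MODEL,
        "prompt": "Reply with the single word OK.",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])
```

If that prints a reply, the same model name should be usable wherever the project asks for one.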
-
For my first 15 documents, "phi4" gives the best results with a slightly modified standard template, at least for Correspondent and Title.
-
Regarding Mistral: which Mistral model are you using?
-
For those with a lower-end system with no GPU or only a small GPU, I have been liking the results from the granite3.1-moe:3b model.
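As a quick check on a low-end box, a small sketch like the following (again assuming Ollama's default endpoint at http://localhost:11434) lists the locally pulled models and their on-disk sizes, so you can confirm a compact model such as granite3.1-moe:3b is actually available:

```python
import requests

# Query the local Ollama instance for the models it currently has pulled.
resp = requests.get("http://localhost:11434/api/tags", timeout=30)
resp.raise_for_status()

# Each entry includes the model name and its size in bytes.
for model in resp.json().get("models", []):
    size_gb = model["size"] / 1e9
    print(f'{model["name"]}: {size_gb:.1f} GB')
```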
-
Are the models listed in the description the only ones that can be used with this project? Sorry, I'm new to hosting LLMs.