v2.1.9 #147
-
You may also need to explicitly filter out tags that don't exist from the response, since LLMs aren't all that great at following that type of instruction.
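As a rough illustration of that post-processing step (this is not paperless-ai's actual code; the function and variable names are made up, and `existingTags` would come from the Paperless API), one could compare the model's tags against the known list and drop the rest:

```typescript
// Hypothetical post-processing step: keep only tags that already exist in
// Paperless, dropping anything the model invented.
function filterToExistingTags(responseTags: string[], existingTags: string[]): string[] {
  const known = new Set(existingTags.map((t) => t.trim().toLowerCase()));
  return responseTags.filter((t) => known.has(t.trim().toLowerCase()));
}
```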
-
Frankly, I hadn't expected that feeding all tags and correspondents into Ollama would work. It feels like a "shut up" kind of task, and it is:
of 14 documents, only one was tagged and processed within the last two hours, though in very good quality. But the issue above seems to have stopped the whole process.
-
...and I have this "enter username" situation again, without resetting or deleting anything.
-
I have this with Ollama (too slow...). Is there something I can do on the Ollama and/or paperless-ai side?
-
I just discovered that on every "save settings" action this block gets appended: `Return the result EXCLUSIVELY as a JSON object. The Tags and Title MUST be in the language that is used in the document.: " {` so duplicates of this block pile up when settings are saved multiple times. It also seems to be a problem if someone (like myself) changed the date format to DD-MM-YYYY via a prompt instruction.
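A minimal sketch of how such a save could be made idempotent, assuming the settings handler appends the block as plain text (the function name is hypothetical; the instruction text is taken from the report above):

```typescript
// Illustrative guard: only append the fixed instruction block if it is not
// already present, so repeated "save settings" actions don't duplicate it.
const JSON_INSTRUCTION =
  "Return the result EXCLUSIVELY as a JSON object. " +
  "The Tags and Title MUST be in the language that is used in the document.";

function withJsonInstruction(userPrompt: string): string {
  return userPrompt.includes(JSON_INSTRUCTION)
    ? userPrompt
    : `${userPrompt}\n${JSON_INSTRUCTION}`;
}
```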
-
Full Changelog: v2.1.8...v2.1.9
Brought back num_ctx (100000) for Ollama.
Added a new Settings option: "Use existing Correspondents and Tags".
If you choose yes, all your tags and correspondents will be added to the prompt for later data matching and better, more consistent results. BUT it will increase your token length, make requests costlier, and with Ollama make it much more likely to run into issues.
When you have a lot of preexisting data, the prompt will sometimes consist of your data and nothing more. So pay attention!
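A minimal sketch of what this option implies for an Ollama request (the `/api/generate` endpoint, request shape, and `options.num_ctx` field follow the public Ollama API; the model name and surrounding variable names are illustrative, not paperless-ai's actual implementation):

```typescript
// Sketch: append existing tags and correspondents to the prompt, then call
// Ollama with the restored num_ctx of 100000. The prompt grows with the
// amount of preexisting data, which is where the warning above comes from.
async function analyzeDocument(
  basePrompt: string,
  documentText: string,
  existingTags: string[],
  existingCorrespondents: string[],
): Promise<string> {
  const prompt =
    `${basePrompt}\n` +
    `Existing tags: ${existingTags.join(", ")}\n` +
    `Existing correspondents: ${existingCorrespondents.join(", ")}\n\n` +
    documentText;

  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3", // illustrative model name
      prompt,
      stream: false,
      options: { num_ctx: 100000 },
    }),
  });
  const data = await res.json();
  return data.response;
}
```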
This discussion was created from the release v2.1.9.