[Feature] Local Ollama-Deployed Deepseek R1 14B Output Optimization for Zotero Translation #1086
Labels: enhancement (New feature or request)
Describe the feature request
Is your feature request related to a problem? Please describe.
I'm experiencing an issue where the Deepseek R1 14B model, deployed locally via Ollama, includes its chain-of-thought reasoning in the output when performing translations for Zotero. This makes the translation results excessively long and verbose, and unsuitable for practical use.
Why do you need this feature?
I need this feature to ensure that the translation outputs remain concise and focused solely on the final translated text. Removing the chain-of-thought content will enhance clarity and usability, making the translations more efficient for integration with Zotero.
Describe the solution you'd like
I would like an option or configuration within the plugin that allows the suppression or exclusion of chain-of-thought reasoning in the final output. Ideally, the model should produce only the final translation result without any internal reasoning details, ensuring a cleaner and more concise output.
Alternatives you've considered
I attempted prompt engineering by adding instructions in the plugin configuration to suppress the chain-of-thought content, but this had no effect. Manual post-processing to remove the extra content is inefficient and error-prone, so a built-in solution within the plugin itself is the most reliable option.
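For reference, the post-processing a built-in option could perform is small. This is only a sketch, assuming the model wraps its reasoning in `<think>…</think>` tags (DeepSeek R1's default output format); the function name is illustrative, not part of the plugin:

```python
import re

# Matches a complete chain-of-thought block, including trailing whitespace.
# re.DOTALL lets "." span the newlines inside the reasoning text.
THINK_BLOCK = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def strip_chain_of_thought(output: str) -> str:
    """Hypothetical helper: return only the final translation text."""
    # Drop every closed <think>...</think> span.
    cleaned = THINK_BLOCK.sub("", output)
    # A streamed response may be cut off mid-block: if an opening
    # <think> has no closing tag yet, keep only the text before it.
    if "<think>" in cleaned:
        cleaned = cleaned.split("<think>", 1)[0]
    return cleaned.strip()
```

Applied to a raw model response, this keeps only the text outside the reasoning block, e.g. `strip_chain_of_thought("<think>…reasoning…</think>Bonjour")` yields `"Bonjour"`.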
Anything else?
No response