[Feature Request] Venice API #300
Thank you for filing a feature request.
Have you tried setting
There's a tradeoff for a programmer nonetheless. They would give up their time to build a feature someone else is requesting.
Just took a quick look at venice.ai. I'd need to fund an account with them to get a key. Would you be keen to donate a key to get development going? Since you're interested in seeing new features in this project, have you considered sponsoring?
Thank you for your great answer. I will start checking out your suggestion. I would like to go nuts trying to modify, for example, the openai.el file, but I don't feel totally ready to try a fork. I'm still trying to get my head around all the terms, both regarding the API, elisp, and the git process in general, but I'll have a crack at it. If stuff breaks I can always reinstall :)
Oh, I totally get that. I am really sorry if I sounded like it was nothing; with hindsight I understand it can be seen that way. I only meant that Venice seems to be very compliant with the OpenAI standard, and that for an experienced programmer INTERESTED in doing it for him/herself, it probably wouldn't be a huge task. I certainly did not mean "It's easy for you, so go fix it for me, so I don't have to learn". It was all about giving as much context as I had from my limited understanding of the task at hand.
Both good suggestions. I'll definitely keep sponsoring in mind. I'm new to requesting features (or rather "suggesting", not "requesting") and to the social aspect of development (even if I do keep some dotfiles and small private projects on GitLab), so I'm sorry if I'm asking for too much or in the wrong way. Thank you for your time! :)
If going down this route, maybe look at chatgpt-shell-openrouter.el instead. It's a very similar scenario (offers OpenAI usage through another provider).
Hey, no worries. I appreciate it. Thank you. Maintaining and evolving this package takes time, so I'm always grateful for sponsors.
Thank you. I'm already knee-deep into this now. I'll see if the openrouter way is easier. Preferably I'd like to add my own package alongside the openai and openrouter ones, but I can't seem to grasp how to connect a new "chatgpt-shell-venice.el" to the main package, and it feels like it would break in the next update anyway. Again, just some context: it probably is similar, but Venice doesn't give access to OpenAI models at all, so it might not be a perfect fit with the openrouter package; it just conforms to OpenAI's API standards. Right now it gives access to a few llamas, dolphin, deepseek, and qwen. I'll continue to dig into it and break (my copy of) your great package :)
See chatgpt-shell.el:

```elisp
(defun chatgpt-shell--make-default-models ()
  "Create a list of default models by combining models from different providers.
This function aggregates models from OpenAI, Anthropic, Google, and Ollama.
It returns a list containing all available models from these providers."
  (append (chatgpt-shell-openai-models)
          (chatgpt-shell-anthropic-models)
          (chatgpt-shell-google-models)
          (chatgpt-shell-kagi-models)
          (chatgpt-shell-ollama-models)
          (chatgpt-shell-perplexity-models)
          (chatgpt-shell-openrouter-models)))
```

You can then use … While working on things, it helps to … Getting streamed responses is a little more difficult than synchronous ones, so maybe start with (setq chatgpt-shell-streaming nil). Hope this helps. Feel free to ask anything else.
You are very generous with your time and knowledge. I have gotten a hacked version working by using the OpenAI file as a base. I'm going to try to move it into a separate file as you suggest and then try to include Venice-specific commands. If I get that far (and it's going to take weeks because of lack of free time), I'll try to open a pull request so you can have a look :) Thank you for everything.
Nice work!
Hello, I want to phrase this efficiently, but also introduce myself for a second to show why I'm asking instead of contributing. I've been using Emacs for around 18 months now and I'm slowly learning a bit of lisp (as you do when you try to configure Emacs), but I feel I'm not proficient enough to handle this, and I think it would be easy for an experienced programmer.
I would like to be able to connect to the Venice API and use their models privately. Venice conforms to a high degree to the OpenAI standard, according to themselves, so I think small tweaks would be enough.
Venice is great because it obfuscates requests and is an actual private AI cloud service. I think many Emacs users would appreciate the option to use models through Venice.
https://docs.venice.ai/api-reference/api-spec?_gl=1*1hiom2c*_gcl_au*NTY2Njk3MjkwLjE3MzczMDQ5NTA.
Thank you for your time. If someone feels they have the know how, but not the time, please point me in the right direction.