Would you consider a DI approach for your OpenAI adapters? #290
-
Hi Adrian! Glad to know you like GPTCache. With the LangChain adapter you should be able to use any LLM as the backend, and we are working on supporting LLMs like LLaMA and Dolly hosted on a local machine. Is that what you are looking for?
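For reference, a minimal sketch of the LangChain route, based on the project's examples at the time; the `LangChainLLMs` adapter and the `langchain.llms.OpenAI` import path may differ across versions:

```python
from gptcache import cache
from gptcache.adapter.langchain_models import LangChainLLMs
from langchain.llms import OpenAI

# Initialize the cache before issuing any requests.
cache.init()
cache.set_openai_key()

# Wrap any LangChain LLM; GPTCache sits in front of it transparently,
# so the same caching layer works regardless of the backend model.
llm = LangChainLLMs(llm=OpenAI(temperature=0))
answer = llm("What is GPTCache?")
```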
-
First of all, great work! Love what you're doing here with the library; I'm excited to see this grow and develop.
From the examples provided, I noticed that OpenAI API calls within your codebase go through your own instance of `openai` under the hood, which you've subclassed and wrapped with interceptors. Although this works well, many users strongly prefer to manage their own dependencies, which avoids vendor/package lock-in for reasons including security and maintainability.
Would you explore enabling this by implementing some kind of dependency injection pattern in your cache adapter layer, along the lines of the sketch below?
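To make the idea concrete, here is a hypothetical constructor-injection sketch. None of these names are GPTCache's real API; `ChatCompletionAdapter`, `create_fn`, and the dict-based cache are all illustrative:

```python
from typing import Any, Callable, Dict

class ChatCompletionAdapter:
    """Caches a user-supplied completion callable instead of importing openai itself."""

    def __init__(self, create_fn: Callable[..., Any], cache: Dict[str, Any]):
        # create_fn is injected by the caller (e.g. openai.ChatCompletion.create),
        # so the library never owns, imports, or patches the openai dependency.
        self._create = create_fn
        self._cache = cache

    def create(self, **kwargs: Any) -> Any:
        # Derive a cache key from the request arguments; on a miss,
        # delegate to the injected callable and store the result.
        key = repr(sorted(kwargs.items()))
        if key in self._cache:
            return self._cache[key]
        result = self._create(**kwargs)
        self._cache[key] = result
        return result

# Callers wire in their own client:
#   import openai
#   adapter = ChatCompletionAdapter(openai.ChatCompletion.create, {})
#   adapter.create(model="gpt-3.5-turbo", messages=[...])
```

This keeps the `openai` package out of the library's import graph entirely, so users can pin, upgrade, or swap the client on their own schedule.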
Keen to hear your thoughts!