Describe the feature you'd like and why
The environment tab (for setting environment variables) is not very intuitive in its current state. Instead of just asking for a base URL, it should have a provider dropdown and an option to override the base URL if necessary. Then, based on the selected provider, the environment variables can be shown dynamically so the setup isn't as confusing.
For example, if you select OpenRouter for your LLM, you still need a different provider for the embedding model, since OpenRouter doesn't offer embedding models. But if you choose OpenAI or Ollama, you can use the same provider for both the LLM and the embedding model.
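To illustrate the idea (just a rough sketch, not Archon's actual config; the names, fields, and default URLs here are my own assumptions), the dropdown and the dynamic fields could be driven by a small per-provider capability map:

```typescript
// Hypothetical capability map; provider ids, fields, and defaults are illustrative only.
interface ProviderInfo {
  label: string;
  defaultBaseUrl: string;      // pre-filled in the UI, but overridable if necessary
  supportsLLM: boolean;
  supportsEmbeddings: boolean;
}

const PROVIDERS: Record<string, ProviderInfo> = {
  openai: {
    label: "OpenAI",
    defaultBaseUrl: "https://api.openai.com/v1",
    supportsLLM: true,
    supportsEmbeddings: true,
  },
  ollama: {
    label: "Ollama",
    defaultBaseUrl: "http://localhost:11434/v1",
    supportsLLM: true,
    supportsEmbeddings: true,
  },
  openrouter: {
    label: "OpenRouter",
    defaultBaseUrl: "https://openrouter.ai/api/v1",
    supportsLLM: true,
    supportsEmbeddings: false, // forces the user to pick a separate embedding provider
  },
};
```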
User Impact
This will make the Archon setup process clearer for everyone.
Implementation Details (optional)
I'm probably going to separate the provider selection for the LLM and the embedding model; I'll have to think on this more. But basically, the user will first select their provider, and then the environment variable options related to the LLM/embeddings will appear, with hints on how to set things up specifically for the selected provider. A rough sketch of what that could look like is below.
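A minimal sketch of the settings shape and per-provider hints, assuming separate LLM and embedding selections (all names here are hypothetical, not existing Archon code):

```typescript
// Hypothetical settings shape with separate provider selections for LLM and embeddings.
type ProviderId = "openai" | "ollama" | "openrouter";

interface EnvSettings {
  llmProvider: ProviderId;
  llmBaseUrlOverride?: string;       // optional; the provider's default base URL is used otherwise
  embeddingProvider: ProviderId;     // limited to providers that actually support embeddings
  embeddingBaseUrlOverride?: string;
}

// Per-provider setup hints shown next to the relevant fields once a provider is selected.
const SETUP_HINTS: Record<ProviderId, string> = {
  openai: "Create an API key at platform.openai.com and paste it into the API key field.",
  ollama: "No API key required; make sure your local Ollama server is running.",
  openrouter: "Create an API key at openrouter.ai; pick a separate provider for embeddings below.",
};
```

The embedding dropdown would then only list providers whose capability map entry has embeddings enabled, which is how the OpenRouter case above gets handled without extra explanation in the UI.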