feat: allow to use customized GraphRAG settings.yaml #387
Conversation
@ronchengang this works well enough for the indexing process, but beware that the retrieval process uses a different set of params to choose the embedding model. Please also synchronize this setting to make everything seamless.
Hello, I used your code but got an error:
@taprosoft done.
@zzll22 Could you please make sure the "Force reindex file" option is checked and try again? |
LGTM
Thanks for your contribution @ronchengang.
@ronchengang Can this allow me to use a relay API key (an OpenAI-compatible proxy), or can I only use an official OpenAI key? (That is too expensive.)
@ronchengang @taprosoft The latest branch reports an error when using Ollama + GraphRAG indexing:
Description
The GraphRAG index generation depends on settings.yaml. This file contains many configurable items, several of which are crucial to index generation, such as the LLM-related settings, request_timeout, and concurrent_requests. It is impractical to expose all of these through .env: there are too many of them, and they are hard to manage there. The few configurable items in the current .env.example are sufficient for people who use OpenAI to build the GraphRAG index, but for users who run private models such as Ollama and need more advanced configuration, they are not enough. Several posts in the current issue list ask for a more flexible way to supply advanced configurations.

This change allows users to provide a self-defined settings.yaml. To do so, prepare a settings.yaml.example file, put it in the root folder, and add a new environment variable named USE_CUSTOMIZED_GRAPHRAG_SETTING to the .env file. When its value is true, the user-provided settings.yaml is applied during the GraphRAG indexing process. In this way, users can run self-hosted models such as Ollama and customize other configurations.
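For reference, a minimal sketch of what such a user-provided settings.yaml might look like for an Ollama setup (the model names and endpoint below are illustrative placeholders, not values from this PR):

```yaml
llm:
  api_key: ollama                 # placeholder; Ollama ignores the key
  type: openai_chat               # Ollama exposes an OpenAI-compatible API
  model: llama3                   # any chat model pulled into Ollama
  api_base: http://localhost:11434/v1
  request_timeout: 180.0          # raise for slow local inference
  concurrent_requests: 5          # keep low for a single local GPU

embeddings:
  llm:
    api_key: ollama
    type: openai_embedding
    model: nomic-embed-text       # any embedding model pulled into Ollama
    api_base: http://localhost:11434/v1
```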
Fixes:
- [REQUEST] - Can we have a settings.yaml file on the GraphRAG indexing module? #299
- Ollama Graph Embedding Fails in Local LLM Setup #283
- [REQUEST] - Can graphrag support other free LLMs like qwen2? #245
- [BUG] - Ollama OpenAI not working #224
- How to set the graphRAG with local ollama #212

and more.
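The selection logic described above can be sketched roughly as follows. This is a hypothetical helper, not the PR's actual code: the function name `resolve_graphrag_settings` and the directory arguments are assumptions for illustration; only the env-var name and the settings.yaml.example convention come from the description.

```python
import os
import shutil
from pathlib import Path
from typing import Optional


def resolve_graphrag_settings(index_root: Path, project_root: Path) -> Optional[Path]:
    """Decide which settings.yaml GraphRAG indexing should use.

    Hypothetical sketch: if USE_CUSTOMIZED_GRAPHRAG_SETTING=true in the
    environment, copy the user-provided settings.yaml.example from the
    project root into the index working directory as settings.yaml so the
    indexer picks it up; otherwise return None and let GraphRAG fall back
    to its generated defaults.
    """
    use_custom = os.getenv("USE_CUSTOMIZED_GRAPHRAG_SETTING", "false").lower() == "true"
    if not use_custom:
        return None

    src = project_root / "settings.yaml.example"
    if not src.exists():
        raise FileNotFoundError(f"Customized settings requested but {src} is missing")

    dst = index_root / "settings.yaml"
    shutil.copy(src, dst)
    return dst
```

With this shape, users who never set the flag see no behavior change, while Ollama users opt in explicitly via .env.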