Self Checks
I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
[FOR CHINESE USERS] Please submit issues in English; otherwise they will be closed. Thank you! :)
Please do not modify this template, and fill in all the required fields.
1. Is this request related to a challenge you're experiencing? Tell me about your story.
Background
We are using Milvus as the vector database for Dify, specifically for internal chatbot development. Currently, every time we upload a data file, Dify creates a new collection within the configured Milvus database. However, Milvus has a recommended limit of 10,000 collections.
Problem
Given the current behavior of Dify’s Milvus integration, where each uploaded dataset leads to the creation of a new collection, there is a significant risk of hitting this limit over time.
Proposed idea
Instead of creating a new collection for every dataset, Dify should leverage partition keys within a single collection to separate data logically. This approach aligns with Milvus’ recommended practices and avoids the risk of hitting the collection limit.
2. Additional context or comments
No response
3. Can you help us with this feature?
I am interested in contributing to this feature.