
Error: src/pipelines/retrieval/retrieval.py table_contents #1336

Open
lsky-walt opened this issue Feb 25, 2025 · 0 comments
Labels
bug Something isn't working

Comments

@lsky-walt
Describe the bug
When the ai-service is initialized, a table_contents error is reported from src/pipelines/retrieval/retrieval.py.

To Reproduce
Steps to reproduce the behavior:

  1. Start Wren AI with wren-launcher-linux and choose the custom setup option
  2. Run docker logs -f wrenai-wren-ai-service-1 and watch the startup logs for the table_contents error

Expected behavior
The ai-service should start successfully, without the table_contents error.

Screenshots

[Screenshot of the error attached]

Desktop (please complete the following information):

  • OS: macOS
  • Browser: Chrome 133

Wren AI Information
WREN_PRODUCT_VERSION=0.15.3
WREN_ENGINE_VERSION=0.13.1
WREN_AI_SERVICE_VERSION=0.15.17
IBIS_SERVER_VERSION=0.13.1
WREN_UI_VERSION=0.20.1
WREN_BOOTSTRAP_VERSION=0.1.5

Additional context
Using the Alibaba Cloud DashScope API through its OpenAI-compatible endpoint.
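
For context, this is roughly the request shape an OpenAI-compatible call to the DashScope endpoint takes with the settings from the config below. This is an illustrative sketch only; build_chat_request is a hypothetical helper, not a Wren AI or LiteLLM function, and no network call is made here.

```python
# Hypothetical helper: shape of an OpenAI-compatible /chat/completions request
# against the DashScope endpoint configured below. Not part of Wren AI.

DASHSCOPE_BASE = "https://dashscope.aliyuncs.com/compatible-mode/v1"

def build_chat_request(model: str, prompt: str, api_key: str) -> dict:
    """Return URL, headers, and JSON body for a chat completion call.

    LiteLLM model ids like "openai/qwen-plus-2025-01-25" carry a provider
    prefix; the upstream API receives only the bare model name.
    """
    upstream_model = model.split("/", 1)[-1]  # "openai/x" -> "x"
    return {
        "url": f"{DASHSCOPE_BASE}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": {
            "model": upstream_model,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0,  # matches the kwargs in the config below
            "n": 1,
        },
    }
```
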

Relevant log output

# you should rename this file to config.yaml and put it in ~/.wrenai
# please pay attention to the comments starting with # and adjust the config accordingly

type: llm
provider: litellm_llm
models:
  # put LLM_OLLAMA_API_KEY=<your DashScope API key> in ~/.wrenai/.env
  - api_base: https://dashscope.aliyuncs.com/compatible-mode/v1 # Alibaba Cloud DashScope OpenAI-compatible endpoint
    api_key_name: LLM_OLLAMA_API_KEY
    model: openai/qwen-plus-2025-01-25 # openai/<model_name> routes through the OpenAI-compatible provider
    timeout: 600
    kwargs:
      n: 1
      temperature: 0

---
type: embedder
provider: litellm_embedder
models:
  # put EMBEDDER_OLLAMA_API_KEY=<your DashScope API key> in ~/.wrenai/.env
  - model: openai/text-embedding-v3 # DashScope embedding model, served via the OpenAI-compatible API
    api_base: https://dashscope.aliyuncs.com/compatible-mode/v1
    api_key_name: EMBEDDER_OLLAMA_API_KEY
    timeout: 600

---
type: engine
provider: wren_ui
endpoint: http://wren-ui:3000

---
type: document_store
provider: qdrant
location: http://qdrant:6333
embedding_model_dim: 1024 # put your embedding model dimension here
timeout: 120
recreate_index: true

---
# please change the llm and embedder names to the ones you want to use
# the format of llm and embedder should be <provider>.<model_name> such as litellm_llm.gpt-4o-2024-08-06
# the pipes may not be the latest version; please refer to the latest version: https://raw.githubusercontent.com/canner/WrenAI/<WRENAI_VERSION_NUMBER>/docker/config.example.yaml
type: pipeline
pipes:
  - name: db_schema_indexing
    embedder: litellm_embedder.openai/text-embedding-v3
    document_store: qdrant
  - name: historical_question_indexing
    embedder: litellm_embedder.openai/text-embedding-v3
    document_store: qdrant
  - name: table_description_indexing
    embedder: litellm_embedder.openai/text-embedding-v3
    document_store: qdrant
  - name: db_schema_retrieval
    llm: litellm_llm.openai/qwen-plus-2025-01-25
    embedder: litellm_embedder.openai/text-embedding-v3
    document_store: qdrant
  - name: historical_question_retrieval
    embedder: litellm_embedder.openai/text-embedding-v3
    document_store: qdrant
  - name: sql_generation
    llm: litellm_llm.openai/qwen-plus-2025-01-25
    engine: wren_ui
  - name: sql_correction
    llm: litellm_llm.openai/qwen-plus-2025-01-25
    engine: wren_ui
  - name: followup_sql_generation
    llm: litellm_llm.openai/qwen-plus-2025-01-25
    engine: wren_ui
  - name: sql_summary
    llm: litellm_llm.openai/qwen-plus-2025-01-25
  - name: sql_answer
    llm: litellm_llm.openai/qwen-plus-2025-01-25
    engine: wren_ui
  - name: sql_breakdown
    llm: litellm_llm.openai/qwen-plus-2025-01-25
    engine: wren_ui
  - name: sql_expansion
    llm: litellm_llm.openai/qwen-plus-2025-01-25
    engine: wren_ui
  - name: sql_explanation
    llm: litellm_llm.openai/qwen-plus-2025-01-25
  - name: semantics_description
    llm: litellm_llm.openai/qwen-plus-2025-01-25
  - name: relationship_recommendation
    llm: litellm_llm.openai/qwen-plus-2025-01-25
    engine: wren_ui
  - name: question_recommendation
    llm: litellm_llm.openai/qwen-plus-2025-01-25
  - name: question_recommendation_db_schema_retrieval
    llm: litellm_llm.openai/qwen-plus-2025-01-25
    embedder: litellm_embedder.openai/text-embedding-v3
    document_store: qdrant
  - name: question_recommendation_sql_generation
    llm: litellm_llm.openai/qwen-plus-2025-01-25
    engine: wren_ui
  - name: chart_generation
    llm: litellm_llm.openai/qwen-plus-2025-01-25
  - name: chart_adjustment
    llm: litellm_llm.openai/qwen-plus-2025-01-25
  - name: intent_classification
    llm: litellm_llm.openai/qwen-plus-2025-01-25
    embedder: litellm_embedder.openai/text-embedding-v3
    document_store: qdrant
  - name: data_assistance
    llm: litellm_llm.openai/qwen-plus-2025-01-25
  - name: sql_pairs_deletion
    document_store: qdrant
    embedder: litellm_embedder.openai/text-embedding-v3
  - name: sql_pairs_indexing
    document_store: qdrant
    embedder: litellm_embedder.openai/text-embedding-v3
  - name: sql_pairs_retrieval
    document_store: qdrant
    embedder: litellm_embedder.openai/text-embedding-v3
    llm: litellm_llm.openai/qwen-plus-2025-01-25
  - name: sql_pairs_preparation
    document_store: qdrant
    embedder: litellm_embedder.openai/text-embedding-v3
  - name: preprocess_sql_data
    llm: litellm_llm.openai/qwen-plus-2025-01-25
  - name: sql_executor
    engine: wren_ui
  - name: sql_question_generation
    llm: litellm_llm.openai/qwen-plus-2025-01-25
  - name: sql_generation_reasoning
    llm: litellm_llm.openai/qwen-plus-2025-01-25
  - name: sql_regeneration
    llm: litellm_llm.openai/qwen-plus-2025-01-25
    engine: wren_ui

---
settings:
  column_indexing_batch_size: 50
  table_retrieval_size: 10
  table_column_retrieval_size: 100
  allow_using_db_schemas_without_pruning: false # if you want to use db schemas without pruning, set this to true. It will be faster
  query_cache_maxsize: 1000
  query_cache_ttl: 3600
  langfuse_host: https://cloud.langfuse.com
  langfuse_enable: true
  logging_level: DEBUG
  development: true
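
As an aside, the query_cache_maxsize and query_cache_ttl settings above describe a bounded TTL cache. A minimal stdlib sketch of that behavior (Wren AI's actual cache implementation may differ):

```python
# Illustrative TTL cache: bounded by maxsize, entries expire after ttl seconds.
# Defaults mirror the settings above; not Wren AI's actual implementation.
import time
from collections import OrderedDict

class TTLCache:
    def __init__(self, maxsize: int = 1000, ttl: float = 3600.0):
        self.maxsize, self.ttl = maxsize, ttl
        self._data: OrderedDict = OrderedDict()  # key -> (expiry, value)

    def set(self, key, value):
        self._data.pop(key, None)                     # refresh insertion order
        self._data[key] = (time.monotonic() + self.ttl, value)
        while len(self._data) > self.maxsize:         # evict oldest entries
            self._data.popitem(last=False)

    def get(self, key, default=None):
        item = self._data.get(key)
        if item is None:
            return default
        expiry, value = item
        if time.monotonic() >= expiry:                # expired: drop and miss
            del self._data[key]
            return default
        return value
```
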
lsky-walt added the bug label on Feb 25, 2025