[BUG] Error: '\nt\nu\np\nl\ne\n_\nd\ne\nl\ni\nm\ni\nt\ne\nr\n' #591

Open
gutama opened this issue Jan 1, 2025 · 1 comment
Labels
bug Something isn't working

Comments


gutama commented Jan 1, 2025

Description

I use Ollama for both the LLM and the embeddings in LightRAG, and all the connection tests pass. When I upload a text file, chunking and embedding generation work, but entity and relationship extraction fails.

The error was:
[GraphRAG] Creating index... This can take a long time.
[GraphRAG] Indexed 0 / 1 documents.
Error: '\nt\nu\np\nl\ne\n_\nd\ne\nl\ni\nm\ni\nt\ne\nr\n'

I don't have any other error information to work with.
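One observation: the error string is exactly the word `tuple_delimiter` with its characters joined by newlines, which is what you get in Python when a string is iterated character by character where a list was expected. A minimal, purely illustrative sketch (the names here are not taken from the kotaemon code):

```python
# Purely illustrative: reproduces the exact error string from the report.
# "tuple_delimiter" is the placeholder name used in GraphRAG-style
# entity-extraction prompts; joining *over the string itself* walks it
# one character at a time.
placeholder = "tuple_delimiter"

garbled = "\n" + "\n".join(placeholder) + "\n"
print(repr(garbled))
# -> '\nt\nu\np\nl\ne\n_\nd\ne\nl\ni\nm\ni\nt\ne\nr\n'
```

This suggests the placeholder name itself, rather than its value, ends up being treated as a sequence of characters somewhere in the prompt-formatting path.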


Reproduction steps

Adding documents to doc store
indexing step took 0.11500287055969238
GraphRAG embedding dim 1024
Indexing GraphRAG with LLM ChatOpenAI(api_key=ollama, base_url=http://localhos..., frequency_penalty=None, logit_bias=None, logprobs=None, max_retries=None, max_retries_=2, max_tokens=None, model=granite3.1-dense, n=1, organization=None, presence_penalty=None, stop=None, temperature=None, timeout=None, tool_choice=None, tools=None, top_logprobs=None, top_p=None) and Embedding OpenAIEmbeddings(api_key=ollama, base_url=http://localhos..., context_length=None, dimensions=None, max_retries=None, max_retries_=2, model=bge-m3, organization=None, timeout=None)...
Chunking documents: 100%|████████████████████████████████████████████████████| 1/1 [00:00<00:00, 10.77doc/s]
Generating embeddings: 100%|███████████████████████████████████████████████| 4/4 [00:15<00:00,  3.76s/batch]
use_quick_index_mode False
reader_mode default
Chunk size: None, chunk overlap: None
Using reader TxtReader()
Got 0 page thumbnails
Adding documents to doc store
indexing step took 0.10607624053955078
GraphRAG embedding dim 768
Indexing GraphRAG with LLM ChatOpenAI(api_key=ollama, base_url=http://localhos..., frequency_penalty=None, logit_bias=None, logprobs=None, max_retries=None, max_retries_=2, max_tokens=None, model=granite3.1-dense, n=1, organization=None, presence_penalty=None, stop=None, temperature=None, timeout=None, tool_choice=None, tools=None, top_logprobs=None, top_p=None) and Embedding OpenAIEmbeddings(api_key=ollama, base_url=http://localhos..., context_length=None, dimensions=None, max_retries=None, max_retries_=2, model=nomic-embed-text, organization=None, timeout=None)...
Chunking documents: 100%|████████████████████████████████████████████████████| 1/1 [00:00<00:00, 11.83doc/s]
Generating embeddings: 100%|███████████████████████████████████████████████| 4/4 [00:11<00:00,  2.92s/batch]
use_quick_index_mode False
reader_mode default
Chunk size: None, chunk overlap: None
Using reader TxtReader()
Got 0 page thumbnails
Adding documents to doc store
indexing step took 0.11184024810791016
GraphRAG embedding dim 768
Indexing GraphRAG with LLM ChatOpenAI(api_key=ollama, base_url=http://localhos..., frequency_penalty=None, logit_bias=None, logprobs=None, max_retries=None, max_retries_=2, max_tokens=None, model=granite3.1-dense, n=1, organization=None, presence_penalty=None, stop=None, temperature=None, timeout=None, tool_choice=None, tools=None, top_logprobs=None, top_p=None) and Embedding OpenAIEmbeddings(api_key=ollama, base_url=http://localhos..., context_length=None, dimensions=None, max_retries=None, max_retries_=2, model=nomic-embed-tex..., organization=None, timeout=None)...
Chunking documents: 100%|████████████████████████████████████████████████████| 1/1 [00:00<00:00, 11.48doc/s]
Generating embeddings: 100%|███████████████████████████████████████████████| 4/4 [00:11<00:00,  2.92s/batch]
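In case it helps reproduce outside the app, here is a quick sanity check of the Ollama endpoint with the same models that appear in the log above. This is a sketch: the base URL `http://localhost:11434/v1` is the usual Ollama OpenAI-compatible endpoint and is an assumption, since the log truncates it.

```python
# Sketch: verify the Ollama OpenAI-compatible endpoint answers with the
# models from the log above. The base_url is assumed (the log truncates it).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

chat = client.chat.completions.create(
    model="granite3.1-dense",
    messages=[{"role": "user", "content": "Reply with the single word OK."}],
)
print(chat.choices[0].message.content)

emb = client.embeddings.create(model="nomic-embed-text", input="hello world")
print(len(emb.data[0].embedding))  # expected 768 per the log
```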

Screenshots

![image](https://github.com/user-attachments/assets/19a684cd-fb7c-432c-a9d8-a86cd353cea3)


![image](https://github.com/user-attachments/assets/7dcd0ba2-4640-4ffe-a783-c2e468bbf80f)

Logs

No response

Browsers

No response

OS

No response

Additional information


gutama added the bug label on Jan 1, 2025

gutama commented Jan 2, 2025

I cleaned up the ktem_app_data folder. Indexing now runs, although it takes a very long time, and it still doesn't extract any relationships.
Is there anything I missed?


Extracting entities from chunks: 100%|████████████████████████████████| 98/98 [3:12:57<00:00, 118.14s/chunk]
Inserting entities: 100%|████████████████████████████████████████████████| 4/4 [00:00<00:00, 210.52entity/s]
Inserting relationships: 0relationship [00:00, ?relationship/s]
2025-01-02T14:19:26.049446Z [warning ] Didn't extract any relationships asctime=2025-01-02 21:19:26,045 lineno=427 message=Didn't extract any relationships module=lightrag
Generating embeddings: 100%|███████████████████████████████████████████████| 1/1 [00:13<00:00, 13.96s/batch]
2025-01-02T14:19:40.017763Z [warning ] You insert an empty data to vector DB asctime=2025-01-02 21:19:40,017 lineno=85 message=You insert an empty data to vector DB module=lightrag
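Since the warning says no relationships were extracted at all, it may be worth checking whether granite3.1-dense actually emits the delimiter-formatted records the parser splits on. A rough check follows; the delimiter values are the usual GraphRAG-style defaults and are an assumption here, not taken from this repo's config.

```python
# Rough check: does the model's raw output contain the record/tuple
# delimiters the extraction parser splits on? Delimiter values are the
# usual GraphRAG-style defaults and are an assumption here.
from openai import OpenAI

TUPLE_DELIM = "<|>"    # assumed default tuple_delimiter
RECORD_DELIM = "##"    # assumed default record_delimiter

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

prompt = (
    "Extract entities and relationships from the text below. "
    f'Format each record as ("entity"{TUPLE_DELIM}NAME{TUPLE_DELIM}TYPE{TUPLE_DELIM}DESCRIPTION) '
    f'or ("relationship"{TUPLE_DELIM}SOURCE{TUPLE_DELIM}TARGET{TUPLE_DELIM}DESCRIPTION{TUPLE_DELIM}STRENGTH), '
    f"and separate records with {RECORD_DELIM}.\n\n"
    "Text: Alice works for Acme Corp in Berlin."
)

out = client.chat.completions.create(
    model="granite3.1-dense",
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

print(out)
print("records:", out.count(RECORD_DELIM),
      "| relationship lines:", out.count('"relationship"'))
```

If the output contains no `"relationship"` records at all, the model is probably not following the extraction format, which would explain the empty insert into the vector DB.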
