generate_with_langchain_docs is broken #764

Open · rolandgvc opened this issue Mar 16, 2024 · 23 comments
Labels: bug (Something isn't working)
@rolandgvc commented Mar 16, 2024

[x] I have checked the documentation and related resources and couldn't resolve my bug.

Describe the bug
Running generate_with_langchain_docs gets stuck, showing:

Filename and doc_id are the same for all nodes.
Generating:   0%|                                                                                       | 0/1 [00:00<?, ?it/s]

Ragas version: 0.1.4
Python version: 3.9

Code to Reproduce

import os
import re
from typing import List, Dict, Any
import pandas as pd
from datasets import load_dataset
from langchain.docstore.document import Document
from ragas.testset.generator import TestsetGenerator
from ragas.testset.evolutions import simple, reasoning, multi_context
from langchain_openai import ChatOpenAI, OpenAIEmbeddings


class SyntheticDatasetGenerator:
    def __init__(self, min_content_length: int = 1000) -> None:
        self.min_content_length = min_content_length

    def run(self, data: pd.DataFrame) -> pd.DataFrame:
        filtered_emails = self._filter_and_process_emails(data)

        documents = [
            Document(
                page_content=email["body"],
                metadata={
                    "date": email["date"],
                    "from": email["from"],
                },
            )
            for email in filtered_emails
        ]

        return self._generate_synthetic_dataset(documents)

    def _extract_email_details(self, email_text: str) -> Dict[str, str]:
        # Regular expression patterns for each field
        patterns = {
            "date": r"Date: (.+)",
            "from": r"From: (.+)",
            "to": r"To: (.+)",
        }

        result = {}
        for field, pattern in patterns.items():
            match = re.search(pattern, email_text)
            if match:
                result[field] = match.group(1).strip()

        # Everything after "Subject:" is considered as the body
        body_pattern = r"Subject:.*(?:\n|\r\n?)(.*(?:\n|\r\n?).*)"
        body_match = re.search(body_pattern, email_text, re.DOTALL)
        if body_match:
            result["body"] = body_match.group(1).strip()
        else:
            # log emails whose body could not be parsed
            print(email_text)

        return result

    def _filter_and_process_emails(self, data: pd.DataFrame) -> List[Dict[str, str]]:
        filtered_emails = []
        for _, email in data.iterrows():
            if len(email.text) > self.min_content_length:
                details = self._extract_email_details(email.text)
                filtered_emails.append(details)

        return filtered_emails

    def _generate_synthetic_dataset(self, documents: List[Document]) -> pd.DataFrame:
        generator_llm = ChatOpenAI(model_name="gpt-3.5-turbo")
        critic_llm = ChatOpenAI(model_name="gpt-3.5-turbo")
        embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
        generator = TestsetGenerator.from_langchain(
            generator_llm, critic_llm, embeddings, chunk_size=4096
        )

        testset = generator.generate_with_langchain_docs(
            documents,
            test_size=1,
            distributions={simple: 0.5, reasoning: 0.1, multi_context: 0.4},
            raise_exceptions=True,
        )

        return testset.to_pandas()


if __name__ == "__main__":
    from dotenv import load_dotenv

    # load_dotenv() reads OPENAI_API_KEY from .env into os.environ;
    # re-assigning it via os.getenv is redundant (and raises if the key is missing)
    load_dotenv()

    data = (
        load_dataset("snoop2head/enron_aeslc_emails")["train"]
        .select(range(100))
        .to_pandas()
    )

    df = SyntheticDatasetGenerator().run(data)

Error trace

Filename and doc_id are the same for all nodes.
Generating:   0%|                                                                                       | 0/1 [00:00<?, ?it/s]

Expected behavior
A synthetic dataset should be created.

Additional context
I'm trying to generate a synthetic dataset of questions based on enron emails.

rolandgvc added the bug label on Mar 16, 2024
@jayshah5696

Same issue:

File "/Users/jshah/anaconda3/envs/hf/lib/python3.10/site-packages/ragas/llms/base.py", line 177, in agenerate_text
result = await self.langchain_llm.agenerate_prompt(
AttributeError: 'LangchainLLMWrapper' object has no attribute 'agenerate_prompt'. Did you mean: 'agenerate_text'?

@shahules786 (Member) commented Mar 16, 2024

Hey @jayshah5696, this is a different issue; yours is addressed in #762.

@shahules786 (Member)

Hey @rolandgvc, do you face this issue often? If not, can you kill the run and try again?

@floatcyc

Same here with the Azure API (Python 3.10 and ragas 0.1.4).
I'm following the manual (https://docs.ragas.io/en/latest/howtos/customisations/azure-openai.html#test-set-generation) with two PDF files located in the papers folder, and the console remains stuck on "Generating: 0%| ....".
If I add the extra wrapping steps "azure_model = LangchainLLMWrapper(azure_model)" and "azure_embeddings = LangchainEmbeddingsWrapper(azure_embeddings)", which I believe should not be included, I instead get the error "AttributeError: 'LangchainLLMWrapper' object has no attribute 'agenerate_prompt'. Did you mean: 'agenerate_text'?".

@shahules786 (Member) commented Mar 17, 2024

@floatcyc as mentioned, refer to #762 and don't wrap the Azure model with LangchainLLMWrapper;
just follow the steps exactly as shown here: https://docs.ragas.io/en/stable/howtos/customisations/azure-openai.html#test-set-generation
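
In other words, pass the raw LangChain models straight in, along these lines (endpoint, deployment names, and API version are placeholders):

from langchain_openai import AzureChatOpenAI, AzureOpenAIEmbeddings
from ragas.testset.generator import TestsetGenerator

# placeholders -- substitute your own Azure resource and deployments
azure_model = AzureChatOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    azure_deployment="<your-chat-deployment>",
    openai_api_version="2023-05-15",
)
azure_embeddings = AzureOpenAIEmbeddings(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    azure_deployment="<your-embedding-deployment>",
    openai_api_version="2023-05-15",
)

# from_langchain wraps these internally; wrapping them yourself in
# LangchainLLMWrapper is what triggers the 'agenerate_prompt' AttributeError
generator = TestsetGenerator.from_langchain(azure_model, azure_model, azure_embeddings)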

@FiliRezGelly commented Mar 18, 2024

I have the same issue with ".py" files...

Ragas version: 0.1.4
Python version: 3.12

Test.py:

from langchain_community.document_loaders import TextLoader
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from ragas.testset.generator import TestsetGenerator
from ragas.testset.evolutions import simple, reasoning, multi_context

loader = TextLoader("./NYC/NYC.txt", encoding='utf-8')
embedding = OpenAIEmbeddings()

llm = ChatOpenAI(temperature=0, model="gpt-4-0613")

generator = TestsetGenerator.from_langchain(
    generator_llm=llm,
    critic_llm=llm,
    embeddings=embedding
)

distributions = {
    simple: 0.5,
    multi_context: 0.4,
    reasoning: 0.1
}

documents = loader.load()
for document in documents:
    # set 'filename' metadata (as the docs suggest) so the docstore doesn't fall
    # back to doc_id ("Filename and doc_id are the same for all nodes")
    document.metadata['filename'] = document.metadata['source']

testset = generator.generate_with_langchain_docs(documents, 10, distributions)

What is strange to me is that if I run the same code above in a ".ipynb" file, it works like a charm.

This issue happens every time I run the script in a .py file, never in a .ipynb file.
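
Since a notebook already has a running event loop and a plain script does not, I wonder if this is asyncio-related. A shot in the dark that could be tried at the top of the .py script (untested; nest_asyncio is a separate package):

import nest_asyncio
nest_asyncio.apply()  # patch asyncio to tolerate nested event loops -- purely a guess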
@shahules786

@cpatrickalves

I'm also having the same issue with Azure OpenAI.

I've followed the manual (https://docs.ragas.io/en/latest/howtos/customisations/azure-openai.html#test-set-generation).

Keep getting:

Generating:   0%|

I've enabled with_debugging_logs=True on generate_with_langchain_docs and got this:

[ragas.testset.filters.DEBUG] node filter: {'score': 4.0}
[ragas.testset.evolutions.INFO] retrying evolution: 0 times
[ragas.testset.filters.DEBUG] node filter: {'score': 1.0}
[ragas.testset.evolutions.INFO] retrying evolution: 0 times
[ragas.testset.filters.DEBUG] node filter: {'score': 3.0}
[ragas.testset.evolutions.INFO] retrying evolution: 0 times
[ragas.testset.filters.DEBUG] node filter: {'score': 0.0}
[ragas.testset.evolutions.INFO] retrying evolution: 0 times
... (the same two lines repeat indefinitely with scores between 0.0 and 4.0)

It keeps going like this forever...

@ErikUckert

Same issue here.
Can also confirm that the same code works in a .ipynb.

Filename and doc_id are the same for all nodes.

Generating: 80%|████████████████████████████████████████████████████████▊ | 8/10 [00:08<00:01, 1.39it/s]

It always gets stuck at 80%.
Name: ragas
Version: 0.1.5
Name: langchain
Version: 0.1.13

@AlvinAi96

> I'm also having the same issue with Azure OpenAI. [...] It keeps going like this forever...

I got the same issue. Any suggestions? @shahules786

@ErikUckert

I tried packing the script into a FastAPI app and running it with uvicorn.
With the standard server it also got stuck, so I tried increasing the number of workers.
Running my app with
uvicorn app:app --workers 4

works like a charm. Maybe this is also helpful for you, since a .ipynb is not useful if you want to integrate this somewhere.
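
Roughly what mine looks like, stripped down (the module layout and the generate_testset() helper are made up for illustration; the helper just wraps the TestsetGenerator call shown earlier in this thread):

# app.py -- minimal sketch of the FastAPI wrapper
from fastapi import FastAPI

from my_generator import generate_testset  # hypothetical helper around TestsetGenerator

app = FastAPI()

@app.post("/generate")
def generate():
    # generate_with_langchain_docs runs inside one of the uvicorn worker processes
    testset_df = generate_testset()
    return testset_df.to_dict(orient="records")

# started with: uvicorn app:app --workers 4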

@AlvinAi96

> I tried packing the script into a FastAPI app and running it with uvicorn. [...] works like a charm.

I also ran into the same problem, in both .ipynb and .py.

@a868111817

Same issue...
It always gets stuck at some point.

@JuliGTV commented Mar 30, 2024

Same problem. I was just following the instructions from the quickstart and it got stuck generating at 0%; it also ran up a big OpenAI bill, which isn't much fun.

@shahules786 (Member)

Hey @JuliGTV, sorry for the trouble. We are aware of this issue, and we have in fact trained a smaller model for you to use for free. Please be patient until we can integrate it with ragas.

@JuliGTV commented Apr 1, 2024

No worries.

Fine-tuning a small model for this use case is a great idea.

Although I would still like to understand better what went wrong, and whether I could have done things differently.

I was just trying to follow the quickstart guide, and I kept getting OpenAI rate-limit errors, mostly during the embedding stage.
I tried messing around with the run config but nothing seemed to solve it. When I looked at LangSmith, at least half of all the requests made to OpenAI were failing.
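
For reference, this is the kind of thing I was experimenting with (values are just what I tried; check your ragas version for the exact RunConfig fields):

from ragas.run_config import RunConfig

# illustrative values only -- none of these fixed the hang for me
run_config = RunConfig(
    timeout=60,      # seconds per LLM call
    max_retries=10,  # retries on rate-limit errors
    max_wait=60,     # max backoff between retries
    max_workers=4,   # fewer concurrent calls to stay under the rate limit
)

testset = generator.generate_with_langchain_docs(
    documents, test_size=10, distributions=distributions, run_config=run_config
)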
Then I tried with a smaller document and the following code:

from dotenv import load_dotenv

load_dotenv()

from langchain_community.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter


loader = PyPDFLoader("mini_uth.pdf")
docs = loader.load()


text_splitter = RecursiveCharacterTextSplitter(
    chunk_size = 250,
    chunk_overlap = 40,
    length_function = len
)

documents = text_splitter.split_documents(docs)


from ragas.testset.generator import TestsetGenerator
from ragas.testset.evolutions import simple, reasoning, multi_context
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# generator with openai models
generator_llm = ChatOpenAI(model="gpt-3.5-turbo-16k")
critic_llm = ChatOpenAI(model="gpt-4-1106-preview")
embeddings = OpenAIEmbeddings()

generator = TestsetGenerator.from_langchain(
    generator_llm,
    critic_llm,
    embeddings
)

# generate testset
testset = generator.generate_with_langchain_docs(documents, test_size=10, distributions={simple: 0.5, reasoning: 0.25, multi_context: 0.25})

This time it completed the embedding and then got stuck at 0% in the generation stage.
Additionally, at some point it hit my OpenAI spending limit, and it seems the error this triggers is not recognised by the run config, so it just keeps making failed calls forever (which OpenAI apparently still charges you for!).

Afterwards I also tried it with just a single chunk of a small document, and it still got stuck at 0% generation.

@LarsAC commented Apr 5, 2024

Any progress? I also got a couple of runs in a .ipynb today that got stuck at 0% or 80% complete.

I finally trimmed my document set down to two documents and managed to generate 3 test cases in 13 minutes. Something seems to go wrong under the hood, I assume.

@Kevin-JiXu

I have the same issue: using AzureOpenAI, it gets stuck at 90% while generating.

@zhuweiji commented Apr 17, 2024

This issue is also discussed in #662.

It is reproducible across various machines and LLM model types.

As others have mentioned, the error seems to be threading-related. Here is a stack trace from when the generation is stuck:

---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
/tmp/ipykernel_30263/1853206858.py in <cell line: 1>()
----> 1 testset = generator.generate_with_langchain_docs(docs[:5],
      2                                                  test_size=10,
      3                                                  distributions={simple: 0.5, reasoning: 0.4, multi_context: 0.1},
      4                                                 with_debugging_logs=True)

.../python3.8/site-packages/ragas/testset/generator.py in generate_with_langchain_docs(self, documents, test_size, distributions, with_debugging_logs, is_async, raise_exceptions, run_config)
    173         distributions = distributions or {}
    174         # chunk documents and add to docstore
--> 175         self.docstore.add_documents(
    176             [Document.from_langchain_document(doc) for doc in documents]
    177         )

.../python3.8/site-packages/ragas/testset/docstore.py in add_documents(self, docs, show_progress)
    213             for d in self.splitter.transform_documents(docs)
    214         ]
--> 215         self.add_nodes(nodes, show_progress=show_progress)
    216 
    217     def add_nodes(self, nodes: t.Sequence[Node], show_progress=True):

.../python3.8/site-packages/ragas/testset/docstore.py in add_nodes(self, nodes, show_progress)
    250                 result_idx += 1
    251 
--> 252         results = executor.results()
    253         if not results:
    254             raise ExceptionInRunner()

.../python3.8/site-packages/ragas/executor.py in results(self)
    130         executor_job.start()
    131         try:
--> 132             executor_job.join()
    133         finally:
    134             ...

.../python3.8/threading.py in join(self, timeout)
   1009 
   1010         if timeout is None:
-> 1011             self._wait_for_tstate_lock()
   1012         else:
   1013             # the behavior of a negative timeout isn't documented, but

.../python3.8/threading.py in _wait_for_tstate_lock(self, block, timeout)
   1025         if lock is None:  # already determined that the C code is done
   1026             assert self._is_stopped
-> 1027         elif lock.acquire(block, timeout):
   1028             lock.release()
   1029             self._stop()

KeyboardInterrupt: 
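
(Side note: to see where it hangs without hitting Ctrl-C, the stdlib faulthandler module can periodically dump every thread's stack while the run is stuck:)

import faulthandler
# print all thread stacks to stderr every 30 seconds until cancelled
faulthandler.dump_traceback_later(30, repeat=True)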

@epreisz commented Apr 20, 2024

Adding the parameter is_async=False worked for me on 0.1.7:

generator.generate_with_langchain_docs(documents, test_size=10, distributions={simple: 0.5, reasoning: 0.25, multi_context: 0.25}, is_async=False)

Edit: Actually, this was a red herring. It seems the key to getting this to work was running in debug and stepping through some of the code, which I presume somehow prevents the deadlock.

@adhsay commented May 7, 2024

Any help on this? For me, even after adding is_async=False, it is stuck at Generating: 0%|. It would be helpful to get a solution to this. Thanks in advance.

@FelipePKest

I managed to make this work using OpenAI's gpt-3.5-turbo-16k. However, I'm trying to create the dataset using Llama3 running on LMStudio, and I'm getting the same stuck error. Any progress on this?
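
For context, I'm pointing ChatOpenAI at LMStudio's OpenAI-compatible local server, roughly like this (the port and model name reflect my local setup and may differ):

from langchain_openai import ChatOpenAI

# LMStudio's local OpenAI-compatible endpoint; the api_key value is ignored locally
llm = ChatOpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",
    model="llama-3-8b-instruct",  # whatever model name LMStudio reports
)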

@matheusft (Contributor) commented Jun 20, 2024

Also having the same problem. The code gets stuck at generate_with_langchain_docs:

Filename and doc_id are the same for all nodes.
Generating:   0%|                                                      | 0/10 [03:41<?, ?it/s]

Python 3.11.1
Ragas 0.1.9
Langchain 0.2.5

from typing import List
from langchain_core.documents.base import Document
from langchain_google_vertexai import VertexAI, VertexAIEmbeddings
from ragas.testset.generator import TestsetGenerator
from ragas.testset.evolutions import simple, reasoning, multi_context


def create_ragas_rag_benchmarking_dataset(
    llm_generator_model: VertexAI,
    llm_critic_model: VertexAI,
    embeddings_model: VertexAIEmbeddings,
    docs: List[Document],
):

    generator = TestsetGenerator.from_langchain(
        generator_llm=llm_generator_model,
        critic_llm=llm_critic_model,
        embeddings=embeddings_model
    )

    # generate testset
    testset = generator.generate_with_langchain_docs(
        documents=docs,
        test_size=10,
        with_debugging_logs=True,
        is_async=False,
        distributions={
            simple: 0.5,
            reasoning: 0.25,
            multi_context: 0.25
        }
    )

    return testset

@zongzi531

> I'm also having the same issue with Azure OpenAI. [...] It keeps going like this forever...

Has anyone fixed this problem?
