chore: implement retry for plan LLM call #365
Open
muralov wants to merge 5 commits into kyma-project:main from muralov:retry-all-llm-calls
+1,231 −131
Conversation
Note(s) for PR Author:
Note(s) for PR Reviewer(s):
muralov force-pushed the retry-all-llm-calls branch 2 times, most recently from 8335b5a to fda4de8 on February 4, 2025 10:02
muralov added the run-integration-test and evaluation requested labels on Feb 4, 2025
muralov requested review from tanweersalah and removed the review request for friedrichwilken on February 4, 2025 13:43
muralov force-pushed the retry-all-llm-calls branch 2 times, most recently from c5dfac9 to d3d779a on February 5, 2025 16:11
- Add tenacity library for retry logic in planner invocation
- Return error message in error field
- Create custom SubtasksMissingError exception for handling missing subtasks
- Update supervisor agent to use retry decorator with exponential backoff (see the sketch below)
- Implement table-driven test for _invoke_planner retry behavior
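A minimal sketch of the retry pattern this commit describes, using tenacity. The method name `_invoke_planner` and the exception name come from the commit message; the attempt count, backoff parameters, and the `_planner_chain` attribute are assumptions, not the PR's actual values:

```python
# Hypothetical sketch of retrying the planner LLM call with tenacity.
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential


class SubtasksMissingError(Exception):
    """Raised when the planner LLM returns a plan without subtasks."""


class SupervisorAgent:
    @retry(
        retry=retry_if_exception_type(SubtasksMissingError),
        stop=stop_after_attempt(3),                          # assumed attempt count
        wait=wait_exponential(multiplier=1, min=1, max=10),  # exponential backoff
        reraise=True,
    )
    async def _invoke_planner(self, state):
        plan = await self._planner_chain.ainvoke(state)  # _planner_chain is assumed
        if not plan.subtasks:
            # Raising the custom exception triggers a retry instead of
            # propagating an empty plan to the supervisor.
            raise SubtasksMissingError("planner returned no subtasks")
        return plan
```

With `reraise=True`, the last `SubtasksMissingError` surfaces after the retry budget is exhausted, which is where its message can be written into the error field mentioned above.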
…logic
- Create new utils/chain.py module with ainvoke_chain function
- Implement retry mechanism for chain invocations using tenacity
- Replace direct chain.ainvoke calls across multiple modules with ainvoke_chain (see the sketch below)
- Add logging and error handling for chain invocations
- Standardize chain invocation with configurable retry strategy
- Add unit test for shared chain invocation function
- Update test assertions for chain invocation with config parameter
- Remove redundant test cases in supervisor agent
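A sketch of what such a shared `ainvoke_chain` helper might look like; the function name and module come from the commit message, while the signature, retry budget, and backoff values are assumptions:

```python
# Hypothetical utils/chain.py: one retry-wrapped entry point for all chain calls.
import logging
from typing import Any

from tenacity import retry, stop_after_attempt, wait_exponential

logger = logging.getLogger(__name__)


@retry(
    stop=stop_after_attempt(3),                          # assumed retry budget
    wait=wait_exponential(multiplier=1, min=1, max=10),  # assumed backoff
    reraise=True,
)
async def ainvoke_chain(chain, inputs: Any, config: dict | None = None) -> Any:
    """Invoke a chain, retrying transient failures with exponential backoff."""
    logger.info("Invoking chain %s", type(chain).__name__)
    return await chain.ainvoke(inputs, config=config)
```

Call sites would then change from `await chain.ainvoke(state)` to `await ainvoke_chain(chain, state)`, which is what standardizing the invocation path across modules amounts to.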
muralov force-pushed the retry-all-llm-calls branch from d3d779a to e835699 on February 5, 2025 16:38
- Update ainvoke_chain to use RunnableSequence and RunnableConfig (see the sketch below)
- Modify response handling in supervisor and reranker to use model instantiation
- Adjust type hints and return types for chain invocation utility
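A sketch of the tightened signature this commit describes, using the `RunnableSequence` and `RunnableConfig` types from langchain_core; the retry decorator from the earlier sketch is elided, and the exact parameter types are assumptions:

```python
# Hypothetical revised signature for ainvoke_chain with LangChain types.
from typing import Any

from langchain_core.runnables import RunnableConfig, RunnableSequence


async def ainvoke_chain(
    chain: RunnableSequence,
    inputs: dict[str, Any] | Any,
    config: RunnableConfig | None = None,
) -> Any:
    """Invoke a LangChain runnable; retry policy applied via decorator upstream."""
    return await chain.ainvoke(inputs, config=config)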
muralov force-pushed the retry-all-llm-calls branch from e835699 to 5a682c5 on February 6, 2025 05:44
Labels
cla: yes, evaluation requested, run-integration-test, size/XXL
Description
Retry if the LLM invocation fails
Changes proposed in this pull request:
Related issue(s)
#354