
Fix: 429 Too Many Requests #45

Open · wants to merge 1 commit into base: main

Conversation


@FortiShield commented Mar 1, 2025

Addresses issue: #

Changes proposed in this pull request:

  • Change 1
  • Change 2
  • Change 3

Summary by Sourcery

Improves the crawler's resilience to "429 Too Many Requests" errors by reducing the minimum and maximum retry wait times and using the default exponential backoff strategy. It also configures a retryable HTTP client with exponential backoff for handling individual HTTP requests.
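
For concreteness, here is a minimal sketch of the kind of configuration described above, assuming the hashicorp/go-retryablehttp client used in the diffs further down; the package name, function name, and exact values are illustrative rather than taken verbatim from this PR.

package crawler

import (
	"time"

	"github.com/hashicorp/go-retryablehttp"
)

// newRetryableClient builds a client that retries transient failures such as
// "429 Too Many Requests" with exponential backoff between attempts.
func newRetryableClient() *retryablehttp.Client {
	client := retryablehttp.NewClient()
	client.RetryMax = 5                           // give up after 5 retries
	client.RetryWaitMin = 1 * time.Second         // lower bound on the wait between attempts
	client.RetryWaitMax = 30 * time.Second        // upper bound on the exponential growth
	client.Backoff = retryablehttp.DefaultBackoff // default exponential backoff
	return client
}

Note that retryablehttp.DefaultBackoff also parses a Retry-After header on 429 responses and waits for the advertised duration, which is usually what you want when the remote server is rate limiting.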

Signed-off-by: fortishield <[email protected]>

gitstream-cm bot commented Mar 1, 2025

🚨 gitStream Monthly Automation Limit Reached 🚨

Your organization has exceeded the number of pull requests allowed for automation with gitStream.
Monthly PRs automated: 448/250

To continue automating your PR workflows and unlock additional features, please contact LinearB.


sourcery-ai bot commented Mar 1, 2025

Reviewer's Guide by Sourcery

This pull request enhances the crawler's resilience to '429 Too Many Requests' errors by implementing more aggressive retry logic. It reduces the retry wait times for the main crawler client and introduces a dedicated retryable HTTP client with exponential backoff for individual GET requests.

Updated class diagram for Crawler

classDiagram
  class Crawler {
    -http: retryablehttp.Client
    +NewCrawler(opt Option) Crawler
    +httpGet(ctx context.Context, url string) (*http.Response, error)
  }

  class retryablehttp.Client {
    -RetryMax: int
    -RetryWaitMin: Duration
    -RetryWaitMax: Duration
    -Backoff: Backoff
    +Do(req *http.Request) (*http.Response, error)
  }

  Crawler -- retryablehttp.Client : uses
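
Rendered as Go, the diagram corresponds roughly to the skeleton below. Only the names that appear in the diagram (Crawler, http, Option, NewCrawler, httpGet) come from the review; the package name, field type, and concrete values are assumptions, and httpGet itself is sketched later where the review discusses it.

package crawler

import (
	"time"

	"github.com/hashicorp/go-retryablehttp"
)

// Option carries crawler configuration; its fields are omitted in this sketch.
type Option struct{}

// Crawler holds a single retryable HTTP client shared by its requests.
type Crawler struct {
	http *retryablehttp.Client
}

// NewCrawler configures the shared client once, at construction time.
func NewCrawler(opt Option) Crawler {
	client := retryablehttp.NewClient()
	client.RetryMax = 5
	client.RetryWaitMin = 1 * time.Second
	client.RetryWaitMax = 30 * time.Second
	client.Backoff = retryablehttp.DefaultBackoff
	return Crawler{http: client}
}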

File-Level Changes

Change: Modified retry logic for HTTP requests to handle 429 errors more effectively.
Details:
  • Reduced the minimum and maximum retry wait times in the main crawler client.
  • Changed the backoff strategy in the main crawler client to the default exponential backoff.
  • Introduced a new retryable HTTP client with exponential backoff specifically for individual HTTP GET requests (sketched below).
Files: pkg/crawler/crawler.go
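
Putting those pieces together, the httpGet change appears to look roughly like the reconstruction below, based on this table and the diff fragments quoted later in the review. Treat it as a sketch, not the literal contents of pkg/crawler/crawler.go; the request-creation error message in particular is invented for illustration.

package crawler

import (
	"context"
	"net/http"
	"time"

	"github.com/hashicorp/go-retryablehttp"
	"golang.org/x/xerrors"
)

// Crawler as sketched above; only the shared client field is shown.
type Crawler struct {
	http *retryablehttp.Client
}

// httpGet as introduced by this PR (reconstructed): each call builds its own
// retryable client with exponential backoff, ignoring the shared c.http.
func (c Crawler) httpGet(ctx context.Context, url string) (*http.Response, error) {
	// Set up exponential backoff
	client := retryablehttp.NewClient()
	client.RetryMax = 5
	client.RetryWaitMin = 1 * time.Second
	client.RetryWaitMax = 30 * time.Second
	client.Backoff = retryablehttp.DefaultBackoff

	req, err := retryablehttp.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return nil, xerrors.Errorf("unable to create request (%s): %w", url, err) // message is illustrative
	}

	resp, err := client.Do(req)
	if err != nil {
		return nil, xerrors.Errorf("http error (%s): %w", url, err)
	}
	return resp, nil
}

The review comments below flag exactly this per-request client as the main concern.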

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!
  • Generate a plan of action for an issue: Comment @sourcery-ai plan on
    an issue to generate a plan of action for it.

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.


PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 2 🔵🔵⚪⚪⚪
🧪 No relevant tests
🔒 No security concerns identified
⚡ Recommended focus areas for review

Duplicate Client

Creating a new retryable HTTP client for each request in httpGet() is inefficient and may lead to resource issues. Consider reusing the existing client instance that was created in NewCrawler().

client := retryablehttp.NewClient()
client.RetryMax = 5
client.RetryWaitMin = 1 * time.Second
client.RetryWaitMax = 30 * time.Second
client.Backoff = retryablehttp.DefaultBackoff
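
A minimal sketch of the reuse being suggested, assuming c.http is the client configured in NewCrawler and keeping the error wrapping shown in the diff further down; the request-creation message is again illustrative.

package crawler

import (
	"context"
	"net/http"

	"github.com/hashicorp/go-retryablehttp"
	"golang.org/x/xerrors"
)

// Crawler as sketched earlier; c.http is configured once in NewCrawler.
type Crawler struct {
	http *retryablehttp.Client
}

// httpGet reuses the shared client, so all requests share one connection pool
// and one set of retry settings.
func (c Crawler) httpGet(ctx context.Context, url string) (*http.Response, error) {
	req, err := retryablehttp.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return nil, xerrors.Errorf("unable to create request (%s): %w", url, err) // message is illustrative
	}
	resp, err := c.http.Do(req)
	if err != nil {
		return nil, xerrors.Errorf("http error (%s): %w", url, err)
	}
	return resp, nil
}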
Inconsistent Settings

The retry settings (RetryMax, RetryWaitMin, RetryWaitMax) are configured differently between NewCrawler and httpGet methods. This inconsistency could lead to unpredictable retry behavior.

client.RetryMax = 5
client.RetryWaitMin = 1 * time.Second
client.RetryWaitMax = 30 * time.Second
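
One way to avoid that drift, as a sketch: define the retry parameters in a single place (the constant names here are illustrative, not from the PR) and reference them wherever a client is configured.

package crawler

import "time"

// Single source of truth for retry behavior; both NewCrawler and any other
// client configuration would read these instead of hard-coding their own values.
const (
	retryMax     = 5
	retryWaitMin = 1 * time.Second
	retryWaitMax = 30 * time.Second
)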


@sourcery-ai bot left a comment


Hey @FortiShield - I've reviewed your changes - here's some feedback:

Overall Comments:

  • The retry logic seems to be duplicated; consider consolidating it into a single place.
  • Consider adding a comment explaining why the retry parameters were changed.
Here's what I looked at during the review
  • 🟢 General issues: all looks good
  • 🟢 Security: all looks good
  • 🟢 Testing: all looks good
  • 🟢 Complexity: all looks good
  • 🟢 Documentation: all looks good

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

PR Code Suggestions ✨

Explore these optional code suggestions:

Category: Possible issue
Avoid creating new client per request

Creating a new HTTP client for each request is inefficient and may lead to
resource exhaustion. Use the existing client c.http that's already configured
with retry logic.

pkg/crawler/crawler.go [434-441]

-// Set up exponential backoff
-client := retryablehttp.NewClient()
-client.RetryMax = 5
-client.RetryWaitMin = 1 * time.Second
-client.RetryWaitMax = 30 * time.Second
-client.Backoff = retryablehttp.DefaultBackoff
+resp, err := c.http.Do(req)
 
-resp, err := client.Do(req)
-
  • Apply this suggestion
Suggestion importance[1-10]: 9


Why: Creating a new HTTP client for each request is a serious performance issue that could lead to resource exhaustion. Using the existing client is the correct approach as it's already configured with proper retry logic.

Impact: High
Fix misplaced error check

The error check is placed after the removed line, making it check an undefined
error. Move the error check right after the Do call.

pkg/crawler/crawler.go [441-444]

-resp, err := client.Do(req)
+resp, err := c.http.Do(req)
 if err != nil {
     return nil, xerrors.Errorf("http error (%s): %w", url, err)
 }
  • Apply this suggestion
Suggestion importance[1-10]: 7


Why: The error check is correctly placed after the Do() call, but the suggestion to use c.http instead of the local client is valid and important for maintaining proper error handling with the shared client.

Impact: Medium
