Rewriting the queue #78
LGTM, comments are mostly suggestions.
Good job Coco, it looks really nice 😊
(I'm already crying blood from thinking I'll need to merge this into my stale PR.)
internal/pkg/queue/dequeue.go
	}

	// If we've checked all hosts and found no items, loop back to wait again
	return q.Dequeue()
you really sure you wanna go down the recursive path? what happens if a function calls Dequeue() while the queue stays empty? it could recurse until the stack overflows.
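One non-recursive shape this could take, sketched with hypothetical names rather than the PR's actual `PersistentGroupedQueue`: turning the tail call into a loop bounds the stack even when the queue stays empty.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// ErrEmpty is returned when no item could be dequeued in time.
var ErrEmpty = errors.New("queue is empty")

// queue is a minimal illustrative stand-in for the real queue type.
type queue struct {
	mu    sync.Mutex
	items []string
}

// dequeueIterative replaces the `return q.Dequeue()` tail recursion with a
// loop. Go does not optimize tail calls, so recursing on an empty queue
// grows the stack; a loop does not. The real queue would block on a
// condition variable or channel instead of retrying a fixed number of times.
func (q *queue) dequeueIterative(maxAttempts int) (string, error) {
	for attempt := 0; attempt < maxAttempts; attempt++ {
		q.mu.Lock()
		if len(q.items) > 0 {
			item := q.items[0]
			q.items = q.items[1:]
			q.mu.Unlock()
			return item, nil
		}
		q.mu.Unlock()
	}
	return "", ErrEmpty
}

func main() {
	q := &queue{items: []string{"a", "b"}}
	item, err := q.dequeueIterative(10)
	fmt.Println(item, err)
}
```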
aligned queue structures to save space
Prometheus was always expected to be there, but the API is not always set when crawls are run. This resolves that issue.
Made handover optional. Added lastAction on the API worker state to help troubleshoot. Made the default HQ batches smaller. Made handover hold the whole batch before enqueuing on disk. Handover now has an automatic closing mechanism that adapts to activity, so the handover gets closed and drained to disk if no workers need it.
for enabling Firefox's JSON preview
uses fewer fsync() calls
time=xxxxxxxxx level=WARN msg="[WORKERS] Timeout reached. 2 workers still running"
Workers get stuck in the second inner for loop.
Improve WAL concurrency performance by @yzqzss and make it optional
fix: hanging on indexManager.Close()
@yzqzss I think
yeap, I think
finally!
@@ -37,6 +27,8 @@ type PersistentGroupedQueue struct {
	mutex      sync.RWMutex
	statsMutex sync.RWMutex
	closed     bool

	logger *slog.Logger
The logger has not been initialized 😱
The goal of this work is to replace the current queuing mechanism, which uses github.com/beeker1121/goque, with a custom piece of code that exactly fits our needs and over which we will have much better control.

Almost all queuing components have been rewritten except the host crawl rate limiter; I commented out the relevant parts with // TODO: re-implement host limitation. @NGTmeaty, if you can look into that part as you were the original author, to make it fit the current design, that would be nice. Having it as a separate package would also be great.

I also separated the queue and seencheck packages, instead of having one big "frontier" package. FWIW, I'm also moving away from calling it "frontier"; it has only caused misunderstanding among the people I explained it to. (It was originally named that way to follow Heritrix3's path, but let's clean things up and not use that term anymore.)

Finally, I added a big list of statistics and info about the queue that are available through the API at /queue.

Suggestions & code review are welcome and much needed. :)