Should we add a FastAPI module so the crawler can expose a REST API for launching a crawl? Possibly also for managing scope and seeds files? This could also expose crawl stats, etc. The overall goal would be to avoid forcing services that interact with the crawler to go through Kafka, at least for crawl launches.
This could be made consistent with the Browsertrix-Crawler API/model, so we would essentially have two separate crawl engines that we can interact with in consistent ways.
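For illustration, here is a minimal sketch of what such a module might look like. The endpoint paths, request fields, and the in-memory crawl registry are all hypothetical placeholders, not part of the existing crawler or of Browsertrix-Crawler; a real module would delegate to whatever component actually starts crawls.

```python
# Hypothetical sketch of a crawler REST API module; names and paths are illustrative only.
from typing import Optional

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Crawler API")


class CrawlRequest(BaseModel):
    # Hypothetical fields: seed URLs plus an optional named scope definition.
    seeds: list[str]
    scope: Optional[str] = None


# In-memory stand-in for whatever component actually launches and tracks crawls.
crawls: dict[str, dict] = {}


@app.post("/crawls")
def launch_crawl(req: CrawlRequest) -> dict:
    """Launch a crawl directly, without going through Kafka."""
    crawl_id = f"crawl-{len(crawls) + 1}"
    crawls[crawl_id] = {"seeds": req.seeds, "scope": req.scope, "state": "running"}
    return {"id": crawl_id, "state": "running"}


@app.get("/crawls/{crawl_id}/stats")
def crawl_stats(crawl_id: str) -> dict:
    """Expose basic stats for a given crawl."""
    crawl = crawls.get(crawl_id)
    if crawl is None:
        raise HTTPException(status_code=404, detail="unknown crawl id")
    return {"id": crawl_id, **crawl}
```

Something like this could be served with `uvicorn` alongside the existing Kafka consumers. If we want consistency with Browsertrix-Crawler, the main design question is probably aligning the resource names and crawl-state model rather than the framework itself.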