Since we operate with plain data, we can easily detect duplicate tasks across multiple DAGs. For instance, several of our DAGs may contain the same sensor awaiting the same partition of the same table. If our scheduler supported deduplication, it could run a single sensor process instead of several, which is a good opportunity to save some slots.
Deduplication is largely a scheduler-level feature, but the scheduler cannot implement it on its own: it needs additional help, and that help is ours to provide.
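A minimal sketch of the idea, assuming tasks are described as plain dictionaries (the field names `type`, `table`, and `partition` are hypothetical, chosen only to illustrate the sensor example above): because two identical sensors compare equal as data, the scheduler can collapse them into one key and run a single process per key.

```python
# Hypothetical sketch: deduplicating sensors declared as plain data.
from collections import defaultdict

def dedup_key(task: dict) -> tuple:
    # Two sensors awaiting the same partition of the same table
    # yield the same key, regardless of which DAG declared them.
    return (task["type"], task["table"], task["partition"])

def deduplicate(dags: dict) -> dict:
    """Map each unique sensor key to the DAGs sharing it, so the
    scheduler can run one sensor process per key."""
    shared = defaultdict(list)
    for dag_id, tasks in dags.items():
        for task in tasks:
            shared[dedup_key(task)].append(dag_id)
    return dict(shared)

dags = {
    "dag_a": [{"type": "sensor", "table": "events", "partition": "2024-01-01"}],
    "dag_b": [{"type": "sensor", "table": "events", "partition": "2024-01-01"}],
}

# Both DAGs share one sensor key, so one slot suffices instead of two.
print(deduplicate(dags))
```

This only works because tasks are inert data rather than opaque code objects: equality by value is what makes the duplicates detectable at all.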