diff --git a/README.md b/README.md
index 6e8f056881..b3f57b0236 100644
--- a/README.md
+++ b/README.md
@@ -42,12 +42,6 @@ You can find tutorials and news in this blog: https://blog.taskforce.sh/
Do you need to work with BullMQ on platforms other than Node.js? If so, check out the [BullMQ Proxy](https://github.com/taskforcesh/bullmq-proxy)
-## 🌟 Rediscover Scale Conference 2024
-
-Discover the latest in in-memory and real-time data technologies at **Rediscover Scale 2024**. Ideal for engineers, architects, and technical leaders looking to push technological boundaries. Connect with experts and advance your skills at The Foundry SF, San Francisco.
-
-[Learn more and register here!](https://www.rediscoverscale.com/)
-
# Official FrontEnd
[](https://taskforce.sh)
diff --git a/docs/gitbook/README (1).md b/docs/gitbook/README (1).md
index 95f85fffbf..20de97fe4c 100644
--- a/docs/gitbook/README (1).md
+++ b/docs/gitbook/README (1).md
@@ -45,12 +45,19 @@ Jobs are added to the queue and can be processed at any time, with at least one
```typescript
import { Worker } from 'bullmq';
-
-const worker = new Worker('foo', async job => {
- // Will print { foo: 'bar'} for the first job
- // and { qux: 'baz' } for the second.
- console.log(job.data);
-});
+import IORedis from 'ioredis';
+
+const connection = new IORedis({ maxRetriesPerRequest: null });
+
+const worker = new Worker(
+ 'foo',
+ async job => {
+ // Will print { foo: 'bar'} for the first job
+ // and { qux: 'baz' } for the second.
+ console.log(job.data);
+ },
+ { connection },
+);
```
{% hint style="info" %}
diff --git a/docs/gitbook/SUMMARY.md b/docs/gitbook/SUMMARY.md
index 82a163692c..63efe7eed0 100644
--- a/docs/gitbook/SUMMARY.md
+++ b/docs/gitbook/SUMMARY.md
@@ -1,136 +1,138 @@
# Table of contents
-- [What is BullMQ](README.md)
-- [Quick Start]()
-- [API Reference](https://api.docs.bullmq.io)
-- [Changelogs](changelog.md)
- - [v4](changelogs/changelog-v4.md)
- - [v3](changelogs/changelog-v3.md)
- - [v2](changelogs/changelog-v2.md)
- - [v1](changelogs/changelog-v1.md)
+* [What is BullMQ](README.md)
+* [Quick Start]()
+* [API Reference](https://api.docs.bullmq.io)
+* [Changelogs](changelog.md)
+ * [v4](changelogs/changelog-v4.md)
+ * [v3](changelogs/changelog-v3.md)
+ * [v2](changelogs/changelog-v2.md)
+ * [v1](changelogs/changelog-v1.md)
## Guide
-- [Introduction](guide/introduction.md)
-- [Connections](guide/connections.md)
-- [Queues](guide/queues/README.md)
- - [Auto-removal of jobs](guide/queues/auto-removal-of-jobs.md)
- - [Adding jobs in bulk](guide/queues/adding-bulks.md)
- - [Global Concurrency](guide/queues/global-concurrency.md)
- - [Removing Jobs](guide/queues/removing-jobs.md)
-- [Workers](guide/workers/README.md)
- - [Auto-removal of jobs](guide/workers/auto-removal-of-jobs.md)
- - [Concurrency](guide/workers/concurrency.md)
- - [Graceful shutdown](guide/workers/graceful-shutdown.md)
- - [Stalled Jobs](guide/workers/stalled-jobs.md)
- - [Sandboxed processors](guide/workers/sandboxed-processors.md)
- - [Pausing queues](guide/workers/pausing-queues.md)
-- [Jobs](guide/jobs/README.md)
- - [FIFO](guide/jobs/fifo.md)
- - [LIFO](guide/jobs/lifo.md)
- - [Job Ids](guide/jobs/job-ids.md)
- - [Job Data](guide/jobs/job-data.md)
- - [Deduplication](guide/jobs/deduplication.md)
- - [Delayed](guide/jobs/delayed.md)
- - [Repeatable](guide/jobs/repeatable.md)
- - [Prioritized](guide/jobs/prioritized.md)
- - [Removing jobs](guide/jobs/removing-job.md)
- - [Stalled](guide/jobs/stalled.md)
- - [Getters](guide/jobs/getters.md)
-- [Job Schedulers](guide/job-schedulers/README.md)
- - [Repeat Strategies](guide/job-schedulers/repeat-strategies.md)
- - [Repeat options](guide/job-schedulers/repeat-options.md)
- - [Manage Job Schedulers](guide/job-schedulers/manage-job-schedulers.md)
-- [Flows](guide/flows/README.md)
- - [Adding flows in bulk](guide/flows/adding-bulks.md)
- - [Get Flow Tree](guide/flows/get-flow-tree.md)
- - [Fail Parent](guide/flows/fail-parent.md)
- - [Remove Dependency](guide/flows/remove-dependency.md)
- - [Ignore Dependency](guide/flows/ignore-dependency.md)
- - [Remove Child Dependency](guide/flows/remove-child-dependency.md)
-- [Metrics](guide/metrics/metrics.md)
-- [Rate limiting](guide/rate-limiting.md)
-- [Parallelism and Concurrency](guide/parallelism-and-concurrency.md)
-- [Retrying failing jobs](guide/retrying-failing-jobs.md)
-- [Returning job data](guide/returning-job-data.md)
-- [Events](guide/events/README.md)
- - [Create Custom Events](guide/events/create-custom-events.md)
-- [Telemetry](guide/telemetry/README.md)
- - [Getting started](guide/telemetry/getting-started.md)
- - [Running Jaeger](guide/telemetry/running-jaeger.md)
- - [Running a simple example](guide/telemetry/running-a-simple-example.md)
-- [QueueScheduler](guide/queuescheduler.md)
-- [Redis™ Compatibility](guide/redis-tm-compatibility/README.md)
- - [Dragonfly](guide/redis-tm-compatibility/dragonfly.md)
-- [Redis™ hosting](guide/redis-tm-hosting/README.md)
- - [AWS MemoryDB](guide/redis-tm-hosting/aws-memorydb.md)
- - [AWS Elasticache](guide/redis-tm-hosting/aws-elasticache.md)
-- [Architecture](guide/architecture.md)
-- [NestJs](guide/nestjs/README.md)
- - [Producers](guide/nestjs/producers.md)
- - [Queue Events Listeners](guide/nestjs/queue-events-listeners.md)
-- [Going to production](guide/going-to-production.md)
-- [Migration to newer versions](guide/migrations/migration-to-newer-versions.md)
- - [Version 6](guide/migrations/v6.md)
-- [Troubleshooting](guide/troubleshooting.md)
+* [Introduction](guide/introduction.md)
+* [Connections](guide/connections.md)
+* [Queues](guide/queues/README.md)
+ * [Auto-removal of jobs](guide/queues/auto-removal-of-jobs.md)
+ * [Adding jobs in bulk](guide/queues/adding-bulks.md)
+ * [Global Concurrency](guide/queues/global-concurrency.md)
+ * [Removing Jobs](guide/queues/removing-jobs.md)
+* [Workers](guide/workers/README.md)
+ * [Auto-removal of jobs](guide/workers/auto-removal-of-jobs.md)
+ * [Concurrency](guide/workers/concurrency.md)
+ * [Graceful shutdown](guide/workers/graceful-shutdown.md)
+ * [Stalled Jobs](guide/workers/stalled-jobs.md)
+ * [Sandboxed processors](guide/workers/sandboxed-processors.md)
+ * [Pausing queues](guide/workers/pausing-queues.md)
+* [Jobs](guide/jobs/README.md)
+ * [FIFO](guide/jobs/fifo.md)
+ * [LIFO](guide/jobs/lifo.md)
+ * [Job Ids](guide/jobs/job-ids.md)
+ * [Job Data](guide/jobs/job-data.md)
+ * [Deduplication](guide/jobs/deduplication.md)
+ * [Delayed](guide/jobs/delayed.md)
+ * [Repeatable](guide/jobs/repeatable.md)
+ * [Prioritized](guide/jobs/prioritized.md)
+ * [Removing jobs](guide/jobs/removing-job.md)
+ * [Stalled](guide/jobs/stalled.md)
+ * [Getters](guide/jobs/getters.md)
+* [Job Schedulers](guide/job-schedulers/README.md)
+ * [Repeat Strategies](guide/job-schedulers/repeat-strategies.md)
+ * [Repeat options](guide/job-schedulers/repeat-options.md)
+ * [Manage Job Schedulers](guide/job-schedulers/manage-job-schedulers.md)
+* [Flows](guide/flows/README.md)
+ * [Adding flows in bulk](guide/flows/adding-bulks.md)
+ * [Get Flow Tree](guide/flows/get-flow-tree.md)
+ * [Fail Parent](guide/flows/fail-parent.md)
+ * [Remove Dependency](guide/flows/remove-dependency.md)
+ * [Ignore Dependency](guide/flows/ignore-dependency.md)
+ * [Remove Child Dependency](guide/flows/remove-child-dependency.md)
+* [Metrics](guide/metrics/metrics.md)
+* [Rate limiting](guide/rate-limiting.md)
+* [Parallelism and Concurrency](guide/parallelism-and-concurrency.md)
+* [Retrying failing jobs](guide/retrying-failing-jobs.md)
+* [Returning job data](guide/returning-job-data.md)
+* [Events](guide/events/README.md)
+ * [Create Custom Events](guide/events/create-custom-events.md)
+* [Telemetry](guide/telemetry/README.md)
+ * [Getting started](guide/telemetry/getting-started.md)
+ * [Running Jaeger](guide/telemetry/running-jaeger.md)
+ * [Running a simple example](guide/telemetry/running-a-simple-example.md)
+* [QueueScheduler](guide/queuescheduler.md)
+* [Redis™ Compatibility](guide/redis-tm-compatibility/README.md)
+ * [Dragonfly](guide/redis-tm-compatibility/dragonfly.md)
+* [Redis™ hosting](guide/redis-tm-hosting/README.md)
+ * [AWS MemoryDB](guide/redis-tm-hosting/aws-memorydb.md)
+ * [AWS Elasticache](guide/redis-tm-hosting/aws-elasticache.md)
+* [Architecture](guide/architecture.md)
+* [NestJs](guide/nestjs/README.md)
+ * [Producers](guide/nestjs/producers.md)
+ * [Queue Events Listeners](guide/nestjs/queue-events-listeners.md)
+* [Going to production](guide/going-to-production.md)
+* [Migration to newer versions](guide/migration-to-newer-versions.md)
+ * [Version 6](guide/migrations/v6.md)
+* [Troubleshooting](guide/troubleshooting.md)
## Patterns
-- [Adding jobs in bulk across different queues](patterns/adding-bulks.md)
-- [Manually processing jobs](patterns/manually-fetching-jobs.md)
-- [Named Processor](patterns/named-processor.md)
-- [Flows](patterns/flows.md)
-- [Idempotent jobs](patterns/idempotent-jobs.md)
-- [Throttle jobs](patterns/throttle-jobs.md)
-- [Process Step Jobs](patterns/process-step-jobs.md)
-- [Failing fast when Redis is down](patterns/failing-fast-when-redis-is-down.md)
-- [Stop retrying jobs](patterns/stop-retrying-jobs.md)
-- [Timeout jobs](patterns/timeout-jobs.md)
-- [Redis Cluster](patterns/redis-cluster.md)
+* [Adding jobs in bulk across different queues](patterns/adding-bulks.md)
+* [Manually processing jobs](patterns/manually-fetching-jobs.md)
+* [Named Processor](patterns/named-processor.md)
+* [Flows](patterns/flows.md)
+* [Idempotent jobs](patterns/idempotent-jobs.md)
+* [Throttle jobs](patterns/throttle-jobs.md)
+* [Process Step Jobs](patterns/process-step-jobs.md)
+* [Failing fast when Redis is down](patterns/failing-fast-when-redis-is-down.md)
+* [Stop retrying jobs](patterns/stop-retrying-jobs.md)
+* [Timeout jobs](patterns/timeout-jobs.md)
+* [Redis Cluster](patterns/redis-cluster.md)
## BullMQ Pro
-- [Introduction](bullmq-pro/introduction.md)
-- [Install](bullmq-pro/install.md)
-- [Observables](bullmq-pro/observables/README.md)
- - [Cancelation](bullmq-pro/observables/cancelation.md)
-- [Groups](bullmq-pro/groups/README.md)
- - [Getters](bullmq-pro/groups/getters.md)
- - [Rate limiting](bullmq-pro/groups/rate-limiting.md)
- - [Concurrency](bullmq-pro/groups/concurrency.md)
- - [Local group concurrency](bullmq-pro/groups/local-group-concurrency.md)
- - [Max group size](bullmq-pro/groups/max-group-size.md)
- - [Pausing groups](bullmq-pro/groups/pausing-groups.md)
- - [Prioritized intra-groups](bullmq-pro/groups/prioritized.md)
- - [Sandboxes for groups](bullmq-pro/groups/sandboxes-for-groups.md)
-- [Batches](bullmq-pro/batches.md)
-- [NestJs](bullmq-pro/nestjs/README.md)
- - [Producers](bullmq-pro/nestjs/producers.md)
- - [Queue Events Listeners](bullmq-pro/nestjs/queue-events-listeners.md)
- - [API Reference](https://nestjs.bullmq.pro/)
- - [Changelog](bullmq-pro/nestjs/changelog.md)
-- [API Reference](https://api.bullmq.pro)
-- [Changelog](bullmq-pro/changelog.md)
-- [Support](bullmq-pro/support.md)
+* [Introduction](bullmq-pro/introduction.md)
+* [Install](bullmq-pro/install.md)
+* [Observables](bullmq-pro/observables/README.md)
+ * [Cancelation](bullmq-pro/observables/cancelation.md)
+* [Groups](bullmq-pro/groups/README.md)
+ * [Getters](bullmq-pro/groups/getters.md)
+ * [Rate limiting](bullmq-pro/groups/rate-limiting.md)
+ * [Concurrency](bullmq-pro/groups/concurrency.md)
+ * [Local group concurrency](bullmq-pro/groups/local-group-concurrency.md)
+ * [Max group size](bullmq-pro/groups/max-group-size.md)
+ * [Pausing groups](bullmq-pro/groups/pausing-groups.md)
+ * [Prioritized intra-groups](bullmq-pro/groups/prioritized.md)
+ * [Sandboxes for groups](bullmq-pro/groups/sandboxes-for-groups.md)
+* [Telemetry](bullmq-pro/telemetry.md)
+* [Batches](bullmq-pro/batches.md)
+* [NestJs](bullmq-pro/nestjs/README.md)
+ * [Producers](bullmq-pro/nestjs/producers.md)
+ * [Queue Events Listeners](bullmq-pro/nestjs/queue-events-listeners.md)
+ * [API Reference](https://nestjs.bullmq.pro/)
+ * [Changelog](bullmq-pro/nestjs/changelog.md)
+* [API Reference](https://api.bullmq.pro)
+* [Changelog](bullmq-pro/changelog.md)
+* [New Releases](bullmq-pro/new-releases.md)
+* [Support](bullmq-pro/support.md)
## Bull
-- [Introduction](bull/introduction.md)
-- [Install](bull/install.md)
-- [Quick Guide](bull/quick-guide.md)
-- [Important Notes](bull/important-notes.md)
-- [Reference](https://github.com/OptimalBits/bull/blob/develop/REFERENCE.md)
-- [Patterns](bull/patterns/README.md)
- - [Persistent connections](bull/patterns/persistent-connections.md)
- - [Message queue](bull/patterns/message-queue.md)
- - [Returning Job Completions](bull/patterns/returning-job-completions.md)
- - [Reusing Redis Connections](bull/patterns/reusing-redis-connections.md)
- - [Redis cluster](bull/patterns/redis-cluster.md)
- - [Custom backoff strategy](bull/patterns/custom-backoff-strategy.md)
- - [Debugging](bull/patterns/debugging.md)
- - [Manually fetching jobs](bull/patterns/manually-fetching-jobs.md)
+* [Introduction](bull/introduction.md)
+* [Install](bull/install.md)
+* [Quick Guide](bull/quick-guide.md)
+* [Important Notes](bull/important-notes.md)
+* [Reference](https://github.com/OptimalBits/bull/blob/develop/REFERENCE.md)
+* [Patterns](bull/patterns/README.md)
+ * [Persistent connections](bull/patterns/persistent-connections.md)
+ * [Message queue](bull/patterns/message-queue.md)
+ * [Returning Job Completions](bull/patterns/returning-job-completions.md)
+ * [Reusing Redis Connections](bull/patterns/reusing-redis-connections.md)
+ * [Redis cluster](bull/patterns/redis-cluster.md)
+ * [Custom backoff strategy](bull/patterns/custom-backoff-strategy.md)
+ * [Debugging](bull/patterns/debugging.md)
+ * [Manually fetching jobs](bull/patterns/manually-fetching-jobs.md)
## Python
-- [Introduction](python/introduction.md)
-- [Changelog](python/changelog.md)
+* [Introduction](python/introduction.md)
+* [Changelog](python/changelog.md)
diff --git a/docs/gitbook/bullmq-pro/new-releases.md b/docs/gitbook/bullmq-pro/new-releases.md
new file mode 100644
index 0000000000..c11fdbc7c7
--- /dev/null
+++ b/docs/gitbook/bullmq-pro/new-releases.md
@@ -0,0 +1,5 @@
+# New Releases
+
+If you want to get notified when we publish a new release of BullMQ Pro, please enable notifications on this GitHub issue, where we automatically post a comment for every new release:
+
+[https://github.com/taskforcesh/bullmq-pro-support/issues/86](https://github.com/taskforcesh/bullmq-pro-support/issues/86)
diff --git a/docs/gitbook/bullmq-pro/telemetry.md b/docs/gitbook/bullmq-pro/telemetry.md
new file mode 100644
index 0000000000..15c01adf32
--- /dev/null
+++ b/docs/gitbook/bullmq-pro/telemetry.md
@@ -0,0 +1,49 @@
+# Telemetry
+
+In the same fashion as we support telemetry in the BullMQ open source edition, we also support it in BullMQ Pro. It works essentially the same way; in fact, you can use the very same integrations available for BullMQ in the Pro version. To enable it, you would do something like this:
+
+```typescript
+import { QueuePro } from '@taskforcesh/bullmq-pro';
+import { BullMQOtel } from 'bullmq-otel';
+
+// Initialize a Pro queue using BullMQ-Otel
+const queue = new QueuePro('myProQueue', {
+ connection,
+ telemetry: new BullMQOtel('guide'),
+});
+
+await queue.add(
+ 'myJob',
+ { data: 'myData' },
+ {
+ attempts: 2,
+ backoff: 1000,
+ group: {
+ id: 'myGroupId',
+ },
+ },
+);
+```
+
+For the Worker, we do it in a similar way:
+
+```typescript
+import { WorkerPro } from '@taskforcesh/bullmq-pro';
+import { BullMQOtel } from 'bullmq-otel';
+
+const worker = new WorkerPro(
+ 'myProQueue',
+ async job => {
+ console.log('processing job', job.id);
+ },
+ {
+ name: 'myWorker',
+ connection,
+ telemetry: new BullMQOtel('guide'),
+ concurrency: 10,
+ batch: { size: 10 },
+ },
+);
+```
+
+For an introductory guide on how to integrate OpenTelemetry in your BullMQ applications, take a look at this tutorial: [https://blog.taskforce.sh/how-to-integrate-bullmqs-telemetry-on-a-newsletters-subscription-application-2/](https://blog.taskforce.sh/how-to-integrate-bullmqs-telemetry-on-a-newsletters-subscription-application-2/)
diff --git a/docs/gitbook/changelog.md b/docs/gitbook/changelog.md
index e483290cde..5f9602872b 100644
--- a/docs/gitbook/changelog.md
+++ b/docs/gitbook/changelog.md
@@ -1,3 +1,94 @@
+## [5.34.2](https://github.com/taskforcesh/bullmq/compare/v5.34.1...v5.34.2) (2024-12-14)
+
+
+### Bug Fixes
+
+* **scripts:** make sure jobs fields are not empty before unpack ([4360572](https://github.com/taskforcesh/bullmq/commit/4360572745a929c7c4f6266ec03d4eba77a9715c))
+
+## [5.34.1](https://github.com/taskforcesh/bullmq/compare/v5.34.0...v5.34.1) (2024-12-13)
+
+
+### Bug Fixes
+
+* guarantee every repeatable jobs are slotted ([9917df1](https://github.com/taskforcesh/bullmq/commit/9917df166aff2e2f143c45297f41ac8520bfc8ae))
+* **job-scheduler:** avoid duplicated delayed jobs when repeatable jobs are retried ([af75315](https://github.com/taskforcesh/bullmq/commit/af75315f0c7923f5e0a667a9ed4606b28b89b719))
+
+# [5.34.0](https://github.com/taskforcesh/bullmq/compare/v5.33.1...v5.34.0) (2024-12-10)
+
+
+### Features
+
+* **telemetry:** add option to omit context propagation on jobs ([#2946](https://github.com/taskforcesh/bullmq/issues/2946)) ([6514c33](https://github.com/taskforcesh/bullmq/commit/6514c335231cb6e727819cf5e0c56ed3f5132838))
+
+## [5.33.1](https://github.com/taskforcesh/bullmq/compare/v5.33.0...v5.33.1) (2024-12-10)
+
+
+### Bug Fixes
+
+* **job-scheduler:** omit deduplication and debounce options from template options ([#2960](https://github.com/taskforcesh/bullmq/issues/2960)) ([b5fa6a3](https://github.com/taskforcesh/bullmq/commit/b5fa6a3208a8f2a39777dc30c2db2f498addb907))
+
+# [5.33.0](https://github.com/taskforcesh/bullmq/compare/v5.32.0...v5.33.0) (2024-12-09)
+
+
+### Features
+
+* replace multi by lua scripts in moveToFailed ([#2958](https://github.com/taskforcesh/bullmq/issues/2958)) ([c19c914](https://github.com/taskforcesh/bullmq/commit/c19c914969169c660a3e108126044c5152faf0cd))
+
+# [5.32.0](https://github.com/taskforcesh/bullmq/compare/v5.31.2...v5.32.0) (2024-12-08)
+
+
+### Features
+
+* **queue:** enhance getJobSchedulers method to include template information ([#2956](https://github.com/taskforcesh/bullmq/issues/2956)) ref [#2875](https://github.com/taskforcesh/bullmq/issues/2875) ([5b005cd](https://github.com/taskforcesh/bullmq/commit/5b005cd94ba0f98677bed4a44f8669c81f073f26))
+
+## [5.31.2](https://github.com/taskforcesh/bullmq/compare/v5.31.1...v5.31.2) (2024-12-06)
+
+
+### Bug Fixes
+
+* **worker:** catch connection error when moveToActive is called ([#2952](https://github.com/taskforcesh/bullmq/issues/2952)) ([544fc7c](https://github.com/taskforcesh/bullmq/commit/544fc7c9e4755e6b62b82216e25c0cb62734ed59))
+
+## [5.31.1](https://github.com/taskforcesh/bullmq/compare/v5.31.0...v5.31.1) (2024-12-04)
+
+
+### Bug Fixes
+
+* **scheduler-template:** remove console.log when getting template information ([#2950](https://github.com/taskforcesh/bullmq/issues/2950)) ([3402bfe](https://github.com/taskforcesh/bullmq/commit/3402bfe0d01e5e5205db74d2106cd19d7df53fcb))
+
+# [5.31.0](https://github.com/taskforcesh/bullmq/compare/v5.30.1...v5.31.0) (2024-12-02)
+
+
+### Features
+
+* **queue:** enhance getJobScheduler method to include template information ([#2929](https://github.com/taskforcesh/bullmq/issues/2929)) ref [#2875](https://github.com/taskforcesh/bullmq/issues/2875) ([cb99080](https://github.com/taskforcesh/bullmq/commit/cb990808db19dd79b5048ee99308fa7d1eaa2e9f))
+
+## [5.30.1](https://github.com/taskforcesh/bullmq/compare/v5.30.0...v5.30.1) (2024-11-30)
+
+
+### Bug Fixes
+
+* **flow:** allow using removeOnFail and failParentOnFailure in parents ([#2947](https://github.com/taskforcesh/bullmq/issues/2947)) fixes [#2229](https://github.com/taskforcesh/bullmq/issues/2229) ([85f6f6f](https://github.com/taskforcesh/bullmq/commit/85f6f6f181003fafbf75304a268170f0d271ccc3))
+
+# [5.30.0](https://github.com/taskforcesh/bullmq/compare/v5.29.1...v5.30.0) (2024-11-29)
+
+
+### Bug Fixes
+
+* **job-scheduler:** upsert template when same pattern options are provided ([#2943](https://github.com/taskforcesh/bullmq/issues/2943)) ref [#2940](https://github.com/taskforcesh/bullmq/issues/2940) ([b56c3b4](https://github.com/taskforcesh/bullmq/commit/b56c3b45a87e52f5faf25406a2b992d1bfed4900))
+
+
+### Features
+
+* **queue:** add getDelayedCount method [python] ([#2934](https://github.com/taskforcesh/bullmq/issues/2934)) ([71ce75c](https://github.com/taskforcesh/bullmq/commit/71ce75c04b096b5593da0986c41a771add1a81ce))
+* **queue:** add getJobSchedulersCount method ([#2945](https://github.com/taskforcesh/bullmq/issues/2945)) ([38820dc](https://github.com/taskforcesh/bullmq/commit/38820dc8c267c616ada9931198e9e3e9d2f0d536))
+
+## [5.29.1](https://github.com/taskforcesh/bullmq/compare/v5.29.0...v5.29.1) (2024-11-23)
+
+
+### Bug Fixes
+
+* **scheduler:** remove deprecation warning on immediately option ([#2923](https://github.com/taskforcesh/bullmq/issues/2923)) ([14ca7f4](https://github.com/taskforcesh/bullmq/commit/14ca7f44f31a393a8b6d0ce4ed244e0063198879))
+
# [5.29.0](https://github.com/taskforcesh/bullmq/compare/v5.28.2...v5.29.0) (2024-11-22)
diff --git a/docs/gitbook/guide/connections.md b/docs/gitbook/guide/connections.md
index 54e1a2bd53..40adadbc77 100644
--- a/docs/gitbook/guide/connections.md
+++ b/docs/gitbook/guide/connections.md
@@ -7,32 +7,59 @@ Every class will consume at least one Redis connection, but it is also possible
Some examples:
```typescript
-import { Queue, Worker } from 'bullmq'
+import { Queue, Worker } from 'bullmq';
// Create a new connection in every instance
-const myQueue = new Queue('myqueue', { connection: {
- host: "myredis.taskforce.run",
- port: 32856
-}});
-
-const myWorker = new Worker('myqueue', async (job)=>{}, { connection: {
- host: "myredis.taskforce.run",
- port: 32856
-}});
+const myQueue = new Queue('myqueue', {
+ connection: {
+ host: 'myredis.taskforce.run',
+ port: 32856,
+ },
+});
+
+const myWorker = new Worker('myqueue', async job => {}, {
+ connection: {
+ host: 'myredis.taskforce.run',
+ port: 32856,
+ },
+});
```
```typescript
-import { Queue, Worker } from 'bullmq';
+import { Queue } from 'bullmq';
import IORedis from 'ioredis';
const connection = new IORedis();
-// Reuse the ioredis instance
-const myQueue = new Queue('myqueue', { connection });
-const myWorker = new Worker('myqueue', async (job)=>{}, { connection });
+// Reuse the ioredis instance in 2 different producers
+const myFirstQueue = new Queue('myFirstQueue', { connection });
+const mySecondQueue = new Queue('mySecondQueue', { connection });
```
-Note that in the second example, even though the ioredis instance is being reused, the worker will create a duplicated connection that it needs internally to make blocking connections. Consult the [ioredis](https://github.com/luin/ioredis/blob/master/API.md) documentation to learn how to properly create an instance of `IORedis.`
+```typescript
+import { Worker } from 'bullmq';
+import IORedis from 'ioredis';
+
+const connection = new IORedis({ maxRetriesPerRequest: null });
+
+// Reuse the ioredis instance in 2 different consumers
+const myFirstWorker = new Worker('myFirstWorker', async job => {}, {
+ connection,
+});
+const mySecondWorker = new Worker('mySecondWorker', async job => {}, {
+ connection,
+});
+```
+
+Note that in the third example, even though the ioredis instance is being reused, the worker will create a duplicated connection that it needs internally to make blocking connections. Consult the [ioredis](https://github.com/luin/ioredis/blob/master/API.md) documentation to learn how to properly create an instance of `IORedis`.
+
+Also note that a simple Queue instance, used for managing the queue (adding jobs, pausing, using getters, etc.), usually has different requirements from the worker.
+
+For example, say that you are adding jobs to a queue as the result of a call to an HTTP endpoint (a producer service). The caller of this endpoint cannot wait forever if the connection to Redis happens to be down when the call is made. Therefore the `maxRetriesPerRequest` setting should either be left at its default (currently 20) or set to a lower value, perhaps 1, so that the user gets an error quickly and can retry later.
+
+On the other hand, if you are adding jobs inside a Worker processor (a consumer service), this work is expected to happen in the background. In this case you can share the same connection.
+
+For more details, refer to the [persistent connections](https://docs.bullmq.io/bull/patterns/persistent-connections) page.
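+As a minimal sketch of this distinction (the queue name and retry values below are illustrative, not a recommendation), a producer used from an HTTP handler can fail fast, while a worker keeps retrying indefinitely:
+
+```typescript
+import { Queue, Worker } from 'bullmq';
+import IORedis from 'ioredis';
+
+// Producer: fail fast so HTTP callers get a quick error if Redis is down.
+const producerConnection = new IORedis({ maxRetriesPerRequest: 1 });
+const queue = new Queue('myqueue', { connection: producerConnection });
+
+// Consumer: workers require maxRetriesPerRequest: null so that blocking
+// commands are retried indefinitely in the background.
+const workerConnection = new IORedis({ maxRetriesPerRequest: null });
+const worker = new Worker('myqueue', async job => {}, {
+  connection: workerConnection,
+});
+```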
{% hint style="danger" %}
When using ioredis connections, be careful not to use the "keyPrefix" option in [ioredis](https://redis.github.io/ioredis/interfaces/CommonRedisOptions.html#keyPrefix) as this option is not compatible with BullMQ, which provides its own key prefixing mechanism.
diff --git a/docs/gitbook/guide/job-schedulers/README.md b/docs/gitbook/guide/job-schedulers/README.md
index f7bb35af6d..2320ae9e59 100644
--- a/docs/gitbook/guide/job-schedulers/README.md
+++ b/docs/gitbook/guide/job-schedulers/README.md
@@ -51,3 +51,7 @@ All jobs produced by this scheduler will use the given settings. Note that in th
{% hint style="info" %}
Since jobs produced by the Job Scheduler will get a special job ID in order to guarantee that jobs will never be created more often than the given repeat settings, you cannot choose a custom job id. However you can use the job's name if you need to discriminate these jobs from other jobs.
{% endhint %}
+
+## Read more:
+
+- 💡 [Upsert Job Scheduler API Reference](https://api.docs.bullmq.io/classes/v5.Queue.html#upsertJobScheduler)
diff --git a/docs/gitbook/guide/job-schedulers/manage-job-schedulers.md b/docs/gitbook/guide/job-schedulers/manage-job-schedulers.md
index 4531a20173..d363b1c635 100644
--- a/docs/gitbook/guide/job-schedulers/manage-job-schedulers.md
+++ b/docs/gitbook/guide/job-schedulers/manage-job-schedulers.md
@@ -4,7 +4,7 @@ In BullMQ, managing the lifecycle and inventory of job schedulers is crucial for
#### Remove job scheduler
-The removeJobScheduler method is designed to delete a specific job scheduler from the queue. This is particularly useful when a scheduled task is no longer needed or if you wish to clean up inactive or obsolete schedulers to optimize resource usage.
+The **removeJobScheduler** method is designed to delete a specific job scheduler from the queue. This is particularly useful when a scheduled task is no longer needed or if you wish to clean up inactive or obsolete schedulers to optimize resource usage.
```typescript
// Remove a job scheduler with ID 'scheduler-123'
@@ -18,7 +18,7 @@ The method will return true if there was a Job Scheduler to remove with the give
#### Get Job Schedulers
-The getJobSchedulers method retrieves a list of all configured job schedulers within a specified range. This is invaluable for monitoring and managing multiple job schedulers, especially in systems where jobs are dynamically scheduled and require frequent reviews or adjustments.
+The **getJobSchedulers** method retrieves a list of all configured job schedulers within a specified range. This is invaluable for monitoring and managing multiple job schedulers, especially in systems where jobs are dynamically scheduled and require frequent reviews or adjustments.
```typescript
// Retrieve the first 10 job schedulers in ascending order of their next execution time
@@ -27,3 +27,18 @@ console.log('Current job schedulers:', schedulers);
```
This method can be particularly useful for generating reports or dashboards that provide insights into when jobs are scheduled to run, aiding in system monitoring and troubleshooting.
+
+#### Get Job Scheduler
+
+The **getJobScheduler** method retrieves a single job scheduler by its id. This is invaluable for inspecting a specific scheduler's configuration.
+
+```typescript
+const scheduler = await queue.getJobScheduler('test');
+console.log('Current job scheduler:', scheduler);
+```
+
+## Read more:
+
+- 💡 [Remove Job Scheduler API Reference](https://api.docs.bullmq.io/classes/v5.Queue.html#removeJobScheduler)
+- 💡 [Get Job Schedulers API Reference](https://api.docs.bullmq.io/classes/v5.Queue.html#getJobSchedulers)
+- 💡 [Get Job Scheduler API Reference](https://api.docs.bullmq.io/classes/v5.Queue.html#getJobScheduler)
diff --git a/docs/gitbook/python/changelog.md b/docs/gitbook/python/changelog.md
index 6bdea9577d..ec2a2753ea 100644
--- a/docs/gitbook/python/changelog.md
+++ b/docs/gitbook/python/changelog.md
@@ -2,23 +2,24 @@
+## v2.11.0 (2024-11-26)
+### Feature
+* **queue:** Add getDelayedCount method [python] ([#2934](https://github.com/taskforcesh/bullmq/issues/2934)) ([`71ce75c`](https://github.com/taskforcesh/bullmq/commit/71ce75c04b096b5593da0986c41a771add1a81ce))
+
+### Performance
+* **marker:** Add base markers while consuming jobs to get workers busy (#2904) fixes #2842 ([`1759c8b`](https://github.com/taskforcesh/bullmq/commit/1759c8bc111cab9e43d5fccb4d8d2dccc9c39fb4))
+
## v2.10.1 (2024-10-26)
### Fix
* **commands:** Add missing build statement when releasing [python] (#2869) fixes #2868 ([`ff2a47b`](https://github.com/taskforcesh/bullmq/commit/ff2a47b37c6b36ee1a725f91de2c6e4bcf8b011a))
-### Documentation
-* **job:** Clarify per-queue scoping of job ids ([#2864](https://github.com/taskforcesh/bullmq/issues/2864)) ([`6c2b80f`](https://github.com/taskforcesh/bullmq/commit/6c2b80f490a0ab4afe502fc8415d6549e0022367))
-* **v4:** Update changelog with v4.18.2 ([#2867](https://github.com/taskforcesh/bullmq/issues/2867)) ([`7ba452e`](https://github.com/taskforcesh/bullmq/commit/7ba452ec3e6d4658e357b2ca810893172c1e0b25))
-
## v2.10.0 (2024-10-24)
### Feature
* **job:** Add getChildrenValues method [python] ([#2853](https://github.com/taskforcesh/bullmq/issues/2853)) ([`0f25213`](https://github.com/taskforcesh/bullmq/commit/0f25213b28900a1c35922bd33611701629d83184))
* **queue:** Add option to skip metas update ([`b7dd925`](https://github.com/taskforcesh/bullmq/commit/b7dd925e7f2a4468c98a05f3a3ca1a476482b6c0))
* **queue:** Add queue version support ([#2822](https://github.com/taskforcesh/bullmq/issues/2822)) ([`3a4781b`](https://github.com/taskforcesh/bullmq/commit/3a4781bf7cadf04f6a324871654eed8f01cdadae))
-* **repeat:** Deprecate immediately on job scheduler ([`ed047f7`](https://github.com/taskforcesh/bullmq/commit/ed047f7ab69ebdb445343b6cb325e90b95ee9dc5))
* **job:** Expose priority value ([#2804](https://github.com/taskforcesh/bullmq/issues/2804)) ([`9abec3d`](https://github.com/taskforcesh/bullmq/commit/9abec3dbc4c69f2496c5ff6b5d724f4d1a5ca62f))
* **job:** Add deduplication logic ([#2796](https://github.com/taskforcesh/bullmq/issues/2796)) ([`0a4982d`](https://github.com/taskforcesh/bullmq/commit/0a4982d05d27c066248290ab9f59349b802d02d5))
-* **queue:** Add new upsertJobScheduler, getJobSchedulers and removeJobSchedulers methods ([`dd6b6b2`](https://github.com/taskforcesh/bullmq/commit/dd6b6b2263badd8f29db65d1fa6bcdf5a1e9f6e2))
* **queue:** Add getDebounceJobId method ([#2717](https://github.com/taskforcesh/bullmq/issues/2717)) ([`a68ead9`](https://github.com/taskforcesh/bullmq/commit/a68ead95f32a7d9dabba602895d05c22794b2c02))
### Fix
diff --git a/package.json b/package.json
index e9a733ea4d..309043f090 100644
--- a/package.json
+++ b/package.json
@@ -1,6 +1,6 @@
{
"name": "bullmq",
- "version": "5.29.0",
+ "version": "5.34.2",
"description": "Queue for messages and jobs based on Redis",
"homepage": "https://bullmq.io/",
"main": "./dist/cjs/index.js",
@@ -80,7 +80,7 @@
"@types/msgpack": "^0.0.31",
"@types/node": "^12.20.25",
"@types/semver": "^7.3.9",
- "@types/sinon": "^7.5.2",
+ "@types/sinon": "^10.0.13",
"@types/uuid": "^3.4.10",
"@typescript-eslint/eslint-plugin": "^4.32.0",
"@typescript-eslint/parser": "^5.33.0",
@@ -112,7 +112,7 @@
"rimraf": "^3.0.2",
"rrule": "^2.6.9",
"semantic-release": "^19.0.3",
- "sinon": "^15.1.0",
+ "sinon": "^18.0.1",
"test-console": "^2.0.0",
"ts-mocha": "^10.0.0",
"ts-node": "^10.7.0",
diff --git a/python/bullmq/__init__.py b/python/bullmq/__init__.py
index 1db7674593..8a8ee15cf7 100644
--- a/python/bullmq/__init__.py
+++ b/python/bullmq/__init__.py
@@ -3,7 +3,7 @@
A background job processor and message queue for Python based on Redis.
"""
-__version__ = "2.10.1"
+__version__ = "2.11.0"
__author__ = 'Taskforce.sh Inc.'
__credits__ = 'Taskforce.sh Inc.'
diff --git a/python/bullmq/queue.py b/python/bullmq/queue.py
index 91e18a37a9..e735b6a3fd 100644
--- a/python/bullmq/queue.py
+++ b/python/bullmq/queue.py
@@ -254,6 +254,9 @@ def getJobState(self, job_id: str):
def getCompletedCount(self):
return self.getJobCountByTypes('completed')
+ def getDelayedCount(self):
+ return self.getJobCountByTypes('delayed')
+
def getFailedCount(self):
return self.getJobCountByTypes('failed')
diff --git a/python/pyproject.toml b/python/pyproject.toml
index 68ad2bc0da..601b202b37 100644
--- a/python/pyproject.toml
+++ b/python/pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
[project]
name = "bullmq"
-version = "2.10.1"
+version = "2.11.0"
description='BullMQ for Python'
readme="README.md"
authors = [
diff --git a/python/tests/queue_tests.py b/python/tests/queue_tests.py
index 94bdb7da6a..5ffeaf5f38 100644
--- a/python/tests/queue_tests.py
+++ b/python/tests/queue_tests.py
@@ -54,7 +54,7 @@ async def test_get_job_state(self):
async def test_add_job_with_options(self):
queue = Queue(queueName)
data = {"foo": "bar"}
- attempts = 3,
+ attempts = 3
delay = 1000
job = await queue.add("test-job", data=data, opts={"attempts": attempts, "delay": delay})
@@ -133,6 +133,18 @@ async def test_trim_events_manually_with_custom_prefix(self):
await queue.obliterate()
await queue.close()
+ async def test_get_delayed_count(self):
+ queue = Queue(queueName)
+ data = {"foo": "bar"}
+ delay = 1000
+ await queue.add("test-job", data=data, opts={"delay": delay})
+ await queue.add("test-job", data=data, opts={"delay": delay * 2})
+
+ count = await queue.getDelayedCount()
+ self.assertEqual(count, 2)
+
+ await queue.close()
+
async def test_retry_failed_jobs(self):
queue = Queue(queueName)
job_count = 8
diff --git a/src/classes/flow-producer.ts b/src/classes/flow-producer.ts
index f732fd4e11..a513275f5b 100644
--- a/src/classes/flow-producer.ts
+++ b/src/classes/flow-producer.ts
@@ -332,11 +332,27 @@ export class FlowProducer extends EventEmitter {
node.name,
'addNode',
node.queueName,
- async (span, dstPropagationMetadata) => {
+ async (span, srcPropagationMedatada) => {
span?.setAttributes({
[TelemetryAttributes.JobName]: node.name,
[TelemetryAttributes.JobId]: jobId,
});
+ const opts = node.opts;
+ let telemetry = opts?.telemetry;
+
+ if (srcPropagationMedatada && opts) {
+ const omitContext = opts.telemetry?.omitContext;
+ const telemetryMetadata =
+ opts.telemetry?.metadata ||
+ (!omitContext && srcPropagationMedatada);
+
+ if (telemetryMetadata || omitContext) {
+ telemetry = {
+ metadata: telemetryMetadata,
+ omitContext,
+ };
+ }
+ }
const job = new this.Job(
queue,
@@ -344,9 +360,9 @@ export class FlowProducer extends EventEmitter {
node.data,
{
...jobsOpts,
- ...node.opts,
+ ...opts,
parent: parent?.parentOpts,
- telemetryMetadata: dstPropagationMetadata,
+ telemetry,
},
jobId,
);
diff --git a/src/classes/job-scheduler.ts b/src/classes/job-scheduler.ts
index 63c0f8199c..743af75b72 100644
--- a/src/classes/job-scheduler.ts
+++ b/src/classes/job-scheduler.ts
@@ -1,21 +1,21 @@
import { parseExpression } from 'cron-parser';
-import { RedisClient, RepeatBaseOptions, RepeatOptions } from '../interfaces';
-import { JobsOptions, RepeatStrategy } from '../types';
+import {
+ JobSchedulerJson,
+ JobSchedulerTemplateJson,
+ RedisClient,
+ RepeatBaseOptions,
+ RepeatOptions,
+} from '../interfaces';
+import {
+ JobSchedulerTemplateOptions,
+ JobsOptions,
+ RepeatStrategy,
+} from '../types';
import { Job } from './job';
import { QueueBase } from './queue-base';
import { RedisConnection } from './redis-connection';
import { SpanKind, TelemetryAttributes } from '../enums';
-
-export interface JobSchedulerJson {
- key: string; // key is actually the job scheduler id
- name: string;
- id?: string | null;
- endDate: number | null;
- tz: string | null;
- pattern: string | null;
- every?: string | null;
- next?: number;
-}
+import { array2obj } from '../utils';
export class JobScheduler extends QueueBase {
private repeatStrategy: RepeatStrategy;
@@ -33,13 +33,13 @@ export class JobScheduler extends QueueBase {
async upsertJobScheduler(
jobSchedulerId: string,
- repeatOpts: Omit,
+ repeatOpts: Omit,
jobName: N,
jobData: T,
- opts: Omit,
- { override }: { override: boolean },
+ opts: JobSchedulerTemplateOptions,
+ { override, producerId }: { override: boolean; producerId?: string },
): Promise | undefined> {
- const { every, pattern } = repeatOpts;
+ const { every, pattern, offset } = repeatOpts;
if (pattern && every) {
throw new Error(
@@ -59,6 +59,12 @@ export class JobScheduler extends QueueBase {
);
}
+ if (repeatOpts.immediately && repeatOpts.every) {
+ console.warn(
+ "Using option immediately with every does not affect the job's schedule. Job will run immediately anyway.",
+ );
+ }
+
// Check if we reached the limit of the repeatable job's iterations
const iterationCount = repeatOpts.count ? repeatOpts.count + 1 : 1;
if (
@@ -75,8 +81,6 @@ export class JobScheduler extends QueueBase {
return;
}
- const prevMillis = opts.prevMillis || 0;
-
// Check if we have a start date for the repeatable job
const { startDate, immediately, ...filteredRepeatOpts } = repeatOpts;
if (startDate) {
@@ -84,15 +88,28 @@ export class JobScheduler extends QueueBase {
now = startMillis > now ? startMillis : now;
}
+ const prevMillis = opts.prevMillis || 0;
+ now = prevMillis < now ? now : prevMillis;
+
let nextMillis: number;
+ let newOffset = offset;
+
if (every) {
- nextMillis = prevMillis + every;
+ const nextSlot = Math.floor(now / every) * every + every;
+ if (prevMillis || offset) {
+ nextMillis = nextSlot + (offset || 0);
+ } else {
+ nextMillis = now;
+ newOffset = every - (nextSlot - now);
+
+ // newOffset should always be positive, but as an extra safety check
+ newOffset = newOffset < 0 ? 0 : newOffset;
+ }
if (nextMillis < now) {
nextMillis = now;
}
} else if (pattern) {
- now = prevMillis < now ? now : prevMillis;
nextMillis = await this.repeatStrategy(now, repeatOpts, jobName);
}
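The slot-based `every` calculation above can be sketched in isolation. This is a hypothetical standalone helper (name and signature are not part of the diff) that mirrors the logic: the first iteration runs immediately and records how far `now` is from the next slot as an offset; subsequent iterations land on slot boundaries shifted by that offset.

```typescript
// Hypothetical sketch of the slot-based "every" scheduling above.
// Returns [nextMillis, newOffset] for a given current time, interval,
// previous run timestamp (0 if none) and stored offset (undefined if none).
function nextEverySlot(
  now: number,
  every: number,
  prevMillis = 0,
  offset?: number,
): [number, number | undefined] {
  const nextSlot = Math.floor(now / every) * every + every;
  let nextMillis: number;
  let newOffset = offset;

  if (prevMillis || offset) {
    // Subsequent iterations: align to the next slot boundary plus the offset.
    nextMillis = nextSlot + (offset || 0);
  } else {
    // First iteration runs immediately; remember the distance to the slot.
    nextMillis = now;
    newOffset = Math.max(every - (nextSlot - now), 0);
  }

  // nextMillis must never be in the past.
  return [Math.max(nextMillis, now), newOffset];
}
```

For example, with `every = 1000` and `now = 1050`, the first call schedules at `1050` with offset `50`, and the next call (passing those back) schedules at `2050`.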
@@ -103,6 +120,8 @@ export class JobScheduler extends QueueBase {
(multi) as RedisClient,
jobSchedulerId,
nextMillis,
+ JSON.stringify(typeof jobData === 'undefined' ? {} : jobData),
+ Job.optsAsJSON(opts),
{
name: jobName,
endDate: endDate ? new Date(endDate).getTime() : undefined,
@@ -124,6 +143,22 @@ export class JobScheduler extends QueueBase {
'add',
`${this.name}.${jobName}`,
async (span, srcPropagationMedatada) => {
+ let telemetry = opts.telemetry;
+
+ if (srcPropagationMedatada) {
+ const omitContext = opts.telemetry?.omitContext;
+ const telemetryMetadata =
+ opts.telemetry?.metadata ||
+ (!omitContext && srcPropagationMedatada);
+
+ if (telemetryMetadata || omitContext) {
+ telemetry = {
+ metadata: telemetryMetadata,
+ omitContext,
+ };
+ }
+ }
+
const job = this.createNextJob(
(multi) as RedisClient,
jobName,
@@ -131,11 +166,12 @@ export class JobScheduler extends QueueBase {
jobSchedulerId,
{
...opts,
- repeat: filteredRepeatOpts,
- telemetryMetadata: srcPropagationMedatada,
+ repeat: { ...filteredRepeatOpts, offset: newOffset },
+ telemetry,
},
jobData,
iterationCount,
+ producerId,
);
const results = await multi.exec(); // multi.exec returns an array of results [ err, result ][]
@@ -171,6 +207,8 @@ export class JobScheduler extends QueueBase {
opts: JobsOptions,
data: T,
currentCount: number,
+ // The job id of the job that produced this next iteration
+ producerId?: string,
) {
//
// Generate unique job id for this iteration.
@@ -197,6 +235,11 @@ export class JobScheduler extends QueueBase {
const job = new this.Job(this, name, data, mergedOpts, jobId);
job.addJob(client);
+ if (producerId) {
+ const producerJobKey = this.toKey(producerId);
+ client.hset(producerJobKey, 'nrjid', job.id);
+ }
+
return job;
}
@@ -204,13 +247,21 @@ export class JobScheduler extends QueueBase {
return this.scripts.removeJobScheduler(jobSchedulerId);
}
- private async getSchedulerData(
+ private async getSchedulerData(
client: RedisClient,
key: string,
next?: number,
- ): Promise {
+ ): Promise> {
const jobData = await client.hgetall(this.toKey('repeat:' + key));
+ return this.transformSchedulerData(key, jobData, next);
+ }
+
+ private async transformSchedulerData(
+ key: string,
+ jobData: any,
+ next?: number,
+ ): Promise> {
if (jobData) {
return {
key,
@@ -219,6 +270,11 @@ export class JobScheduler extends QueueBase {
tz: jobData.tz || null,
pattern: jobData.pattern || null,
every: jobData.every || null,
+ ...(jobData.data || jobData.opts
+ ? {
+ template: this.getTemplateFromJSON(jobData.data, jobData.opts),
+ }
+ : {}),
next,
};
}
@@ -241,27 +297,35 @@ export class JobScheduler extends QueueBase {
};
}
- async getJobScheduler(id: string): Promise {
- const client = await this.client;
- const jobData = await client.hgetall(this.toKey('repeat:' + id));
+ async getScheduler(id: string): Promise> {
+ const [rawJobData, next] = await this.scripts.getJobScheduler(id);
- if (jobData) {
- return {
- key: id,
- name: jobData.name,
- endDate: parseInt(jobData.endDate) || null,
- tz: jobData.tz || null,
- pattern: jobData.pattern || null,
- every: jobData.every || null,
- };
+ return this.transformSchedulerData(
+ id,
+ rawJobData ? array2obj(rawJobData) : null,
+ next ? parseInt(next) : null,
+ );
+ }
+
+ private getTemplateFromJSON(
+ rawData?: string,
+ rawOpts?: string,
+ ): JobSchedulerTemplateJson {
+ const template: JobSchedulerTemplateJson = {};
+ if (rawData) {
+ template.data = JSON.parse(rawData);
+ }
+ if (rawOpts) {
+ template.opts = Job.optsFromJSON(rawOpts);
}
+ return template;
}
- async getJobSchedulers(
+ async getJobSchedulers(
start = 0,
end = -1,
asc = false,
- ): Promise {
+ ): Promise[]> {
const client = await this.client;
const jobSchedulersKey = this.keys.repeat;
@@ -272,18 +336,17 @@ export class JobScheduler extends QueueBase {
const jobs = [];
for (let i = 0; i < result.length; i += 2) {
jobs.push(
- this.getSchedulerData(client, result[i], parseInt(result[i + 1])),
+ this.getSchedulerData(client, result[i], parseInt(result[i + 1])),
);
}
return Promise.all(jobs);
}
- async getSchedulersCount(
- client: RedisClient,
- prefix: string,
- queueName: string,
- ): Promise {
- return client.zcard(`${prefix}:${queueName}:repeat`);
+ async getSchedulersCount(): Promise {
+ const jobSchedulersKey = this.keys.repeat;
+ const client = await this.client;
+
+ return client.zcard(jobSchedulersKey);
}
private getSchedulerNextJobId({
diff --git a/src/classes/job.ts b/src/classes/job.ts
index deb1a713b2..929c2deb96 100644
--- a/src/classes/job.ts
+++ b/src/classes/job.ts
@@ -20,16 +20,17 @@ import {
JobJsonSandbox,
MinimalQueue,
RedisJobOptions,
+ CompressableJobOptions,
} from '../types';
import {
errorObject,
- invertObject,
isEmpty,
getParentKey,
lengthInUtf8Bytes,
parseObjectValues,
tryCatch,
removeUndefinedFields,
+ invertObject,
} from '../utils';
import { Backoffs } from './backoffs';
import { Scripts, raw2NextJobData } from './scripts';
@@ -39,6 +40,7 @@ import { SpanKind } from '../enums';
const logger = debuglog('bull');
+// Simple options decode map.
const optsDecodeMap = {
de: 'deduplication',
fpof: 'failParentOnFailure',
@@ -46,8 +48,7 @@ const optsDecodeMap = {
kl: 'keepLogs',
ocf: 'onChildFailure',
rdof: 'removeDependencyOnFailure',
- tm: 'telemetryMetadata',
-};
+} as const;
const optsEncodeMap = invertObject(optsDecodeMap);
@@ -159,6 +160,12 @@ export class Job<
*/
repeatJobKey?: string;
+ /**
+ * Produced next repeatable job Id.
+ *
+ */
+ nextRepeatableJobId?: string;
+
/**
* The token used for locking this job.
*/
@@ -373,6 +380,10 @@ export class Job<
job.processedBy = json.pb;
}
+ if (json.nrjid) {
+ job.nextRepeatableJobId = json.nrjid;
+ }
+
return job;
}
@@ -380,7 +391,7 @@ export class Job<
this.scripts = new Scripts(this.queue);
}
- private static optsFromJSON(rawOpts?: string): JobsOptions {
+ static optsFromJSON(rawOpts?: string): JobsOptions {
const opts = JSON.parse(rawOpts || '{}');
const optionEntries = Object.entries(opts) as Array<
@@ -394,7 +405,13 @@ export class Job<
options[(optsDecodeMap as Record)[attributeName]] =
value;
} else {
- options[attributeName] = value;
+ if (attributeName === 'tm') {
+ options.telemetry = { ...options.telemetry, metadata: value };
+ } else if (attributeName === 'omc') {
+ options.telemetry = { ...options.telemetry, omitContext: value };
+ } else {
+ options[attributeName] = value;
+ }
}
}
@@ -461,7 +478,7 @@ export class Job<
id: this.id,
name: this.name,
data: JSON.stringify(typeof this.data === 'undefined' ? {} : this.data),
- opts: removeUndefinedFields(this.optsAsJSON(this.opts)),
+ opts: Job.optsAsJSON(this.opts),
parent: this.parent ? { ...this.parent } : undefined,
parentKey: this.parentKey,
progress: this.progress,
@@ -475,24 +492,38 @@ export class Job<
deduplicationId: this.deduplicationId,
repeatJobKey: this.repeatJobKey,
returnvalue: JSON.stringify(this.returnvalue),
+ nrjid: this.nextRepeatableJobId,
});
}
- private optsAsJSON(opts: JobsOptions = {}): RedisJobOptions {
+ static optsAsJSON(opts: JobsOptions = {}): RedisJobOptions {
const optionEntries = Object.entries(opts) as Array<
[keyof JobsOptions, any]
>;
- const options: Partial> = {};
- for (const item of optionEntries) {
- const [attributeName, value] = item;
- if ((optsEncodeMap as Record)[attributeName]) {
- options[(optsEncodeMap as Record)[attributeName]] =
- value;
+ const options: Record = {};
+
+ for (const [attributeName, value] of optionEntries) {
+ if (typeof value === 'undefined') {
+ continue;
+ }
+ if (attributeName in optsEncodeMap) {
+ const compressableAttribute = attributeName as keyof Omit<
+ CompressableJobOptions,
+ 'debounce' | 'telemetry'
+ >;
+
+ const key = optsEncodeMap[compressableAttribute];
+ options[key] = value;
} else {
- options[attributeName] = value;
+ // Handle complex compressable fields separately
+ if (attributeName === 'telemetry') {
+ options.tm = value.metadata;
+ options.omc = value.omitContext;
+ } else {
+ options[attributeName] = value;
+ }
}
}
-
return options as RedisJobOptions;
}
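The `telemetry` compression in `optsAsJSON`/`optsFromJSON` above can be sketched as a small round-trip: `telemetry.metadata` is stored under the short key `tm` and `telemetry.omitContext` under `omc`. The helper names below are hypothetical and the handling is simplified to just these two fields.

```typescript
// Hypothetical, simplified sketch of the telemetry option compression above.
interface TelemetryOpts {
  metadata?: string;
  omitContext?: boolean;
}

// Encode: expand `telemetry` into the short `tm` / `omc` keys.
function encodeTelemetry(opts: { telemetry?: TelemetryOpts }): Record<string, any> {
  const out: Record<string, any> = {};
  if (opts.telemetry) {
    if (opts.telemetry.metadata !== undefined) out.tm = opts.telemetry.metadata;
    if (opts.telemetry.omitContext !== undefined) out.omc = opts.telemetry.omitContext;
  }
  return out;
}

// Decode: rebuild the `telemetry` object from `tm` / `omc`.
function decodeTelemetry(raw: Record<string, any>): { telemetry?: TelemetryOpts } {
  const opts: { telemetry?: TelemetryOpts } = {};
  if ('tm' in raw) opts.telemetry = { ...opts.telemetry, metadata: raw.tm };
  if ('omc' in raw) opts.telemetry = { ...opts.telemetry, omitContext: raw.omc };
  return opts;
}
```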
@@ -685,83 +716,68 @@ export class Job<
token: string,
fetchNext = false,
): Promise {
- const client = await this.queue.client;
- const message = err?.message;
-
- this.failedReason = message;
-
- let command: string;
- const multi = client.multi();
-
- this.saveStacktrace(multi, err);
-
- //
- // Check if an automatic retry should be performed
- //
- let finishedOn: number;
- const [shouldRetry, retryDelay] = await this.shouldRetryJob(err);
- if (shouldRetry) {
- if (retryDelay) {
- const args = this.scripts.moveToDelayedArgs(
- this.id,
- Date.now(),
- token,
- retryDelay,
- );
- this.scripts.execCommand(multi, 'moveToDelayed', args);
- command = 'moveToDelayed';
- } else {
- // Retry immediately
- this.scripts.execCommand(
- multi,
- 'retryJob',
- this.scripts.retryJobArgs(this.id, this.opts.lifo, token),
- );
- command = 'retryJob';
- }
- } else {
- const args = this.scripts.moveToFailedArgs(
- this,
- message,
- this.opts.removeOnFail,
- token,
- fetchNext,
- );
-
- this.scripts.execCommand(multi, 'moveToFinished', args);
- finishedOn = args[this.scripts.moveToFinishedKeys.length + 1] as number;
- command = 'moveToFinished';
- }
+ this.failedReason = err?.message;
return this.queue.trace>(
SpanKind.INTERNAL,
- this.getSpanOperation(command),
+ this.getSpanOperation('moveToFailed'),
this.queue.name,
async (span, dstPropagationMedatadata) => {
- if (dstPropagationMedatadata) {
- this.scripts.execCommand(multi, 'updateJobOption', [
- this.toKey(this.id),
- 'tm',
- dstPropagationMedatadata,
- ]);
+ let tm;
+ if (!this.opts?.telemetry?.omitContext && dstPropagationMedatadata) {
+ tm = dstPropagationMedatadata;
}
-
- const results = await multi.exec();
- const anyError = results.find(result => result[0]);
- if (anyError) {
- throw new Error(
- `Error "moveToFailed" with command ${command}: ${anyError}`,
+ let result;
+
+ this.updateStacktrace(err);
+
+ const fieldsToUpdate = {
+ failedReason: this.failedReason,
+ stacktrace: JSON.stringify(this.stacktrace),
+ tm,
+ };
+
+ //
+ // Check if an automatic retry should be performed
+ //
+ let finishedOn: number;
+ const [shouldRetry, retryDelay] = await this.shouldRetryJob(err);
+
+ if (shouldRetry) {
+ if (retryDelay) {
+ // Retry with delay
+ result = await this.scripts.moveToDelayed(
+ this.id,
+ Date.now(),
+ retryDelay,
+ token,
+ { fieldsToUpdate },
+ );
+ } else {
+ // Retry immediately
+ result = await this.scripts.retryJob(
+ this.id,
+ this.opts.lifo,
+ token,
+ {
+ fieldsToUpdate,
+ },
+ );
+ }
+ } else {
+ const args = this.scripts.moveToFailedArgs(
+ this,
+ this.failedReason,
+ this.opts.removeOnFail,
+ token,
+ fetchNext,
+ fieldsToUpdate,
);
- }
- const result = results[results.length - 1][1] as number;
- if (result < 0) {
- throw this.scripts.finishedErrors({
- code: result,
- jobId: this.id,
- command,
- state: 'active',
- });
+ result = await this.scripts.moveToFinished(this.id, args);
+ finishedOn = args[
+ this.scripts.moveToFinishedKeys.length + 1
+ ] as number;
}
if (finishedOn && typeof finishedOn === 'number') {
@@ -774,9 +790,7 @@ export class Job<
this.attemptsMade += 1;
- if (Array.isArray(result)) {
- return raw2NextJobData(result);
- }
+ return result;
},
);
}
@@ -1133,7 +1147,7 @@ export class Job<
/**
* Moves the job to the delay set.
*
- * @param timestamp - timestamp where the job should be moved back to "wait"
+ * @param timestamp - timestamp when the job should be moved back to "wait"
* @param token - token to check job is locked by current worker
* @returns
*/
@@ -1270,7 +1284,7 @@ export class Job<
}
}
- protected saveStacktrace(multi: ChainableCommander, err: Error): void {
+ protected updateStacktrace(err: Error) {
this.stacktrace = this.stacktrace || [];
if (err?.stack) {
@@ -1281,14 +1295,6 @@ export class Job<
this.stacktrace = this.stacktrace.slice(-this.opts.stackTraceLimit);
}
}
-
- const args = this.scripts.saveStacktraceArgs(
- this.id,
- JSON.stringify(this.stacktrace),
- err?.message,
- );
-
- this.scripts.execCommand(multi, 'saveStacktrace', args);
}
}
diff --git a/src/classes/queue-base.ts b/src/classes/queue-base.ts
index fc9e738f2c..599f5bfd4d 100644
--- a/src/classes/queue-base.ts
+++ b/src/classes/queue-base.ts
@@ -1,6 +1,7 @@
import { EventEmitter } from 'events';
-import { QueueBaseOptions, RedisClient, Span, Tracer } from '../interfaces';
+import { QueueBaseOptions, RedisClient, Span } from '../interfaces';
import { MinimalQueue } from '../types';
+
import {
delay,
DELAY_TIME_5,
diff --git a/src/classes/queue.ts b/src/classes/queue.ts
index 67ecf8fb78..e93070aa01 100644
--- a/src/classes/queue.ts
+++ b/src/classes/queue.ts
@@ -3,11 +3,17 @@ import {
BaseJobOptions,
BulkJobOptions,
IoredisListener,
+ JobSchedulerJson,
QueueOptions,
RepeatableJob,
RepeatOptions,
} from '../interfaces';
-import { FinishedStatus, JobsOptions, MinimalQueue } from '../types';
+import {
+ FinishedStatus,
+ JobsOptions,
+ JobSchedulerTemplateOptions,
+ MinimalQueue,
+} from '../types';
import { Job } from './job';
import { QueueGetters } from './queue-getters';
import { Repeat } from './repeat';
@@ -306,8 +312,11 @@ export class Queue<
'add',
`${this.name}.${name}`,
async (span, srcPropagationMedatada) => {
- if (srcPropagationMedatada) {
- opts = { ...opts, telemetryMetadata: srcPropagationMedatada };
+ if (srcPropagationMedatada && !opts?.telemetry?.omitContext) {
+ const telemetry = {
+ metadata: srcPropagationMedatada,
+ };
+ opts = { ...opts, telemetry };
}
const job = await this.addJob(name, data, opts);
@@ -396,16 +405,33 @@ export class Queue<
return await this.Job.createBulk(
this as MinimalQueue,
- jobs.map(job => ({
- name: job.name,
- data: job.data,
- opts: {
- ...this.jobsOpts,
- ...job.opts,
- jobId: job.opts?.jobId,
- tm: span && srcPropagationMedatada,
- },
- })),
+ jobs.map(job => {
+ let telemetry = job.opts?.telemetry;
+ if (srcPropagationMedatada) {
+ const omitContext = job.opts?.telemetry?.omitContext;
+ const telemetryMetadata =
+ job.opts?.telemetry?.metadata ||
+ (!omitContext && srcPropagationMedatada);
+
+ if (telemetryMetadata || omitContext) {
+ telemetry = {
+ metadata: telemetryMetadata,
+ omitContext,
+ };
+ }
+ }
+
+ return {
+ name: job.name,
+ data: job.data,
+ opts: {
+ ...this.jobsOpts,
+ ...job.opts,
+ jobId: job.opts?.jobId,
+ telemetry,
+ },
+ };
+ }),
);
},
);
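The telemetry-merge pattern above recurs in `FlowProducer.addNode`, `JobScheduler.upsertJobScheduler`, and `Queue.addBulk`: explicit per-job metadata wins, otherwise the producer's propagation metadata is used unless `omitContext` is set. A hypothetical standalone helper (not part of the diff; falsy metadata is normalized to `undefined` for simplicity) captures the rule:

```typescript
// Hypothetical sketch of the repeated telemetry-merge pattern above.
interface TelemetryOpts {
  metadata?: string;
  omitContext?: boolean;
}

function mergeTelemetry(
  jobTelemetry: TelemetryOpts | undefined,
  srcPropagationMetadata: string | undefined,
): TelemetryOpts | undefined {
  if (!srcPropagationMetadata) {
    return jobTelemetry;
  }
  const omitContext = jobTelemetry?.omitContext;
  // Explicit metadata wins; otherwise inherit the propagated context
  // unless the job opted out with omitContext.
  const metadata =
    jobTelemetry?.metadata ||
    (!omitContext ? srcPropagationMetadata : undefined) ||
    undefined;
  if (metadata || omitContext) {
    return { metadata, omitContext };
  }
  return jobTelemetry;
}
```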
@@ -431,7 +457,7 @@ export class Queue<
jobTemplate?: {
name?: NameType;
data?: DataType;
- opts?: Omit;
+ opts?: JobSchedulerTemplateOptions;
},
) {
if (repeatOpts.endDate) {
@@ -570,8 +596,8 @@ export class Queue<
*
* @param id - identifier of scheduler.
*/
- async getJobScheduler(id: string): Promise {
- return (await this.jobScheduler).getJobScheduler(id);
+ async getJobScheduler(id: string): Promise> {
+ return (await this.jobScheduler).getScheduler(id);
}
/**
@@ -586,8 +612,22 @@ export class Queue<
start?: number,
end?: number,
asc?: boolean,
- ): Promise {
- return (await this.jobScheduler).getJobSchedulers(start, end, asc);
+ ): Promise[]> {
+ return (await this.jobScheduler).getJobSchedulers(
+ start,
+ end,
+ asc,
+ );
+ }
+
+ /**
+ *
+ * Get the number of job schedulers.
+ *
+ * @returns The number of job schedulers.
+ */
+ async getJobSchedulersCount(): Promise {
+ return (await this.jobScheduler).getSchedulersCount();
}
/**
diff --git a/src/classes/scripts.ts b/src/classes/scripts.ts
index 9d8d9de1d5..254abf5952 100644
--- a/src/classes/scripts.ts
+++ b/src/classes/scripts.ts
@@ -34,7 +34,7 @@ import {
RedisJobOptions,
} from '../types';
import { ErrorCode } from '../enums';
-import { array2obj, getParentKey, isVersionLowerThan } from '../utils';
+import { array2obj, getParentKey, isVersionLowerThan, objectToFlatArray } from '../utils';
import { ChainableCommander } from 'ioredis';
import { version as packageVersion } from '../version';
export type JobData = [JobJsonRaw | number, string?];
@@ -316,6 +316,8 @@ export class Scripts {
client: RedisClient,
jobSchedulerId: string,
nextMillis: number,
+ templateData: string,
+ templateOpts: RedisJobOptions,
opts: RepeatableOptions,
): Promise {
const queueKeys = this.queue.keys;
@@ -324,7 +326,15 @@ export class Scripts {
queueKeys.repeat,
queueKeys.delayed,
];
- const args = [nextMillis, pack(opts), jobSchedulerId, queueKeys['']];
+
+ const args = [
+ nextMillis,
+ pack(opts),
+ jobSchedulerId,
+ templateData,
+ pack(templateOpts),
+ queueKeys[''],
+ ];
return this.execCommand(client, 'addJobScheduler', keys.concat(args));
}
@@ -452,6 +462,23 @@ export class Scripts {
return this.execCommand(client, 'extendLock', args);
}
+ async extendLocks(
+ jobIds: string[],
+ tokens: string[],
+ duration: number,
+ ): Promise {
+ const client = await this.queue.client;
+
+ const args = [
+ this.queue.keys.stalled,
+ this.queue.toKey(''),
+ pack(tokens),
+ pack(jobIds),
+ duration,
+ ];
+ return this.execCommand(client, 'extendLocks', args);
+ }
+
async updateData(
job: MinimalJob,
data: T,
@@ -542,6 +569,7 @@ export class Scripts {
token: string,
timestamp: number,
fetchNext = true,
+ fieldsToUpdate?: Record,
): (string | number | boolean | Buffer)[] {
const queueKeys = this.queue.keys;
const opts: WorkerOptions = this.queue.opts;
@@ -576,6 +604,7 @@ export class Scripts {
? opts.metrics?.maxDataPoints
: '',
}),
+ fieldsToUpdate ? pack(objectToFlatArray(fieldsToUpdate)) : void 0,
];
return keys.concat(args);
@@ -779,6 +808,7 @@ export class Scripts {
removeOnFailed: boolean | number | KeepJobs,
token: string,
fetchNext = false,
+ fieldsToUpdate?: Record,
): (string | number | boolean | Buffer)[] {
const timestamp = Date.now();
return this.moveToFinishedArgs(
@@ -790,6 +820,7 @@ export class Scripts {
token,
timestamp,
fetchNext,
+ fieldsToUpdate,
);
}
@@ -907,9 +938,9 @@ export class Scripts {
token: string,
delay: number,
opts: MoveToDelayedOpts = {},
- ): (string | number)[] {
+ ): (string | number | Buffer)[] {
const queueKeys = this.queue.keys;
- const keys: (string | number)[] = [
+ const keys: (string | number | Buffer)[] = [
queueKeys.marker,
queueKeys.active,
queueKeys.prioritized,
@@ -927,19 +958,12 @@ export class Scripts {
token,
delay,
opts.skipAttempt ? '1' : '0',
+ opts.fieldsToUpdate
+ ? pack(objectToFlatArray(opts.fieldsToUpdate))
+ : void 0,
]);
}
- saveStacktraceArgs(
- jobId: string,
- stacktrace: string,
- failedReason: string,
- ): string[] {
- const keys: string[] = [this.queue.toKey(jobId)];
-
- return keys.concat([stacktrace, failedReason]);
- }
-
moveToWaitingChildrenArgs(
jobId: string,
token: string,
@@ -1079,12 +1103,27 @@ export class Scripts {
]);
}
+ getJobSchedulerArgs(id: string): string[] {
+ const keys: string[] = [this.queue.keys.repeat];
+
+ return keys.concat([id]);
+ }
+
+ async getJobScheduler(id: string): Promise<[any, string | null]> {
+ const client = await this.queue.client;
+
+ const args = this.getJobSchedulerArgs(id);
+
+ return this.execCommand(client, 'getJobScheduler', args);
+ }
+
retryJobArgs(
jobId: string,
lifo: boolean,
token: string,
- ): (string | number)[] {
- const keys: (string | number)[] = [
+ fieldsToUpdate?: Record,
+ ): (string | number | Buffer)[] {
+ const keys: (string | number | Buffer)[] = [
this.queue.keys.active,
this.queue.keys.wait,
this.queue.toKey(jobId),
@@ -1105,9 +1144,30 @@ export class Scripts {
pushCmd,
jobId,
token,
+ fieldsToUpdate ? pack(objectToFlatArray(fieldsToUpdate)) : void 0,
]);
}
+ async retryJob(
+ jobId: string,
+ lifo: boolean,
+ token: string,
+ fieldsToUpdate?: Record,
+ ): Promise {
+ const client = await this.queue.client;
+
+ const args = this.retryJobArgs(jobId, lifo, token, fieldsToUpdate);
+ const result = await this.execCommand(client, 'retryJob', args);
+ if (result < 0) {
+ throw this.finishedErrors({
+ code: result,
+ jobId,
+ command: 'retryJob',
+ state: 'active',
+ });
+ }
+ }
+
protected moveJobsToWaitArgs(
state: FinishedStatus | 'delayed',
count: number,
diff --git a/src/classes/worker.ts b/src/classes/worker.ts
index 8839c1ea1a..d03d07b2bd 100644
--- a/src/classes/worker.ts
+++ b/src/classes/worker.ts
@@ -191,7 +191,7 @@ export class Worker<
private waiting: Promise | null = null;
private _repeat: Repeat; // To be deprecated in v6 in favor of Job Scheduler
- private _jobScheduler: JobScheduler;
+ protected _jobScheduler: JobScheduler;
protected paused: Promise;
protected processFn: Processor;
@@ -564,7 +564,7 @@ export class Worker<
return nextJob;
},
- nextJob?.opts.telemetryMetadata,
+ nextJob?.opts?.telemetry?.metadata,
);
}
@@ -592,10 +592,10 @@ export class Worker<
this.blockUntil = await this.waiting;
if (this.blockUntil <= 0 || this.blockUntil - Date.now() < 1) {
- return this.moveToActive(client, token, this.opts.name);
+ return await this.moveToActive(client, token, this.opts.name);
}
} catch (err) {
- // Swallow error if locally paused or closing since we did force a disconnection
+ // Swallow the error when paused or closing, since in that case we forced the disconnection ourselves
if (
!(this.paused || this.closing) &&
isNotConnectionError(err)
@@ -755,7 +755,7 @@ will never work with more accuracy than 1ms. */
job.token = token;
// Add next scheduled job if necessary.
- if (job.opts.repeat) {
+ if (job.opts.repeat && !job.nextRepeatableJobId) {
// Use new job scheduler if possible
if (job.repeatJobKey) {
const jobScheduler = await this.jobScheduler;
@@ -765,7 +765,7 @@ will never work with more accuracy than 1ms. */
job.name,
job.data,
job.opts,
- { override: false },
+ { override: false, producerId: job.id },
);
} else {
const repeat = await this.repeat;
@@ -788,7 +788,7 @@ will never work with more accuracy than 1ms. */
return;
}
- const { telemetryMetadata: srcPropagationMedatada } = job.opts;
+ const srcPropagationMedatada = job.opts?.telemetry?.metadata;
return this.trace>(
SpanKind.CONSUMER,
@@ -802,6 +802,8 @@ will never work with more accuracy than 1ms. */
});
const handleCompleted = async (result: ResultType) => {
+ jobsInProgress.delete(inProgressItem);
+
if (!this.connection.closing) {
const completed = await job.moveToCompleted(
result,
@@ -822,6 +824,8 @@ will never work with more accuracy than 1ms. */
};
const handleFailed = async (err: Error) => {
+ jobsInProgress.delete(inProgressItem);
+
if (!this.connection.closing) {
try {
// Check if the job was manually rate-limited
@@ -878,8 +882,6 @@ will never work with more accuracy than 1ms. */
[TelemetryAttributes.JobFinishedTimestamp]: Date.now(),
[TelemetryAttributes.JobProcessedTimestamp]: processedOn,
});
-
- jobsInProgress.delete(inProgressItem);
}
},
srcPropagationMedatada,
@@ -1009,7 +1011,6 @@ will never work with more accuracy than 1ms. */
}
clearTimeout(this.extendLocksTimer);
- //clearTimeout(this.stalledCheckTimer);
this.stalledCheckStopper?.();
this.closed = true;
@@ -1166,26 +1167,20 @@ will never work with more accuracy than 1ms. */
});
try {
- const pipeline = (await this.client).pipeline();
- for (const job of jobs) {
- await this.scripts.extendLock(
- job.id,
- job.token,
- this.opts.lockDuration,
- pipeline,
+ const erroredJobIds = await this.scripts.extendLocks(
+ jobs.map(job => job.id),
+ jobs.map(job => job.token),
+ this.opts.lockDuration,
+ );
+
+ for (const jobId of erroredJobIds) {
+ // TODO: Send signal to process function that the job has been lost.
+
+ this.emit(
+ 'error',
+ new Error(`could not renew lock for job ${jobId}`),
);
}
- const result = (await pipeline.exec()) as [Error, string][];
-
- for (const [err, jobId] of result) {
- if (err) {
- // TODO: signal process function that the job has been lost.
- this.emit(
- 'error',
- new Error(`could not renew lock for job ${jobId}`),
- );
- }
- }
} catch (err) {
this.emit('error', err);
}
@@ -1216,6 +1211,7 @@ will never work with more accuracy than 1ms. */
this.emit('stalled', jobId, 'active');
});
+ // TODO: check if there are any listeners on the failed event
const jobPromises: Promise>[] = [];
for (let i = 0; i < failed.length; i++) {
jobPromises.push(
diff --git a/src/commands/addJobScheduler-2.lua b/src/commands/addJobScheduler-2.lua
index 687776f288..454402b283 100644
--- a/src/commands/addJobScheduler-2.lua
+++ b/src/commands/addJobScheduler-2.lua
@@ -13,7 +13,9 @@
[4] endDate?
[5] every?
ARGV[3] jobs scheduler id
- ARGV[4] prefix key
+ ARGV[4] Json stringified template data
+ ARGV[5] mspacked template opts
+ ARGV[6] prefix key
Output:
repeatableKey - OK
@@ -24,13 +26,14 @@ local delayedKey = KEYS[2]
local nextMillis = ARGV[1]
local jobSchedulerId = ARGV[3]
-local prefixKey = ARGV[4]
+local templateOpts = cmsgpack.unpack(ARGV[5])
+local prefixKey = ARGV[6]
-- Includes
--- @include "includes/removeJob"
-local function storeRepeatableJob(repeatKey, nextMillis, rawOpts)
- rcall("ZADD", repeatKey, nextMillis, jobSchedulerId)
+local function storeRepeatableJob(schedulerId, repeatKey, nextMillis, rawOpts, templateData, templateOpts)
+ rcall("ZADD", repeatKey, nextMillis, schedulerId)
local opts = cmsgpack.unpack(rawOpts)
local optionalValues = {}
@@ -54,7 +57,18 @@ local function storeRepeatableJob(repeatKey, nextMillis, rawOpts)
table.insert(optionalValues, opts['every'])
end
- rcall("HMSET", repeatKey .. ":" .. jobSchedulerId, "name", opts['name'],
+ local jsonTemplateOpts = cjson.encode(templateOpts)
+ if jsonTemplateOpts and jsonTemplateOpts ~= '{}' then
+ table.insert(optionalValues, "opts")
+ table.insert(optionalValues, jsonTemplateOpts)
+ end
+
+ if templateData and templateData ~= '{}' then
+ table.insert(optionalValues, "data")
+ table.insert(optionalValues, templateData)
+ end
+
+ rcall("HMSET", repeatKey .. ":" .. schedulerId, "name", opts['name'],
unpack(optionalValues))
end
@@ -63,13 +77,15 @@ end
local prevMillis = rcall("ZSCORE", repeatKey, jobSchedulerId)
if prevMillis ~= false then
local delayedJobId = "repeat:" .. jobSchedulerId .. ":" .. prevMillis
- local nextDelayedJobId = repeatKey .. ":" .. jobSchedulerId .. ":" .. nextMillis
+ local nextDelayedJobId = "repeat:" .. jobSchedulerId .. ":" .. nextMillis
+ local nextDelayedJobKey = repeatKey .. ":" .. jobSchedulerId .. ":" .. nextMillis
if rcall("ZSCORE", delayedKey, delayedJobId) ~= false
- and rcall("EXISTS", nextDelayedJobId) ~= 1 then
+ and (rcall("EXISTS", nextDelayedJobKey) ~= 1
+ or delayedJobId == nextDelayedJobId) then
removeJob(delayedJobId, true, prefixKey, true --[[remove deduplication key]])
rcall("ZREM", delayedKey, delayedJobId)
end
end
-return storeRepeatableJob(repeatKey, nextMillis, ARGV[2])
+return storeRepeatableJob(jobSchedulerId, repeatKey, nextMillis, ARGV[2], ARGV[4], templateOpts)
diff --git a/src/commands/addParentJob-4.lua b/src/commands/addParentJob-4.lua
index 0840037e5f..bde421fca8 100644
--- a/src/commands/addParentJob-4.lua
+++ b/src/commands/addParentJob-4.lua
@@ -3,13 +3,13 @@
- Increases the job counter if needed.
- Creates a new job key with the job data.
- adds the job to the waiting-children zset
-
+
Input:
KEYS[1] 'meta'
KEYS[2] 'id'
KEYS[3] 'completed'
KEYS[4] events stream key
-
+
ARGV[1] msgpacked arguments array
[1] key prefix,
[2] custom id (will not generate one automatically)
@@ -21,7 +21,7 @@
[8] parent? {id, queueKey}
[9] repeat job key
[10] deduplication key
-
+
ARGV[2] Json stringified job data
ARGV[3] msgpacked options
diff --git a/src/commands/extendLocks-1.lua b/src/commands/extendLocks-1.lua
new file mode 100644
index 0000000000..2756cfe2c6
--- /dev/null
+++ b/src/commands/extendLocks-1.lua
@@ -0,0 +1,44 @@
+--[[
+ Extend locks for multiple jobs and remove them from the stalled set if successful.
+ Return the list of job IDs for which the operation failed.
+
+ KEYS[1] = stalledKey
+
+ ARGV[1] = baseKey
+ ARGV[2] = tokens
+ ARGV[3] = jobIds
+ ARGV[4] = lockDuration (ms)
+
+ Output:
+ An array of failed job IDs. If empty, all succeeded.
+]]
+local rcall = redis.call
+
+local stalledKey = KEYS[1]
+local baseKey = ARGV[1]
+local tokens = cmsgpack.unpack(ARGV[2])
+local jobIds = cmsgpack.unpack(ARGV[3])
+local lockDuration = ARGV[4]
+
+local jobCount = #jobIds
+local failedJobs = {}
+
+for i = 1, jobCount, 1 do
+ local lockKey = baseKey .. jobIds[i] .. ':lock'
+ local jobId = jobIds[i]
+ local token = tokens[i]
+
+ local currentToken = rcall("GET", lockKey)
+ if currentToken == token then
+ local setResult = rcall("SET", lockKey, token, "PX", lockDuration)
+ if setResult then
+ rcall("SREM", stalledKey, jobId)
+ else
+ table.insert(failedJobs, jobId)
+ end
+ else
+ table.insert(failedJobs, jobId)
+ end
+end
+
+return failedJobs
diff --git a/src/commands/getJobScheduler-1.lua b/src/commands/getJobScheduler-1.lua
new file mode 100644
index 0000000000..324bdb58eb
--- /dev/null
+++ b/src/commands/getJobScheduler-1.lua
@@ -0,0 +1,19 @@
+--[[
+ Get job scheduler record.
+
+ Input:
+ KEYS[1] 'repeat' key
+
+ ARGV[1] id
+]]
+
+local rcall = redis.call
+local jobSchedulerKey = KEYS[1] .. ":" .. ARGV[1]
+
+local score = rcall("ZSCORE", KEYS[1], ARGV[1])
+
+if score then
+ return {rcall("HGETALL", jobSchedulerKey), score} -- get job data
+end
+
+return {nil, nil}
diff --git a/src/commands/includes/moveParentFromWaitingChildrenToFailed.lua b/src/commands/includes/moveParentFromWaitingChildrenToFailed.lua
index bf54b8ce68..4dae58c765 100644
--- a/src/commands/includes/moveParentFromWaitingChildrenToFailed.lua
+++ b/src/commands/includes/moveParentFromWaitingChildrenToFailed.lua
@@ -5,19 +5,27 @@
-- Includes
--- @include "removeDeduplicationKeyIfNeeded"
+--- @include "removeJobsOnFail"
local function moveParentFromWaitingChildrenToFailed( parentQueueKey, parentKey, parentId, jobIdKey, timestamp)
if rcall("ZREM", parentQueueKey .. ":waiting-children", parentId) == 1 then
- rcall("ZADD", parentQueueKey .. ":failed", timestamp, parentId)
+ local parentQueuePrefix = parentQueueKey .. ":"
+ local parentFailedKey = parentQueueKey .. ":failed"
+ rcall("ZADD", parentFailedKey, timestamp, parentId)
local failedReason = "child " .. jobIdKey .. " failed"
rcall("HMSET", parentKey, "failedReason", failedReason, "finishedOn", timestamp)
rcall("XADD", parentQueueKey .. ":events", "*", "event", "failed", "jobId", parentId, "failedReason",
failedReason, "prev", "waiting-children")
- local jobAttributes = rcall("HMGET", parentKey, "parent", "deid")
+ local jobAttributes = rcall("HMGET", parentKey, "parent", "deid", "opts")
removeDeduplicationKeyIfNeeded(parentQueueKey .. ":", jobAttributes[2])
+ local parentRawOpts = jobAttributes[3]
+ local parentOpts = cjson.decode(parentRawOpts)
+
+ removeJobsOnFail(parentQueuePrefix, parentFailedKey, parentId, parentOpts, timestamp)
+
if jobAttributes[1] then
return cjson.decode(jobAttributes[1]), failedReason
end
diff --git a/src/commands/includes/removeJobsOnFail.lua b/src/commands/includes/removeJobsOnFail.lua
new file mode 100644
index 0000000000..a7fa14aad9
--- /dev/null
+++ b/src/commands/includes/removeJobsOnFail.lua
@@ -0,0 +1,36 @@
+--[[
+ Functions to remove jobs when removeOnFail option is provided.
+]]
+
+-- Includes
+--- @include "removeJob"
+--- @include "removeJobsByMaxAge"
+--- @include "removeJobsByMaxCount"
+
+local function removeJobsOnFail(queueKeyPrefix, failedKey, jobId, opts, timestamp)
+ local removeOnFailType = type(opts["removeOnFail"])
+ if removeOnFailType == "number" then
+ removeJobsByMaxCount(opts["removeOnFail"],
+ failedKey, queueKeyPrefix)
+ elseif removeOnFailType == "boolean" then
+ if opts["removeOnFail"] then
+ removeJob(jobId, false, queueKeyPrefix,
+                false --[[remove deduplication key]])
+ rcall("ZREM", failedKey, jobId)
+ end
+ elseif removeOnFailType ~= "nil" then
+ local maxAge = opts["removeOnFail"]["age"]
+ local maxCount = opts["removeOnFail"]["count"]
+
+ if maxAge ~= nil then
+ removeJobsByMaxAge(timestamp, maxAge,
+ failedKey, queueKeyPrefix)
+ end
+
+ if maxCount ~= nil and maxCount > 0 then
+ removeJobsByMaxCount(maxCount, failedKey,
+ queueKeyPrefix)
+ end
+ end
+end
+
\ No newline at end of file
diff --git a/src/commands/includes/updateJobFields.lua b/src/commands/includes/updateJobFields.lua
new file mode 100644
index 0000000000..d2c5f2944d
--- /dev/null
+++ b/src/commands/includes/updateJobFields.lua
@@ -0,0 +1,11 @@
+--[[
+  Function to update multiple fields in a job.
+]]
+local function updateJobFields(jobKey, msgpackedFields)
+ if msgpackedFields and #msgpackedFields > 0 then
+ local fieldsToUpdate = cmsgpack.unpack(msgpackedFields)
+ if fieldsToUpdate then
+ redis.call("HMSET", jobKey, unpack(fieldsToUpdate))
+ end
+ end
+end
diff --git a/src/commands/moveStalledJobsToWait-8.lua b/src/commands/moveStalledJobsToWait-8.lua
index c6e82aa03a..218e33da0e 100644
--- a/src/commands/moveStalledJobsToWait-8.lua
+++ b/src/commands/moveStalledJobsToWait-8.lua
@@ -28,9 +28,7 @@ local rcall = redis.call
--- @include "includes/isQueuePausedOrMaxed"
--- @include "includes/moveParentIfNeeded"
--- @include "includes/removeDeduplicationKeyIfNeeded"
---- @include "includes/removeJob"
---- @include "includes/removeJobsByMaxAge"
---- @include "includes/removeJobsByMaxCount"
+--- @include "includes/removeJobsOnFail"
--- @include "includes/trimEvents"
local stalledKey = KEYS[1]
@@ -96,29 +94,7 @@ if (#stalling > 0) then
failedReason, timestamp)
end
- if removeOnFailType == "number" then
- removeJobsByMaxCount(opts["removeOnFail"],
- failedKey, queueKeyPrefix)
- elseif removeOnFailType == "boolean" then
- if opts["removeOnFail"] then
- removeJob(jobId, false, queueKeyPrefix,
- false --[[remove deduplication key]])
- rcall("ZREM", failedKey, jobId)
- end
- elseif removeOnFailType ~= "nil" then
- local maxAge = opts["removeOnFail"]["age"]
- local maxCount = opts["removeOnFail"]["count"]
-
- if maxAge ~= nil then
- removeJobsByMaxAge(timestamp, maxAge,
- failedKey, queueKeyPrefix)
- end
-
- if maxCount ~= nil and maxCount > 0 then
- removeJobsByMaxCount(maxCount, failedKey,
- queueKeyPrefix)
- end
- end
+ removeJobsOnFail(queueKeyPrefix, failedKey, jobId, opts, timestamp)
table.insert(failed, jobId)
else
diff --git a/src/commands/moveToDelayed-8.lua b/src/commands/moveToDelayed-8.lua
index e48d3f235d..3237093b1a 100644
--- a/src/commands/moveToDelayed-8.lua
+++ b/src/commands/moveToDelayed-8.lua
@@ -17,6 +17,7 @@
ARGV[4] queue token
ARGV[5] delay value
ARGV[6] skip attempt
+ ARGV[7] optional job fields to update
Output:
0 - OK
@@ -33,6 +34,7 @@ local rcall = redis.call
--- @include "includes/getDelayedScore"
--- @include "includes/getOrSetMaxEvents"
--- @include "includes/removeLock"
+--- @include "includes/updateJobFields"
local jobKey = KEYS[5]
local metaKey = KEYS[7]
@@ -43,6 +45,8 @@ if rcall("EXISTS", jobKey) == 1 then
return errorCode
end
+ updateJobFields(jobKey, ARGV[7])
+
local delayedKey = KEYS[4]
local jobId = ARGV[3]
local delay = tonumber(ARGV[5])
diff --git a/src/commands/moveToFinished-13.lua b/src/commands/moveToFinished-13.lua
index 2694136f09..652476611c 100644
--- a/src/commands/moveToFinished-13.lua
+++ b/src/commands/moveToFinished-13.lua
@@ -31,6 +31,7 @@
ARGV[6] fetch next?
ARGV[7] keys prefix
ARGV[8] opts
+ ARGV[9] job fields to update
opts - token - lock token
opts - keepJobs
@@ -68,6 +69,7 @@ local rcall = redis.call
--- @include "includes/removeParentDependencyKey"
--- @include "includes/trimEvents"
--- @include "includes/updateParentDepsIfNeeded"
+--- @include "includes/updateJobFields"
local jobIdKey = KEYS[11]
if rcall("EXISTS", jobIdKey) == 1 then -- // Make sure job exists
@@ -80,6 +82,8 @@ if rcall("EXISTS", jobIdKey) == 1 then -- // Make sure job exists
return errorCode
end
+    updateJobFields(jobIdKey, ARGV[9])
+
local attempts = opts['attempts']
local maxMetricsSize = opts['maxMetricsSize']
local maxCount = opts['keepJobs']['count']
diff --git a/src/commands/saveStacktrace-1.lua b/src/commands/saveStacktrace-1.lua
index 216e295a59..52c91b6091 100644
--- a/src/commands/saveStacktrace-1.lua
+++ b/src/commands/saveStacktrace-1.lua
@@ -1,12 +1,9 @@
--[[
Save stacktrace and failedReason.
-
Input:
KEYS[1] job key
-
ARGV[1] stacktrace
ARGV[2] failedReason
-
Output:
0 - OK
-1 - Missing key
diff --git a/src/commands/updateJobOption-1.lua b/src/commands/updateJobOption-1.lua
deleted file mode 100644
index 03949faf29..0000000000
--- a/src/commands/updateJobOption-1.lua
+++ /dev/null
@@ -1,26 +0,0 @@
---[[
- Update a job option
-
- Input:
- KEYS[1] Job id key
-
- ARGV[1] field
- ARGV[2] value
-
- Output:
- 0 - OK
- -1 - Missing job.
-]]
-local rcall = redis.call
-
-if rcall("EXISTS", KEYS[1]) == 1 then -- // Make sure job exists
-
- local opts = rcall("HGET", KEYS[1], "opts")
- local jsonOpts = cjson.decode(opts)
- jsonOpts[ARGV[1]] = ARGV[2]
-
- rcall("HSET", KEYS[1], "opts", cjson.encode(jsonOpts))
- return 0
-else
- return -1
-end
diff --git a/src/interfaces/base-job-options.ts b/src/interfaces/base-job-options.ts
index bb10f1caa3..7ad62439cc 100644
--- a/src/interfaces/base-job-options.ts
+++ b/src/interfaces/base-job-options.ts
@@ -1,4 +1,6 @@
-import { RepeatOptions, KeepJobs, BackoffOptions } from './';
+import { BackoffOptions } from './backoff-options';
+import { KeepJobs } from './keep-jobs';
+import { RepeatOptions } from './repeat-options';
export interface DefaultJobOptions {
/**
@@ -112,9 +114,4 @@ export interface BaseJobOptions extends DefaultJobOptions {
* Internal property used by repeatable jobs.
*/
prevMillis?: number;
-
- /**
- * TelemetryMetadata, provide for context propagation.
- */
- telemetryMetadata?: string;
}
diff --git a/src/interfaces/index.ts b/src/interfaces/index.ts
index 9c4e9c4f2a..3497bb32f8 100644
--- a/src/interfaces/index.ts
+++ b/src/interfaces/index.ts
@@ -7,6 +7,7 @@ export * from './deduplication-options';
export * from './flow-job';
export * from './ioredis-events';
export * from './job-json';
+export * from './job-scheduler-json';
export * from './keep-jobs';
export * from './metrics-options';
export * from './metrics';
diff --git a/src/interfaces/job-json.ts b/src/interfaces/job-json.ts
index e446a8f822..8045a2a09d 100644
--- a/src/interfaces/job-json.ts
+++ b/src/interfaces/job-json.ts
@@ -18,6 +18,7 @@ export interface JobJson {
parent?: ParentKeys;
parentKey?: string;
repeatJobKey?: string;
+ nextRepeatableJobKey?: string;
deduplicationId?: string;
processedBy?: string;
}
@@ -40,6 +41,7 @@ export interface JobJsonRaw {
parent?: string;
deid?: string;
rjk?: string;
+ nrjid?: string;
atm?: string;
ats?: string;
pb?: string; // Worker name
diff --git a/src/interfaces/job-scheduler-json.ts b/src/interfaces/job-scheduler-json.ts
new file mode 100644
index 0000000000..150d08873e
--- /dev/null
+++ b/src/interfaces/job-scheduler-json.ts
@@ -0,0 +1,18 @@
+import { JobSchedulerTemplateOptions } from '../types';
+
+export interface JobSchedulerTemplateJson<D = any> {
+  data?: D;
+  opts?: JobSchedulerTemplateOptions;
+}
+
+export interface JobSchedulerJson<D = any> {
+  key: string; // key is actually the job scheduler id
+  name: string;
+  id?: string | null;
+  endDate: number | null;
+  tz: string | null;
+  pattern: string | null;
+  every?: string | null;
+  next?: number;
+  template?: JobSchedulerTemplateJson<D>;
+}
diff --git a/src/interfaces/minimal-job.ts b/src/interfaces/minimal-job.ts
index 60d4eb494a..63bf2feeb9 100644
--- a/src/interfaces/minimal-job.ts
+++ b/src/interfaces/minimal-job.ts
@@ -6,6 +6,7 @@ export type BulkJobOptions = Omit;
export interface MoveToDelayedOpts {
skipAttempt?: boolean;
+  fieldsToUpdate?: Record<string, any>;
}
export interface MoveToWaitingChildrenOpts {
diff --git a/src/interfaces/parent.ts b/src/interfaces/parent.ts
index 8dd1fc0a92..ea00482321 100644
--- a/src/interfaces/parent.ts
+++ b/src/interfaces/parent.ts
@@ -1,4 +1,5 @@
import { JobsOptions } from '../types';
+
/**
* Describes the parent for a Job.
*/
diff --git a/src/interfaces/repeat-options.ts b/src/interfaces/repeat-options.ts
index 6c47311d90..65c5da3d4d 100644
--- a/src/interfaces/repeat-options.ts
+++ b/src/interfaces/repeat-options.ts
@@ -32,9 +32,7 @@ export interface RepeatOptions extends Omit {
/**
* Repeated job should start right now
- * (work only with every settings)
- *
- * @deprecated
+   * (works only with cron settings)
*/
immediately?: boolean;
@@ -44,16 +42,15 @@ export interface RepeatOptions extends Omit {
count?: number;
/**
- * Internal property to store the previous time the job was executed.
- */
- prevMillis?: number;
+ * Offset in milliseconds to affect the next iteration time
+ *
+   */
+ offset?: number;
/**
- * Internal property to store the offset to apply to the next iteration.
- *
- * @deprecated
+ * Internal property to store the previous time the job was executed.
*/
- offset?: number;
+ prevMillis?: number;
/**
* Internal property to store de job id
diff --git a/src/interfaces/telemetry.ts b/src/interfaces/telemetry.ts
index e55c0990dc..d39794d34a 100644
--- a/src/interfaces/telemetry.ts
+++ b/src/interfaces/telemetry.ts
@@ -36,8 +36,8 @@ export interface ContextManager {
/**
* Creates a new context and sets it as active for the fn passed as last argument
*
- * @param context
- * @param fn
+ * @param context -
+ * @param fn -
*/
  with<A extends (...args: any[]) => any>(
context: Context,
@@ -54,7 +54,7 @@ export interface ContextManager {
* is the mechanism used to propagate the context across a distributed
* application.
*
- * @param context
+ * @param context -
*/
getMetadata(context: Context): string;
@@ -62,8 +62,8 @@ export interface ContextManager {
* Creates a new context from a serialized version effectively
* linking the new context to the parent context.
*
- * @param activeContext
- * @param metadata
+ * @param activeContext -
+ * @param metadata -
*/
fromMetadata(activeContext: Context, metadata: string): Context;
}
@@ -78,9 +78,9 @@ export interface Tracer {
* context. If the context is not provided, the current active context should be
* used.
*
- * @param name
- * @param options
- * @param context
+ * @param name -
+ * @param options -
+ * @param context -
*/
startSpan(name: string, options?: SpanOptions, context?: Context): Span;
}
diff --git a/src/types/index.ts b/src/types/index.ts
index 7d30742ac9..c2ec6b49e8 100644
--- a/src/types/index.ts
+++ b/src/types/index.ts
@@ -3,5 +3,6 @@ export * from './finished-status';
export * from './minimal-queue';
export * from './job-json-sandbox';
export * from './job-options';
+export * from './job-scheduler-template-options';
export * from './job-type';
export * from './repeat-strategy';
diff --git a/src/types/job-json-sandbox.ts b/src/types/job-json-sandbox.ts
index c1a96d96a6..3181fbc3f8 100644
--- a/src/types/job-json-sandbox.ts
+++ b/src/types/job-json-sandbox.ts
@@ -1,4 +1,4 @@
-import { JobJson, ParentKeys } from '../interfaces';
+import { JobJson } from '../interfaces';
export type JobJsonSandbox = JobJson & {
queueName: string;
diff --git a/src/types/job-options.ts b/src/types/job-options.ts
index f81f92255c..b3e662f6eb 100644
--- a/src/types/job-options.ts
+++ b/src/types/job-options.ts
@@ -1,6 +1,10 @@
import { BaseJobOptions, DeduplicationOptions } from '../interfaces';
-export type JobsOptions = BaseJobOptions & {
+/**
+ * These options will be stored in Redis with smaller
+ * keys for compactness.
+ */
+export type CompressableJobOptions = {
/**
* Deduplication options.
*/
@@ -11,8 +15,26 @@ export type JobsOptions = BaseJobOptions & {
* @defaultValue fail
*/
onChildFailure?: 'fail' | 'ignore' | 'remove' | 'wait';
+
+ /**
+ * Telemetry options
+ */
+ telemetry?: {
+ /**
+ * Metadata, used for context propagation.
+ */
+ metadata?: string;
+
+ /**
+ * If `true` telemetry will omit the context propagation
+ * @default false
+ */
+ omitContext?: boolean;
+ };
};
+export type JobsOptions = BaseJobOptions & CompressableJobOptions;
+
/**
* These fields are the ones stored in Redis with smaller keys for compactness.
*/
@@ -36,4 +58,15 @@ export type RedisJobOptions = BaseJobOptions & {
* TelemetryMetadata, provide for context propagation.
*/
tm?: string;
+
+ /**
+ * Omit Context Propagation
+ */
+ omc?: boolean;
+
+ /**
+ * Deduplication identifier.
+ * @deprecated use deid
+ */
+ de?: string;
};
diff --git a/src/types/job-scheduler-template-options.ts b/src/types/job-scheduler-template-options.ts
new file mode 100644
index 0000000000..57bbfd6b31
--- /dev/null
+++ b/src/types/job-scheduler-template-options.ts
@@ -0,0 +1,6 @@
+import { JobsOptions } from './job-options';
+
+export type JobSchedulerTemplateOptions = Omit<
+ JobsOptions,
+ 'jobId' | 'repeat' | 'delay' | 'deduplication' | 'debounce'
+>;
diff --git a/src/utils.ts b/src/utils.ts
index aaf998a37c..b31d945d5e 100644
--- a/src/utils.ts
+++ b/src/utils.ts
@@ -59,6 +59,20 @@ export function array2obj(arr: string[]): Record {
return obj;
}
+export function objectToFlatArray(obj: Record<string, any>): string[] {
+ const arr = [];
+ for (const key in obj) {
+ if (
+ Object.prototype.hasOwnProperty.call(obj, key) &&
+ obj[key] !== undefined
+ ) {
+ arr[arr.length] = key;
+ arr[arr.length] = obj[key];
+ }
+ }
+ return arr;
+}
+
export function delay(
ms: number,
abortController?: AbortController,
@@ -83,16 +97,21 @@ export function increaseMaxListeners(
emitter.setMaxListeners(maxListeners + count);
}
-export const invertObject = (obj: Record<string, string>) => {
-  return Object.entries(obj).reduce<Record<string, string>>(
- (encodeMap, [key, value]) => {
- encodeMap[value] = key;
- return encodeMap;
- },
- {},
- );
+type Invert<T extends Record<PropertyKey, PropertyKey>> = {
+  [V in T[keyof T]]: {
+    [K in keyof T]: T[K] extends V ? K : never;
+  }[keyof T];
};
+export function invertObject<T extends Record<PropertyKey, PropertyKey>>(
+  obj: T,
+): Invert<T> {
+  return Object.entries(obj).reduce((result, [key, value]) => {
+    (result as Record<PropertyKey, PropertyKey>)[value] = key;
+    return result;
+  }, {} as Invert<T>);
+}
+
export function isRedisInstance(obj: any): obj is Redis | Cluster {
if (!obj) {
return false;
diff --git a/tests/test_bulk.ts b/tests/test_bulk.ts
index 1d6b8eb4ec..a3729885bc 100644
--- a/tests/test_bulk.ts
+++ b/tests/test_bulk.ts
@@ -85,7 +85,7 @@ describe('bulk jobs', () => {
data: { idx: 0, foo: 'bar' },
opts: {
parent: {
- id: parent.id,
+ id: parent.id!,
queue: `${prefix}:${parentQueueName}`,
},
},
@@ -95,7 +95,7 @@ describe('bulk jobs', () => {
data: { idx: 1, foo: 'baz' },
opts: {
parent: {
- id: parent.id,
+ id: parent.id!,
queue: `${prefix}:${parentQueueName}`,
},
},
@@ -122,22 +122,20 @@ describe('bulk jobs', () => {
it('should keep workers busy', async () => {
const numJobs = 6;
- const queue2 = new Queue(queueName, { connection, markerCount: 2, prefix });
-
const queueEvents = new QueueEvents(queueName, { connection, prefix });
await queueEvents.waitUntilReady();
const worker = new Worker(
queueName,
async () => {
- await delay(1000);
+ await delay(900);
},
{ connection, prefix },
);
const worker2 = new Worker(
queueName,
async () => {
- await delay(1000);
+ await delay(900);
},
{ connection, prefix },
);
@@ -153,10 +151,9 @@ describe('bulk jobs', () => {
data: { index },
}));
- await queue2.addBulk(jobs);
+ await queue.addBulk(jobs);
await completed;
- await queue2.close();
await worker.close();
await worker2.close();
await queueEvents.close();
diff --git a/tests/test_flow.ts b/tests/test_flow.ts
index 3669c00e54..cd6f866916 100644
--- a/tests/test_flow.ts
+++ b/tests/test_flow.ts
@@ -2313,7 +2313,142 @@ describe('flows', () => {
await removeAllQueueData(new IORedis(redisHost), parentQueueName);
await removeAllQueueData(new IORedis(redisHost), grandChildrenQueueName);
- }).timeout(8000);
+ });
+
+ describe('when removeOnFail option is provided', async () => {
+ it('should remove parent when child is moved to failed', async () => {
+ const name = 'child-job';
+
+ const parentQueueName = `parent-queue-${v4()}`;
+ const grandChildrenQueueName = `grand-children-queue-${v4()}`;
+
+ const parentQueue = new Queue(parentQueueName, {
+ connection,
+ prefix,
+ });
+ const grandChildrenQueue = new Queue(grandChildrenQueueName, {
+ connection,
+ prefix,
+ });
+ const queueEvents = new QueueEvents(parentQueueName, {
+ connection,
+ prefix,
+ });
+ await queueEvents.waitUntilReady();
+
+ let grandChildrenProcessor,
+ processedGrandChildren = 0;
+      const processingChildren = new Promise<void>(resolve => {
+ grandChildrenProcessor = async () => {
+ processedGrandChildren++;
+
+ if (processedGrandChildren === 2) {
+ return resolve();
+ }
+
+ await delay(200);
+
+ throw new Error('failed');
+ };
+ });
+
+ const grandChildrenWorker = new Worker(
+ grandChildrenQueueName,
+ grandChildrenProcessor,
+ { connection, prefix },
+ );
+
+ await grandChildrenWorker.waitUntilReady();
+
+ const flow = new FlowProducer({ connection, prefix });
+ const tree = await flow.add({
+ name: 'parent-job',
+ queueName: parentQueueName,
+ data: {},
+ children: [
+ {
+ name,
+ data: { foo: 'bar' },
+ queueName,
+ },
+ {
+ name,
+ data: { foo: 'qux' },
+ queueName,
+ opts: { failParentOnFailure: true, removeOnFail: true },
+ children: [
+ {
+ name,
+ data: { foo: 'bar' },
+ queueName: grandChildrenQueueName,
+ opts: { failParentOnFailure: true },
+ },
+ {
+ name,
+ data: { foo: 'baz' },
+ queueName: grandChildrenQueueName,
+ },
+ ],
+ },
+ ],
+ });
+
+      const failed = new Promise<void>(resolve => {
+ queueEvents.on('failed', async ({ jobId, failedReason, prev }) => {
+ if (jobId === tree.job.id) {
+ expect(prev).to.be.equal('waiting-children');
+ expect(failedReason).to.be.equal(
+ `child ${prefix}:${queueName}:${tree.children[1].job.id} failed`,
+ );
+ resolve();
+ }
+ });
+ });
+
+ expect(tree).to.have.property('job');
+ expect(tree).to.have.property('children');
+
+ const { children, job } = tree;
+ const parentState = await job.getState();
+
+ expect(parentState).to.be.eql('waiting-children');
+
+ await processingChildren;
+ await failed;
+
+ const { children: grandChildren } = children[1];
+ const updatedGrandchildJob = await grandChildrenQueue.getJob(
+ grandChildren[0].job.id,
+ );
+ const grandChildState = await updatedGrandchildJob.getState();
+
+ expect(grandChildState).to.be.eql('failed');
+ expect(updatedGrandchildJob.failedReason).to.be.eql('failed');
+
+ const updatedParentJob = await queue.getJob(children[1].job.id);
+ expect(updatedParentJob).to.be.undefined;
+
+ const updatedGrandparentJob = await parentQueue.getJob(job.id);
+ const updatedGrandparentState = await updatedGrandparentJob.getState();
+
+ expect(updatedGrandparentState).to.be.eql('failed');
+ expect(updatedGrandparentJob.failedReason).to.be.eql(
+ `child ${prefix}:${queueName}:${children[1].job.id} failed`,
+ );
+
+ await parentQueue.close();
+ await grandChildrenQueue.close();
+ await grandChildrenWorker.close();
+ await flow.close();
+ await queueEvents.close();
+
+ await removeAllQueueData(new IORedis(redisHost), parentQueueName);
+ await removeAllQueueData(
+ new IORedis(redisHost),
+ grandChildrenQueueName,
+ );
+ });
+ });
describe('when onChildFailure is provided as remove', async () => {
it('moves parent to wait after children fail', async () => {
diff --git a/tests/test_job_scheduler.ts b/tests/test_job_scheduler.ts
index f0f8ad5e4d..c2da779a83 100644
--- a/tests/test_job_scheduler.ts
+++ b/tests/test_job_scheduler.ts
@@ -40,7 +40,7 @@ describe('Job Scheduler', function () {
});
beforeEach(async function () {
- this.clock = sinon.useFakeTimers();
+ this.clock = sinon.useFakeTimers({ shouldClearNativeTimers: true });
queueName = `test-${v4()}`;
queue = new Queue(queueName, { connection, prefix });
repeat = new Repeat(queueName, { connection, prefix });
@@ -183,11 +183,17 @@ describe('Job Scheduler', function () {
const delayed = await queue.getDelayed();
expect(delayed).to.have.length(3);
+
+ const jobSchedulersCount = await queue.getJobSchedulersCount();
+ expect(jobSchedulersCount).to.be.eql(3);
});
});
describe('when job schedulers have same id and different every pattern', function () {
it('should create only one job scheduler', async function () {
+ const date = new Date('2017-02-07 9:24:00');
+ this.clock.setSystemTime(date);
+
await Promise.all([
queue.upsertJobScheduler('test-scheduler1', { every: 1000 }),
queue.upsertJobScheduler('test-scheduler1', { every: 2000 }),
@@ -241,6 +247,9 @@ describe('Job Scheduler', function () {
});
it('should create job schedulers with different cron patterns', async function () {
+ const date = new Date('2017-02-07T15:24:00.000Z');
+ this.clock.setSystemTime(date);
+
const crons = [
'10 * * * * *',
'2 10 * * * *',
@@ -251,11 +260,11 @@ describe('Job Scheduler', function () {
await Promise.all([
queue.upsertJobScheduler('first', {
pattern: crons[0],
- endDate: 12345,
+ endDate: Date.now() + 12345,
}),
queue.upsertJobScheduler('second', {
pattern: crons[1],
- endDate: 610000,
+ endDate: Date.now() + 6100000,
}),
queue.upsertJobScheduler('third', {
pattern: crons[2],
@@ -270,9 +279,13 @@ describe('Job Scheduler', function () {
tz: 'Europa/Copenhaguen',
}),
]);
+
const count = await repeat.getRepeatableCount();
expect(count).to.be.eql(5);
+ const delayedCount = await queue.getDelayedCount();
+ expect(delayedCount).to.be.eql(5);
+
const jobs = await repeat.getRepeatableJobs(0, -1, true);
expect(jobs)
@@ -285,25 +298,25 @@ describe('Job Scheduler', function () {
tz: 'Europa/Copenhaguen',
pattern: null,
every: '5000',
- next: 5000,
+ next: 1486481040000,
})
.and.to.deep.include({
key: 'first',
name: 'first',
- endDate: 12345,
+ endDate: Date.now() + 12345,
tz: null,
pattern: '10 * * * * *',
every: null,
- next: 10000,
+ next: 1486481050000,
})
.and.to.deep.include({
key: 'second',
name: 'second',
- endDate: 610000,
+ endDate: Date.now() + 6100000,
tz: null,
pattern: '2 10 * * * *',
every: null,
- next: 602000,
+ next: 1486483802000,
})
.and.to.deep.include({
key: 'fourth',
@@ -312,7 +325,7 @@ describe('Job Scheduler', function () {
tz: 'Africa/Accra',
pattern: '2 * * 4 * *',
every: null,
- next: 259202000,
+ next: 1488585602000,
})
.and.to.deep.include({
key: 'third',
@@ -321,7 +334,7 @@ describe('Job Scheduler', function () {
tz: 'Africa/Abidjan',
pattern: '1 * * 5 * *',
every: null,
- next: 345601000,
+ next: 1488672001000,
});
});
@@ -339,7 +352,7 @@ describe('Job Scheduler', function () {
);
const delayStub = sinon.stub(worker, 'delay').callsFake(async () => {});
- const date = new Date('2017-02-07 9:24:00');
+ const date = new Date('2017-02-07T15:24:00.000Z');
this.clock.setSystemTime(date);
await queue.upsertJobScheduler(
@@ -357,6 +370,12 @@ describe('Job Scheduler', function () {
tz: null,
pattern: '*/2 * * * * *',
every: null,
+ next: 1486481042000,
+ template: {
+ data: {
+ foo: 'bar',
+ },
+ },
});
this.clock.tick(nextTick);
@@ -508,7 +527,7 @@ describe('Job Scheduler', function () {
const delay = 5 * ONE_SECOND + 500;
const worker = new Worker(
- queueName,
+ queueName2,
async () => {
this.clock.tick(nextTick);
},
@@ -516,7 +535,7 @@ describe('Job Scheduler', function () {
);
const delayStub = sinon.stub(worker, 'delay').callsFake(async () => {});
- await queue.upsertJobScheduler(
+ await queue2.upsertJobScheduler(
'test',
{
pattern: '*/2 * * * * *',
@@ -674,7 +693,7 @@ describe('Job Scheduler', function () {
);
const delayStub = sinon.stub(worker, 'delay').callsFake(async () => {});
- const date = new Date('2017-02-07 9:24:00');
+ const date = new Date('2017-02-07T15:24:00.000Z');
this.clock.setSystemTime(date);
const repeat = {
@@ -684,6 +703,18 @@ describe('Job Scheduler', function () {
name: 'rrule',
});
+ const scheduler = await queue.getJobScheduler('rrule');
+
+ expect(scheduler).to.deep.equal({
+ key: 'rrule',
+ name: 'rrule',
+ endDate: null,
+ next: 1486481042000,
+ tz: null,
+ pattern: 'RRULE:FREQ=SECONDLY;INTERVAL=2;WKST=MO',
+ every: null,
+ });
+
this.clock.tick(nextTick);
let prev: any;
@@ -752,54 +783,175 @@ describe('Job Scheduler', function () {
});
});
- it('should repeat every 2 seconds and start immediately', async function () {
- const date = new Date('2017-02-07 9:24:00');
- this.clock.setSystemTime(date);
- const nextTick = 2 * ONE_SECOND;
+  describe("when using 'every' and time is on same millis as iteration time", function () {
+ it('should repeat every 2 seconds and start immediately', async function () {
+ const date = new Date('2017-02-07 9:24:00');
+ this.clock.setSystemTime(date);
+ const nextTick = 2 * ONE_SECOND;
- const worker = new Worker(
- queueName,
- async () => {
- this.clock.tick(nextTick);
- },
- { connection, prefix },
- );
+ const worker = new Worker(
+ queueName,
+ async () => {
+ this.clock.tick(nextTick);
+ },
+ { connection, prefix },
+ );
- let prev: Job;
- let counter = 0;
+ let prev: Job;
+ let counter = 0;
-    const completing = new Promise<void>((resolve, reject) => {
- worker.on('completed', async job => {
- try {
- if (prev && counter === 1) {
- expect(prev.timestamp).to.be.lte(job.timestamp);
- expect(job.timestamp - prev.timestamp).to.be.lte(1);
- } else if (prev) {
- expect(prev.timestamp).to.be.lt(job.timestamp);
- expect(job.timestamp - prev.timestamp).to.be.gte(2000);
+      const completing = new Promise<void>((resolve, reject) => {
+ worker.on('completed', async job => {
+ try {
+ if (prev && counter === 1) {
+ expect(prev.timestamp).to.be.lte(job.timestamp);
+ expect(job.timestamp - prev.timestamp).to.be.lte(1);
+ } else if (prev) {
+ expect(prev.timestamp).to.be.lt(job.timestamp);
+ expect(job.timestamp - prev.timestamp).to.be.eq(2000);
+ }
+ prev = job;
+ counter++;
+ if (counter === 5) {
+ resolve();
+ }
+ } catch (err) {
+ reject(err);
}
- prev = job;
- counter++;
- if (counter === 5) {
- resolve();
+ });
+ });
+
+ await queue.upsertJobScheduler(
+ 'repeat',
+ {
+ every: 2000,
+ },
+ { data: { foo: 'bar' } },
+ );
+
+ const delayedCountBefore = await queue.getDelayedCount();
+ expect(delayedCountBefore).to.be.eq(1);
+
+ await completing;
+
+ const waitingCount = await queue.getWaitingCount();
+ expect(waitingCount).to.be.eq(0);
+
+ const delayedCountAfter = await queue.getDelayedCount();
+ expect(delayedCountAfter).to.be.eq(1);
+
+ await worker.close();
+ });
+ });
+
+ describe("when using 'every' and time is one millisecond before iteration time", function () {
+ it('should repeat every 2 seconds and start immediately', async function () {
+ const startTimeMillis = new Date('2017-02-07 9:24:00').getTime();
+
+ const date = new Date(startTimeMillis - 1);
+ this.clock.setSystemTime(date);
+ const nextTick = 2 * ONE_SECOND;
+
+ const worker = new Worker(
+ queueName,
+ async () => {
+ this.clock.tick(nextTick);
+ },
+ { connection, prefix },
+ );
+
+ let prev: Job;
+ let counter = 0;
+
+ const completing = new Promise((resolve, reject) => {
+ worker.on('completed', async job => {
+ try {
+ if (prev && counter === 1) {
+ expect(prev.timestamp).to.be.lte(job.timestamp);
+ expect(job.timestamp - prev.timestamp).to.be.lte(1);
+ } else if (prev) {
+ expect(prev.timestamp).to.be.lt(job.timestamp);
+ expect(job.timestamp - prev.timestamp).to.be.eq(2000);
+ }
+
+ prev = job;
+ counter++;
+ if (counter === 5) {
+ resolve();
+ }
+ } catch (err) {
+ reject(err);
}
- } catch (err) {
- reject(err);
- }
+ });
});
+
+ await queue.upsertJobScheduler(
+ 'repeat',
+ {
+ every: 2000,
+ },
+ { data: { foo: 'bar' } },
+ );
+
+ await completing;
+
+ await worker.close();
});
+ });
- await queue.upsertJobScheduler(
- 'repeat',
- {
- every: 2000,
- },
- { data: { foo: 'bar' } },
- );
+ describe("when using 'every' and time is one millisecond after iteration time", function () {
+ it('should repeat every 2 seconds and start immediately', async function () {
+ const startTimeMillis = new Date('2017-02-07 9:24:00').getTime() + 1;
- await completing;
+ const date = new Date(startTimeMillis);
+ this.clock.setSystemTime(date);
+ const nextTick = 2 * ONE_SECOND;
- await worker.close();
+ const worker = new Worker(
+ queueName,
+ async () => {
+ this.clock.tick(nextTick);
+ },
+ { connection, prefix },
+ );
+
+ let prev: Job;
+ let counter = 0;
+
+ const completing = new Promise((resolve, reject) => {
+ worker.on('completed', async job => {
+ try {
+ if (prev && counter === 1) {
+ expect(prev.timestamp).to.be.lte(job.timestamp);
+ expect(job.timestamp - prev.timestamp).to.be.lte(1);
+ } else if (prev) {
+ expect(prev.timestamp).to.be.lt(job.timestamp);
+ expect(job.timestamp - prev.timestamp).to.be.eq(2000);
+ }
+
+ prev = job;
+ counter++;
+ if (counter === 5) {
+ resolve();
+ }
+ } catch (err) {
+ reject(err);
+ }
+ });
+ });
+
+ await queue.upsertJobScheduler(
+ 'repeat',
+ {
+ every: 2000,
+ },
+ { data: { foo: 'bar' } },
+ );
+
+ await completing;
+
+ await worker.close();
+ });
});
it('should start immediately even after removing the job scheduler and adding it again', async function () {
@@ -829,7 +981,6 @@ describe('Job Scheduler', function () {
'repeat',
{
every: 2000,
- immediately: true,
},
{ data: { foo: 'bar' } },
);
@@ -863,7 +1014,6 @@ describe('Job Scheduler', function () {
'repeat',
{
every: 2000,
- immediately: true,
},
{ data: { foo: 'bar' } },
);
@@ -1175,12 +1325,15 @@ describe('Job Scheduler', function () {
});
});
- await queue.upsertJobScheduler('repeat', {
+ const job = await queue.upsertJobScheduler('repeat', {
pattern: '0 1 * * *',
endDate: new Date('2017-05-10 13:13:00'),
tz: 'Europe/Athens',
utc: true,
});
+
+ expect(job).to.be.ok;
+
this.clock.tick(nextTick + delay);
worker.run();
@@ -1192,7 +1345,7 @@ describe('Job Scheduler', function () {
});
it('should repeat 7:th day every month at 9:25', async function () {
- this.timeout(15000);
+ this.timeout(12000);
const date = new Date('2017-02-02 7:21:42');
this.clock.setSystemTime(date);
@@ -1241,7 +1394,7 @@ describe('Job Scheduler', function () {
worker.run();
- await queue.upsertJobScheduler('repeat', { pattern: '* 25 9 7 * *' });
+ await queue.upsertJobScheduler('repeat', { pattern: '25 9 7 * *' });
nextTick();
await completing;
@@ -1403,35 +1556,401 @@ describe('Job Scheduler', function () {
});
});
- it('should keep only one delayed job if adding a new repeatable job with the same id', async function () {
- const date = new Date('2017-02-07 9:24:00');
- const key = 'mykey';
+ describe('when repeatable job fails', function () {
+ it('should continue repeating', async function () {
+ const date = new Date('2017-02-07T15:24:00.000Z');
+ this.clock.setSystemTime(date);
+ const repeatOpts = {
+ pattern: '0 * 1 * *',
+ tz: 'Asia/Calcutta',
+ };
- this.clock.setSystemTime(date);
+ const worker = new Worker(
+ queueName,
+ async () => {
+ throw new Error('failed');
+ },
+ {
+ connection,
+ prefix,
+ },
+ );
- const nextTick = 2 * ONE_SECOND;
+ const failing = new Promise(resolve => {
+ worker.on('failed', () => {
+ resolve();
+ });
+ });
- await queue.upsertJobScheduler(key, {
- every: 10_000,
+ const repeatableJob = await queue.upsertJobScheduler('test', repeatOpts, {
+ name: 'a',
+ data: { foo: 'bar' },
+ opts: { priority: 1 },
+ });
+ const delayedCount = await queue.getDelayedCount();
+ expect(delayedCount).to.be.equal(1);
+
+ await repeatableJob!.promote();
+ await failing;
+
+ const failedCount = await queue.getFailedCount();
+ expect(failedCount).to.be.equal(1);
+
+ const delayedCount2 = await queue.getDelayedCount();
+ expect(delayedCount2).to.be.equal(1);
+
+ const jobSchedulers = await queue.getJobSchedulers();
+
+ const count = await queue.count();
+ expect(count).to.be.equal(1);
+ expect(jobSchedulers).to.have.length(1);
+
+ expect(jobSchedulers[0]).to.deep.equal({
+ key: 'test',
+ name: 'a',
+ endDate: null,
+ tz: 'Asia/Calcutta',
+ pattern: '0 * 1 * *',
+ every: null,
+ next: 1488310200000,
+ template: {
+ data: {
+ foo: 'bar',
+ },
+ opts: {
+ priority: 1,
+ },
+ },
+ });
+
+ await worker.close();
});
- this.clock.tick(nextTick);
+ it('should not create a new delayed job if the failed job is retried with retryJobs', async function () {
+ const date = new Date('2017-02-07 9:24:00');
+ this.clock.setSystemTime(date);
+
+ const repeatOpts = {
+ every: 579,
+ };
+
+ let isFirstRun = true;
+
+ let worker;
+ const processingAfterFailing = new Promise(resolve => {
+ worker = new Worker(
+ queueName,
+ async () => {
+ this.clock.tick(177);
+ if (isFirstRun) {
+ isFirstRun = false;
+ throw new Error('failed');
+ }
+ resolve();
+ },
+ {
+ connection,
+ prefix,
+ },
+ );
+ });
+
+ const failing = new Promise(resolve => {
+ worker.on('failed', async () => {
+ resolve();
+ });
+ });
+
+ const repeatableJob = await queue.upsertJobScheduler('test', repeatOpts);
+
+ await repeatableJob!.promote();
+
+ const delayedCountBeforeFailing = await queue.getDelayedCount();
+ expect(delayedCountBeforeFailing).to.be.equal(0);
+
+ await failing;
+
+ const failedCount = await queue.getFailedCount();
+ expect(failedCount).to.be.equal(1);
+
+ const delayedCountAfterFailing = await queue.getDelayedCount();
+ expect(delayedCountAfterFailing).to.be.equal(1);
+
+ // Retry the failed job
+ this.clock.tick(1143);
+ await queue.retryJobs({ state: 'failed' });
+ const failedCountAfterRetry = await queue.getFailedCount();
+ expect(failedCountAfterRetry).to.be.equal(0);
+
+ await processingAfterFailing;
+
+ await worker.close();
+
+ const delayedCount2 = await queue.getDelayedCount();
+ expect(delayedCount2).to.be.equal(1);
+
+ const waitingCount = await queue.getWaitingCount();
+ expect(waitingCount).to.be.equal(0);
+ });
+
+ it('should not create a new delayed job if the failed job is retried with Job.retry()', async function () {
+ let expectError;
- let jobs = await queue.getJobSchedulers();
- expect(jobs).to.have.length(1);
+ const date = new Date('2017-02-07 9:24:00');
+ this.clock.setSystemTime(date);
+
+ const repeatOpts = {
+ every: 477,
+ };
+
+ let isFirstRun = true;
+
+ const worker = new Worker(
+ queueName,
+ async () => {
+ this.clock.tick(177);
+
+ try {
+ const delayedCount = await queue.getDelayedCount();
+ expect(delayedCount).to.be.equal(1);
+ } catch (error) {
+ expectError = error;
+ }
- let delayedJobs = await queue.getDelayed();
- expect(delayedJobs).to.have.length(1);
+ if (isFirstRun) {
+ isFirstRun = false;
+ throw new Error('failed');
+ }
+ },
+ {
+ connection,
+ prefix,
+ },
+ );
+
+ const failing = new Promise(resolve => {
+ worker.on('failed', async () => {
+ resolve();
+ });
+ });
- await queue.upsertJobScheduler(key, {
- every: 35_160,
+ const repeatableJob = await queue.upsertJobScheduler('test', repeatOpts);
+
+ await repeatableJob!.promote();
+
+ const delayedCount = await queue.getDelayedCount();
+ expect(delayedCount).to.be.equal(0);
+
+ this.clock.tick(177);
+
+ await failing;
+
+ this.clock.tick(177);
+
+ const failedJobs = await queue.getFailed();
+ expect(failedJobs.length).to.be.equal(1);
+
+ // Retry the failed job
+ const failedJob = await queue.getJob(failedJobs[0].id);
+ await failedJob!.retry();
+ const failedCountAfterRetry = await queue.getFailedCount();
+ expect(failedCountAfterRetry).to.be.equal(0);
+
+ this.clock.tick(177);
+
+ await worker.close();
+
+ if (expectError) {
+ throw expectError;
+ }
+
+ const delayedCount2 = await queue.getDelayedCount();
+ expect(delayedCount2).to.be.equal(1);
});
- jobs = await queue.getJobSchedulers();
- expect(jobs).to.have.length(1);
+ it('should not create a new delayed job if the failed job is stalled and moved back to wait', async function () {
+ // Note, this test is expected to throw an exception like this:
+ // "Error: Missing lock for job repeat:test:1486455840000. moveToFinished"
+ const date = new Date('2017-02-07 9:24:00');
+ this.clock.setSystemTime(date);
+
+ const repeatOpts = {
+ every: 2000,
+ };
- delayedJobs = await queue.getDelayed();
- expect(delayedJobs).to.have.length(1);
+ const repeatableJob = await queue.upsertJobScheduler('test', repeatOpts);
+ expect(repeatableJob).to.be.ok;
+
+ const delayedCount = await queue.getDelayedCount();
+ expect(delayedCount).to.be.equal(1);
+
+ await repeatableJob!.promote();
+
+ let resolveCompleting: () => void;
+ const completingJob = new Promise<void>(resolve => {
+ resolveCompleting = resolve;
+ });
+
+ let worker: Worker;
+ const processing = new Promise(resolve => {
+ worker = new Worker(
+ queueName,
+ async () => {
+ resolve();
+ return completingJob;
+ },
+ {
+ connection,
+ prefix,
+ skipLockRenewal: true,
+ skipStalledCheck: true,
+ },
+ );
+ });
+
+ await processing;
+
+ // force remove the lock
+ const client = await queue.client;
+ const lockKey = `${prefix}:${queueName}:${repeatableJob!.id}:lock`;
+ await client.del(lockKey);
+
+ const stalledCheckerKey = `${prefix}:${queueName}:stalled-check`;
+ await client.del(stalledCheckerKey);
+
+ const scripts = (worker!).scripts;
+ let [failed, stalled] = await scripts.moveStalledJobsToWait();
+
+ await client.del(stalledCheckerKey);
+
+ [failed, stalled] = await scripts.moveStalledJobsToWait();
+
+ const waitingJobs = await queue.getWaiting();
+ expect(waitingJobs.length).to.be.equal(1);
+
+ await this.clock.tick(500);
+
+ resolveCompleting!();
+ await worker!.close();
+
+ await this.clock.tick(500);
+
+ const delayedCount2 = await queue.getDelayedCount();
+ expect(delayedCount2).to.be.equal(1);
+
+ let completedJobs = await queue.getCompleted();
+ expect(completedJobs.length).to.be.equal(0);
+
+ const processing2 = new Promise(resolve => {
+ worker = new Worker(
+ queueName,
+ async () => {
+ resolve();
+ },
+ {
+ connection,
+ prefix,
+ skipLockRenewal: true,
+ skipStalledCheck: true,
+ },
+ );
+ });
+
+ await processing2;
+
+ await worker!.close();
+
+ completedJobs = await queue.getCompleted();
+ expect(completedJobs.length).to.be.equal(1);
+
+ const waitingJobs2 = await queue.getWaiting();
+ expect(waitingJobs2.length).to.be.equal(0);
+
+ const delayedCount3 = await queue.getDelayedCount();
+ expect(delayedCount3).to.be.equal(1);
+ });
+ });
+
+ describe('when every option is provided', function () {
+ it('should keep only one delayed job if adding a new repeatable job with the same id', async function () {
+ const date = new Date('2017-02-07 9:24:00');
+ const key = 'mykey';
+
+ this.clock.setSystemTime(date);
+
+ const nextTick = 2 * ONE_SECOND;
+
+ await queue.upsertJobScheduler(key, {
+ every: 10_000,
+ });
+
+ this.clock.tick(nextTick);
+
+ let jobs = await queue.getJobSchedulers();
+ expect(jobs).to.have.length(1);
+
+ let delayedJobs = await queue.getDelayed();
+ expect(delayedJobs).to.have.length(1);
+
+ await queue.upsertJobScheduler(key, {
+ every: 35_160,
+ });
+
+ jobs = await queue.getJobSchedulers();
+ expect(jobs).to.have.length(1);
+
+ delayedJobs = await queue.getDelayed();
+ expect(delayedJobs).to.have.length(1);
+ });
+ });
+
+ describe('when pattern option is provided', function () {
+ it('should keep only one delayed job if adding a new repeatable job with the same id', async function () {
+ const date = new Date('2017-02-07 9:24:00');
+ const key = 'mykey';
+
+ this.clock.setSystemTime(date);
+
+ const nextTick = 2 * ONE_SECOND;
+
+ await queue.upsertJobScheduler(
+ key,
+ {
+ pattern: '0 * 1 * *',
+ },
+ { name: 'test1', data: { foo: 'bar' }, opts: { priority: 1 } },
+ );
+
+ this.clock.tick(nextTick);
+
+ let jobs = await queue.getJobSchedulers();
+ expect(jobs).to.have.length(1);
+
+ let delayedJobs = await queue.getDelayed();
+ expect(delayedJobs).to.have.length(1);
+
+ await queue.upsertJobScheduler(
+ key,
+ {
+ pattern: '0 * 1 * *',
+ },
+ { name: 'test2', data: { foo: 'baz' }, opts: { priority: 2 } },
+ );
+
+ jobs = await queue.getJobSchedulers();
+ expect(jobs).to.have.length(1);
+
+ delayedJobs = await queue.getDelayed();
+ expect(delayedJobs).to.have.length(1);
+
+ expect(delayedJobs[0].name).to.be.equal('test2');
+ expect(delayedJobs[0].data).to.deep.equal({
+ foo: 'baz',
+ });
+ expect(delayedJobs[0].opts).to.deep.include({
+ priority: 2,
+ });
+ });
});
// This test is flaky and too complex we need something simpler that tests the same thing
@@ -1537,6 +2056,9 @@ describe('Job Scheduler', function () {
}).timeout(8000);
it('should not allow to remove a delayed job if it belongs to a repeatable job', async function () {
+ const date = new Date('2019-07-13 1:58:23');
+ this.clock.setSystemTime(date);
+
const repeat = {
every: 1000,
};
@@ -1555,6 +2077,9 @@ describe('Job Scheduler', function () {
});
it('should not remove delayed jobs if they belong to a repeatable job when using drain', async function () {
+ const date = new Date('2014-09-03 5:32:12');
+ this.clock.setSystemTime(date);
+
await queue.upsertJobScheduler('myTestJob', { every: 5000 });
await queue.add('test', { foo: 'bar' }, { delay: 1000 });
@@ -1572,6 +2097,9 @@ describe('Job Scheduler', function () {
});
it('should not remove delayed jobs if they belong to a repeatable job when using clean', async function () {
+ const date = new Date('2012-08-05 2:32:12');
+ this.clock.setSystemTime(date);
+
await queue.upsertJobScheduler('myTestJob', { every: 5000 });
await queue.add('test', { foo: 'bar' }, { delay: 1000 });
@@ -1589,6 +2117,9 @@ describe('Job Scheduler', function () {
});
it("should keep one delayed job if updating a repeatable job's every option", async function () {
+ const date = new Date('2022-01-08 7:22:21');
+ this.clock.setSystemTime(date);
+
await queue.upsertJobScheduler('myTestJob', { every: 5000 });
await queue.upsertJobScheduler('myTestJob', { every: 4000 });
await queue.upsertJobScheduler('myTestJob', { every: 5000 });
@@ -1828,6 +2359,9 @@ describe('Job Scheduler', function () {
});
it("should return a valid job with the job's options and data passed as the job template", async function () {
+ const date = new Date('2017-02-07 9:24:00');
+ this.clock.setSystemTime(date);
+
const repeatOpts = {
every: 1000,
};
@@ -1877,6 +2411,9 @@ describe('Job Scheduler', function () {
});
it('should have the right count value', async function () {
+ const date = new Date('2017-02-07 9:24:00');
+ this.clock.setSystemTime(date);
+
await queue.upsertJobScheduler('test', { every: 1000 });
this.clock.tick(ONE_SECOND + 100);
diff --git a/tests/test_metrics.ts b/tests/test_metrics.ts
index 6433afc8b2..f59d4e44d7 100644
--- a/tests/test_metrics.ts
+++ b/tests/test_metrics.ts
@@ -28,7 +28,7 @@ describe('metrics', function () {
});
beforeEach(function () {
- this.clock = sinon.useFakeTimers();
+ this.clock = sinon.useFakeTimers({ shouldClearNativeTimers: true });
});
beforeEach(async function () {
diff --git a/tests/test_obliterate.ts b/tests/test_obliterate.ts
index 89b8223b5a..f6ded0610b 100644
--- a/tests/test_obliterate.ts
+++ b/tests/test_obliterate.ts
@@ -47,6 +47,19 @@ describe('Obliterate', function () {
expect(keys.length).to.be.eql(0);
});
+ it('should obliterate a queue which is empty but has had jobs in the past', async () => {
+ await queue.waitUntilReady();
+
+ const job = await queue.add('test', { foo: 'bar' });
+ await job.remove();
+
+ await queue.obliterate();
+
+ const client = await queue.client;
+ const keys = await client.keys(`${prefix}:${queue.name}:*`);
+ expect(keys.length).to.be.eql(0);
+ });
+
it('should obliterate a queue with jobs in different statuses', async () => {
await queue.waitUntilReady();
diff --git a/tests/test_queue.ts b/tests/test_queue.ts
index 51c9ed9c51..988938a43f 100644
--- a/tests/test_queue.ts
+++ b/tests/test_queue.ts
@@ -37,6 +37,21 @@ describe('queues', function () {
await connection.quit();
});
+ describe('use generics', function () {
+ it('should be able to use generics', async function () {
+ const queue = new Queue<{ foo: string; bar: number }>(queueName, {
+ prefix,
+ connection,
+ });
+
+ const job = await queue.add(queueName, { foo: 'bar', bar: 1 });
+ const job2 = await queue.getJob(job.id!);
+ expect(job2?.data.foo).to.be.eql('bar');
+ expect(job2?.data.bar).to.be.eql(1);
+ await queue.close();
+ });
+ });
+
it('should return the queue version', async () => {
const queue = new Queue(queueName, { connection });
const version = await queue.getVersion();
diff --git a/tests/test_repeat.ts b/tests/test_repeat.ts
index ed6ee6e301..4685c08803 100644
--- a/tests/test_repeat.ts
+++ b/tests/test_repeat.ts
@@ -45,7 +45,7 @@ describe('repeat', function () {
});
beforeEach(async function () {
- this.clock = sinon.useFakeTimers();
+ this.clock = sinon.useFakeTimers({ shouldClearNativeTimers: true });
queueName = `test-${v4()}`;
queue = new Queue(queueName, { connection, prefix });
repeat = new Repeat(queueName, { connection, prefix });
@@ -1110,7 +1110,7 @@ describe('repeat', function () {
});
it('should repeat 7:th day every month at 9:25', async function () {
- this.timeout(15000);
+ this.timeout(12000);
const date = new Date('2017-02-02 7:21:42');
this.clock.setSystemTime(date);
@@ -1162,7 +1162,7 @@ describe('repeat', function () {
await queue.add(
'repeat',
{ foo: 'bar' },
- { repeat: { pattern: '* 25 9 7 * *' } },
+ { repeat: { pattern: '25 9 7 * *' } },
);
nextTick();
diff --git a/tests/test_telemetry_interface.ts b/tests/test_telemetry_interface.ts
index 828d701339..fa6ff4290a 100644
--- a/tests/test_telemetry_interface.ts
+++ b/tests/test_telemetry_interface.ts
@@ -2,7 +2,7 @@ import { expect, assert } from 'chai';
import { default as IORedis } from 'ioredis';
import { after, beforeEach, describe, it, before } from 'mocha';
import { v4 } from 'uuid';
-import { FlowProducer, Queue, Worker } from '../src/classes';
+import { FlowProducer, Job, JobScheduler, Queue, Worker } from '../src/classes';
import { removeAllQueueData } from '../src/utils';
import {
Telemetry,
@@ -16,7 +16,6 @@ import {
} from '../src/interfaces';
import * as sinon from 'sinon';
import { SpanKind, TelemetryAttributes } from '../src/enums';
-import { JobScheduler } from '../src/classes/job-scheduler';
describe('Telemetry', () => {
type ExtendedException = Exception & {
@@ -94,7 +93,7 @@ describe('Telemetry', () => {
this.options = options;
}
- setSpanOnContext(ctx: any): any {
+ setSpanOnContext(ctx: any, omitContext?: boolean): any {
context['getSpan'] = () => this;
return { ...context, getMetadata_span: this['name'] };
}
@@ -261,6 +260,7 @@ describe('Telemetry', () => {
});
it('should correctly handle errors and record them in telemetry for upsertJobScheduler', async () => {
+ const originalCreateNextJob = JobScheduler.prototype.createNextJob;
const recordExceptionSpy = sinon.spy(
MockSpan.prototype,
'recordException',
@@ -284,6 +284,7 @@ describe('Telemetry', () => {
const recordedError = recordExceptionSpy.firstCall.args[0];
assert.equal(recordedError.message, errMessage);
} finally {
+ JobScheduler.prototype.createNextJob = originalCreateNextJob;
recordExceptionSpy.restore();
}
});
@@ -297,6 +298,7 @@ describe('Telemetry', () => {
connection,
telemetry: telemetryClient,
name: 'testWorker',
+ prefix,
});
await worker.waitUntilReady();
@@ -329,6 +331,7 @@ describe('Telemetry', () => {
const worker = new Worker(queueName, async () => 'some result', {
connection,
telemetry: telemetryClient,
+ prefix,
});
await worker.waitUntilReady();
@@ -350,17 +353,20 @@ describe('Telemetry', () => {
const flowProducer = new FlowProducer({
connection,
telemetry: telemetryClient,
+ prefix,
});
const traceSpy = sinon.spy(telemetryClient.tracer, 'startSpan');
const testFlow = {
name: 'parentJob',
queueName,
+ prefix,
data: { foo: 'bar' },
children: [
{
name: 'childJob',
queueName,
+ prefix,
data: { baz: 'qux' },
},
],
@@ -386,6 +392,7 @@ describe('Telemetry', () => {
const flowProducer = new FlowProducer({
connection,
telemetry: telemetryClient,
+ prefix,
});
const traceSpy = sinon.spy(telemetryClient.tracer, 'startSpan');
@@ -421,6 +428,7 @@ describe('Telemetry', () => {
const flowProducer = new FlowProducer({
connection,
telemetry: telemetryClient,
+ prefix,
});
const traceSpy = sinon.spy(telemetryClient.tracer, 'startSpan');
@@ -473,6 +481,7 @@ describe('Telemetry', () => {
const flowProducer = new FlowProducer({
connection,
telemetry: telemetryClient,
+ prefix,
});
const traceSpy = sinon.spy(telemetryClient.tracer, 'startSpan');
@@ -512,4 +521,137 @@ describe('Telemetry', () => {
}
});
});
+
+ describe('Omit Propagation', () => {
+ let fromMetadataSpy;
+
+ beforeEach(() => {
+ fromMetadataSpy = sinon.spy(
+ telemetryClient.contextManager,
+ 'fromMetadata',
+ );
+ });
+
+ afterEach(() => fromMetadataSpy.restore());
+
+ it('should omit propagation on queue add', async () => {
+ let worker;
+ const processing = new Promise(resolve => {
+ worker = new Worker(queueName, async () => resolve(), {
+ connection,
+ telemetry: telemetryClient,
+ prefix,
+ });
+ });
+
+ const job = await queue.add(
+ 'testJob',
+ { foo: 'bar' },
+ { telemetry: { omitContext: true } },
+ );
+
+ await processing;
+
+ expect(fromMetadataSpy.callCount).to.equal(0);
+ await worker.close();
+ });
+
+ it('should omit propagation on queue addBulk', async () => {
+ let worker;
+ const processing = new Promise(resolve => {
+ worker = new Worker(queueName, async () => resolve(), {
+ connection,
+ telemetry: telemetryClient,
+ prefix,
+ });
+ });
+
+ const jobs = [
+ {
+ name: 'job1',
+ data: { foo: 'bar' },
+ opts: { telemetry: { omitContext: true } },
+ },
+ ];
+ const addedJobs = await queue.addBulk(jobs);
+ expect(addedJobs).to.have.length(1);
+
+ await processing;
+
+ expect(fromMetadataSpy.callCount).to.equal(0);
+ await worker.close();
+ });
+
+ it('should omit propagation on job scheduler', async () => {
+ let worker;
+ const processing = new Promise(resolve => {
+ worker = new Worker(queueName, async () => resolve(), {
+ connection,
+ telemetry: telemetryClient,
+ prefix,
+ });
+ });
+
+ const jobSchedulerId = 'testJobScheduler';
+ const data = { foo: 'bar' };
+
+ const job = await queue.upsertJobScheduler(
+ jobSchedulerId,
+ { every: 1000, endDate: Date.now() + 1000, limit: 1 },
+ {
+ name: 'repeatable-job',
+ data,
+ opts: { telemetry: { omitContext: true } },
+ },
+ );
+
+ await processing;
+
+ expect(fromMetadataSpy.callCount).to.equal(0);
+ await worker.close();
+ });
+
+ it('should omit propagation on flow producer', async () => {
+ let worker;
+ const processing = new Promise(resolve => {
+ worker = new Worker(queueName, async () => resolve(), {
+ connection,
+ telemetry: telemetryClient,
+ prefix,
+ });
+ });
+
+ const flowProducer = new FlowProducer({
+ connection,
+ telemetry: telemetryClient,
+ prefix,
+ });
+
+ const testFlow = {
+ name: 'parentJob',
+ queueName,
+ data: { foo: 'bar' },
+ children: [
+ {
+ name: 'childJob',
+ queueName,
+ data: { baz: 'qux' },
+ opts: { telemetry: { omitContext: true } },
+ },
+ ],
+ opts: { telemetry: { omitContext: true } },
+ };
+
+ const jobNode = await flowProducer.add(testFlow);
+ const jobs = jobNode.children
+ ? [jobNode.job, ...jobNode.children.map(c => c.job)]
+ : [jobNode.job];
+
+ await processing;
+
+ expect(fromMetadataSpy.callCount).to.equal(0);
+ await flowProducer.close();
+ await worker.close();
+ });
+ });
});
diff --git a/tests/test_worker.ts b/tests/test_worker.ts
index 4ed21e6d6d..e72f348c36 100644
--- a/tests/test_worker.ts
+++ b/tests/test_worker.ts
@@ -477,18 +477,25 @@ describe('workers', function () {
await worker.waitUntilReady();
// Add spy to worker.moveToActive
- const spy = sinon.spy(worker, 'moveToActive');
+ const spy = sinon.spy(worker as any, 'moveToActive');
const bclientSpy = sinon.spy(
await (worker as any).blockingConnection.client,
'bzpopmin',
);
- for (let i = 0; i < numJobs; i++) {
- const job = await queue.add('test', { foo: 'bar' });
- expect(job.id).to.be.ok;
- expect(job.data.foo).to.be.eql('bar');
+ const jobsData: { name: string; data: any }[] = [];
+ for (let j = 0; j < numJobs; j++) {
+ jobsData.push({
+ name: 'test',
+ data: { foo: 'bar' },
+ });
}
+ await queue.addBulk(jobsData);
+
+ expect(bclientSpy.callCount).to.be.gte(0);
+ expect(bclientSpy.callCount).to.be.lte(1);
+
await new Promise((resolve, reject) => {
worker.on('completed', (_job: Job, _result: any) => {
completedJobs++;
@@ -526,7 +533,7 @@ describe('workers', function () {
);
// Add spy to worker.moveToActive
- const spy = sinon.spy(worker, 'moveToActive');
+ const spy = sinon.spy(worker as any, 'moveToActive');
const bclientSpy = sinon.spy(
await (worker as any).blockingConnection.client,
'bzpopmin',
diff --git a/yarn.lock b/yarn.lock
index f0742e0051..f68b48af72 100644
--- a/yarn.lock
+++ b/yarn.lock
@@ -1002,27 +1002,27 @@
dependencies:
type-detect "4.0.8"
-"@sinonjs/commons@^3.0.0":
+"@sinonjs/commons@^3.0.0", "@sinonjs/commons@^3.0.1":
version "3.0.1"
resolved "https://registry.yarnpkg.com/@sinonjs/commons/-/commons-3.0.1.tgz#1029357e44ca901a615585f6d27738dbc89084cd"
integrity sha512-K3mCHKQ9sVh8o1C9cxkwxaOmXoAMlDxC1mYyHrjqOWEcBjYr76t96zL2zlj5dUGZ3HSw240X1qgH3Mjf1yJWpQ==
dependencies:
type-detect "4.0.8"
-"@sinonjs/fake-timers@^10.3.0":
- version "10.3.0"
- resolved "https://registry.yarnpkg.com/@sinonjs/fake-timers/-/fake-timers-10.3.0.tgz#55fdff1ecab9f354019129daf4df0dd4d923ea66"
- integrity sha512-V4BG07kuYSUkTCSBHG8G8TNhM+F19jXFWnQtzj+we8DrkpSBCee9Z3Ms8yiGer/dlmhe35/Xdgyo3/0rQKg7YA==
- dependencies:
- "@sinonjs/commons" "^3.0.0"
-
-"@sinonjs/fake-timers@^11.2.2":
+"@sinonjs/fake-timers@11.2.2":
version "11.2.2"
resolved "https://registry.yarnpkg.com/@sinonjs/fake-timers/-/fake-timers-11.2.2.tgz#50063cc3574f4a27bd8453180a04171c85cc9699"
integrity sha512-G2piCSxQ7oWOxwGSAyFHfPIsyeJGXYtc6mFbnFA+kRXkiEnTl8c/8jul2S329iFBnDI9HGoeWWAZvuvOkZccgw==
dependencies:
"@sinonjs/commons" "^3.0.0"
+"@sinonjs/fake-timers@^13.0.1":
+ version "13.0.5"
+ resolved "https://registry.yarnpkg.com/@sinonjs/fake-timers/-/fake-timers-13.0.5.tgz#36b9dbc21ad5546486ea9173d6bea063eb1717d5"
+ integrity sha512-36/hTbH2uaWuGVERyC6da9YwGWnzUZXuPro/F2LfsdOsLnCojz/iSH8MxUt/FD2S5XBSVPhmArFUXcpCQ2Hkiw==
+ dependencies:
+ "@sinonjs/commons" "^3.0.1"
+
"@sinonjs/samsam@^8.0.0":
version "8.0.0"
resolved "https://registry.yarnpkg.com/@sinonjs/samsam/-/samsam-8.0.0.tgz#0d488c91efb3fa1442e26abea81759dfc8b5ac60"
@@ -1032,10 +1032,10 @@
lodash.get "^4.4.2"
type-detect "^4.0.8"
-"@sinonjs/text-encoding@^0.7.2":
- version "0.7.2"
- resolved "https://registry.yarnpkg.com/@sinonjs/text-encoding/-/text-encoding-0.7.2.tgz#5981a8db18b56ba38ef0efb7d995b12aa7b51918"
- integrity sha512-sXXKG+uL9IrKqViTtao2Ws6dy0znu9sOaP1di/jKGW1M6VssO8vlpXCQcpZ+jisQ1tTFAC5Jo/EOzFbggBagFQ==
+"@sinonjs/text-encoding@^0.7.3":
+ version "0.7.3"
+ resolved "https://registry.yarnpkg.com/@sinonjs/text-encoding/-/text-encoding-0.7.3.tgz#282046f03e886e352b2d5f5da5eb755e01457f3f"
+ integrity sha512-DE427ROAphMQzU4ENbliGYrBSYPXF+TtLg9S8vzeA+OF4ZKzoDdzfL8sxuMUGS/lgRhM6j1URSk9ghf7Xo1tyA==
"@tootallnate/once@2":
version "2.0.0"
@@ -1143,10 +1143,17 @@
resolved "https://registry.yarnpkg.com/@types/semver/-/semver-7.5.8.tgz#8268a8c57a3e4abd25c165ecd36237db7948a55e"
integrity sha512-I8EUhyrgfLrcTkzV3TSsGyl1tSuPrEDzr0yd5m90UgNxQkyDXULk3b6MlQqTCpZpNtWe1K0hzclnZkTcLBe2UQ==
-"@types/sinon@^7.5.2":
- version "7.5.2"
- resolved "https://registry.yarnpkg.com/@types/sinon/-/sinon-7.5.2.tgz#5e2f1d120f07b9cda07e5dedd4f3bf8888fccdb9"
- integrity sha512-T+m89VdXj/eidZyejvmoP9jivXgBDdkOSBVQjU9kF349NEx10QdPNGxHeZUaj1IlJ32/ewdyXJjnJxyxJroYwg==
+"@types/sinon@^10.0.13":
+ version "10.0.20"
+ resolved "https://registry.yarnpkg.com/@types/sinon/-/sinon-10.0.20.tgz#f1585debf4c0d99f9938f4111e5479fb74865146"
+ integrity sha512-2APKKruFNCAZgx3daAyACGzWuJ028VVCUDk6o2rw/Z4PXT0ogwdV4KUegW0MwVs0Zu59auPXbbuBJHF12Sx1Eg==
+ dependencies:
+ "@types/sinonjs__fake-timers" "*"
+
+"@types/sinonjs__fake-timers@*":
+ version "8.1.5"
+ resolved "https://registry.yarnpkg.com/@types/sinonjs__fake-timers/-/sinonjs__fake-timers-8.1.5.tgz#5fd3592ff10c1e9695d377020c033116cc2889f2"
+ integrity sha512-mQkU2jY8jJEF7YHjHvsQO8+3ughTL1mcnn96igfhONmR+fUPSKIkefQYpSe8bsly2Ep7oQbn/6VG5/9/0qcArQ==
"@types/uuid@^3.4.10":
version "3.4.13"
@@ -2548,7 +2555,7 @@ diff@^4.0.1:
resolved "https://registry.yarnpkg.com/diff/-/diff-4.0.2.tgz#60f3aecb89d5fae520c11aa19efc2bb982aade7d"
integrity sha512-58lmxKSA4BNyLz+HHMUzlOEpg09FV+ev6ZMe3vJihgdxzgcwZ8VoEEPmALCZG9LmqfVoNMMKpttIYTVG6uDY7A==
-diff@^5.1.0:
+diff@^5.1.0, diff@^5.2.0:
version "5.2.0"
resolved "https://registry.yarnpkg.com/diff/-/diff-5.2.0.tgz#26ded047cd1179b78b9537d5ef725503ce1ae531"
integrity sha512-uIFDxqpRZGZ6ThOk84hEfqWoHx2devRFvpTZcTHur85vImfaxUbTW9Ryh4CpCuDnToOP1CEtXKIgytHBPVff5A==
@@ -5296,16 +5303,16 @@ nice-try@^1.0.4:
resolved "https://registry.yarnpkg.com/nice-try/-/nice-try-1.0.5.tgz#a3378a7696ce7d223e88fc9b764bd7ef1089e366"
integrity sha512-1nh45deeb5olNY7eX82BkPO7SSxR5SSYJiPTrTdFUVYwAl8CKMA5N9PjTYkHiRjisVcxcQ1HXdLhx2qxxJzLNQ==
-nise@^5.1.4:
- version "5.1.9"
- resolved "https://registry.yarnpkg.com/nise/-/nise-5.1.9.tgz#0cb73b5e4499d738231a473cd89bd8afbb618139"
- integrity sha512-qOnoujW4SV6e40dYxJOb3uvuoPHtmLzIk4TFo+j0jPJoC+5Z9xja5qH5JZobEPsa8+YYphMrOSwnrshEhG2qww==
+nise@^6.0.0:
+ version "6.1.1"
+ resolved "https://registry.yarnpkg.com/nise/-/nise-6.1.1.tgz#78ea93cc49be122e44cb7c8fdf597b0e8778b64a"
+ integrity sha512-aMSAzLVY7LyeM60gvBS423nBmIPP+Wy7St7hsb+8/fc1HmeoHJfLO8CKse4u3BtOZvQLJghYPI2i/1WZrEj5/g==
dependencies:
- "@sinonjs/commons" "^3.0.0"
- "@sinonjs/fake-timers" "^11.2.2"
- "@sinonjs/text-encoding" "^0.7.2"
+ "@sinonjs/commons" "^3.0.1"
+ "@sinonjs/fake-timers" "^13.0.1"
+ "@sinonjs/text-encoding" "^0.7.3"
just-extend "^6.2.0"
- path-to-regexp "^6.2.1"
+ path-to-regexp "^8.1.0"
node-abort-controller@^3.1.1:
version "3.1.1"
@@ -5978,10 +5985,10 @@ path-parse@^1.0.6, path-parse@^1.0.7:
resolved "https://registry.yarnpkg.com/path-parse/-/path-parse-1.0.7.tgz#fbc114b60ca42b30d9daf5858e4bd68bbedb6735"
integrity sha512-LDJzPVEEEPR+y48z93A0Ed0yXb8pAByGWo/k5YYdYgpY2/2EsOsksJrq7lOHxryrVOn1ejG6oAp8ahvOIQD8sw==
-path-to-regexp@^6.2.1:
- version "6.2.1"
- resolved "https://registry.yarnpkg.com/path-to-regexp/-/path-to-regexp-6.2.1.tgz#d54934d6798eb9e5ef14e7af7962c945906918e5"
- integrity sha512-JLyh7xT1kizaEvcaXOQwOc2/Yhw6KZOvPf1S8401UyLk86CU79LN3vl7ztXGm/pZ+YjoyAJ4rxmHwbkBXJX+yw==
+path-to-regexp@^8.1.0:
+ version "8.2.0"
+ resolved "https://registry.yarnpkg.com/path-to-regexp/-/path-to-regexp-8.2.0.tgz#73990cc29e57a3ff2a0d914095156df5db79e8b4"
+ integrity sha512-TdrF7fW9Rphjq4RjrW0Kp2AW0Ahwu9sRGTkS6bvDi0SCwZlEZYmcfDbEsTz8RVk0EHIS/Vd1bv3JhG+1xZuAyQ==
path-type@^3.0.0:
version "3.0.0"
@@ -6816,17 +6823,17 @@ signale@^1.2.1:
figures "^2.0.0"
pkg-conf "^2.1.0"
-sinon@^15.1.0:
- version "15.2.0"
- resolved "https://registry.yarnpkg.com/sinon/-/sinon-15.2.0.tgz#5e44d4bc5a9b5d993871137fd3560bebfac27565"
- integrity sha512-nPS85arNqwBXaIsFCkolHjGIkFo+Oxu9vbgmBJizLAhqe6P2o3Qmj3KCUoRkfhHtvgDhZdWD3risLHAUJ8npjw==
+sinon@^18.0.1:
+ version "18.0.1"
+ resolved "https://registry.yarnpkg.com/sinon/-/sinon-18.0.1.tgz#464334cdfea2cddc5eda9a4ea7e2e3f0c7a91c5e"
+ integrity sha512-a2N2TDY1uGviajJ6r4D1CyRAkzE9NNVlYOV1wX5xQDuAk0ONgzgRl0EjCQuRCPxOwp13ghsMwt9Gdldujs39qw==
dependencies:
- "@sinonjs/commons" "^3.0.0"
- "@sinonjs/fake-timers" "^10.3.0"
+ "@sinonjs/commons" "^3.0.1"
+ "@sinonjs/fake-timers" "11.2.2"
"@sinonjs/samsam" "^8.0.0"
- diff "^5.1.0"
- nise "^5.1.4"
- supports-color "^7.2.0"
+ diff "^5.2.0"
+ nise "^6.0.0"
+ supports-color "^7"
slash@^3.0.0:
version "3.0.0"
@@ -7182,7 +7189,7 @@ supports-color@^5.3.0:
dependencies:
has-flag "^3.0.0"
-supports-color@^7.0.0, supports-color@^7.1.0, supports-color@^7.2.0:
+supports-color@^7, supports-color@^7.0.0, supports-color@^7.1.0:
version "7.2.0"
resolved "https://registry.yarnpkg.com/supports-color/-/supports-color-7.2.0.tgz#1b7dcdcb32b8138801b3e478ba6a51caa89648da"
integrity sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==