feat: UI tweaks (#332)
* fix: help button treatment

* feat: revise worker docs

* feat: worker empty state

* fix: docs build

* chore: fix linting

---------

Co-authored-by: gabriel ruttner <[email protected]>
grutt authored Apr 3, 2024
1 parent d659b08 commit 7ab7290
Showing 3 changed files with 133 additions and 21 deletions.
38 changes: 37 additions & 1 deletion frontend/app/src/pages/main/workers/index.tsx
@@ -2,18 +2,27 @@ import { Separator } from '@/components/ui/separator';
import { useQuery } from '@tanstack/react-query';
import { queries } from '@/lib/api';
import invariant from 'tiny-invariant';
import { relativeDate } from '@/lib/utils';
import { cn, relativeDate } from '@/lib/utils';
import { Link, useOutletContext } from 'react-router-dom';
import { Button } from '@/components/ui/button';
import { Loading } from '@/components/ui/loading.tsx';
import { TenantContextType } from '@/lib/outlet';
import {
Card,
CardHeader,
CardTitle,
CardDescription,
CardFooter,
} from '@/components/ui/card';
import { QuestionMarkCircleIcon } from '@heroicons/react/24/outline';

export default function Workers() {
const { tenant } = useOutletContext<TenantContextType>();
invariant(tenant);

const listWorkersQuery = useQuery({
...queries.workers.list(tenant.metadata.id),
refetchInterval: 5000,
});

if (listWorkersQuery.isLoading || !listWorkersQuery.data?.rows) {
@@ -28,6 +37,33 @@ export default function Workers() {
</h2>
<Separator className="my-4" />
{/* Grid of workers */}
{listWorkersQuery.data?.rows.length === 0 && (
<Card className="w-full">
<CardHeader>
<CardTitle>No Active Workers</CardTitle>
<CardDescription>
<p className="text-gray-300 mb-4">
There are no worker processes currently running and connected
to the Hatchet engine for this tenant. To enable workflow
execution, please attempt to start a worker process or{' '}
<a href="[email protected]">contact support</a>.
</p>
</CardDescription>
</CardHeader>
<CardFooter>
<a
href="https://docs.hatchet.run/home/basics/workers"
className="flex flex-row item-center"
>
<Button onClick={() => {}} variant="link" className="p-0 w-fit">
<QuestionMarkCircleIcon className={cn('h-4 w-4 mr-2')} />
Docs: Understanding Workers in Hatchet
</Button>
</a>
</CardFooter>
</Card>
)}

<div className="grid grid-cols-1 gap-4 sm:grid-cols-2 lg:grid-cols-3">
{listWorkersQuery.data?.rows.map((worker) => (
<div
12 changes: 10 additions & 2 deletions
@@ -31,7 +31,7 @@ import { StepRunLogs } from './step-run-logs';
import { RunStatus } from '../../components/run-statuses';
import { DataTable } from '@/components/molecules/data-table/data-table';
import { columns } from '../../components/workflow-runs-columns';
import { XMarkIcon } from '@heroicons/react/24/outline';
import { QuestionMarkCircleIcon, XMarkIcon } from '@heroicons/react/24/outline';

export function StepRunPlayground({
stepRun,
@@ -456,7 +456,15 @@ export function StepRunPlayground({
</>
</Button>
<a href="https://docs.hatchet.run/home/features/cancellation">
Beta: How to handle cancelation signaling
<Button
onClick={handleOnCancel}
variant="link"
className="p-0 w-fit"
asChild
>
<QuestionMarkCircleIcon className={cn('h-4 w-4 mr-2')} />
Help: How to handle cancelation signaling
</Button>
</a>
</>
)}
104 changes: 86 additions & 18 deletions frontend/docs/pages/home/basics/workers.mdx
@@ -1,37 +1,105 @@
{/* TODO revise this page */}
import { Callout, Card, Cards, Steps, Tabs } from "nextra/components";

# Workers in Hatchet

-While Hatchet manages the scheduling and orchestration, the workers are the entities that actually execute the individual steps defined within your workflows. Understanding how to deploy and manage these workers efficiently is key to leveraging Hatchet for distributed task execution.
-
-## Overview of Workers
-
-Workers in Hatchet are long-lived processes that await instructions from the Hatchet engine to execute specific steps. They are the muscle behind the brain, where Hatchet acts as the brain orchestrating what needs to be done and the workers carry out those tasks. Here's what you need to understand about workers:
-
-- **Autonomy:** Workers operate independently across different nodes in your infrastructure, which can be spread across multiple systems or even different cloud environments.
-- **Technology Agnostic:** Workers can be written in different programming languages or technologies, provided they can communicate with the Hatchet engine and execute the required steps.
-- **Scalability:** You can scale your system horizontally by adding more workers, enabling Hatchet to distribute tasks across a wider set of resources and handle increased loads efficiently.
-
-When you define a workflow in Hatchet, you register the steps or workflows that that node is capable of executing. The Hatchet engine then schedules these steps and assigns them to available workers for execution. The workers receive the instructions from the Hatchet engine, execute the steps, and report back the results to the engine when complete.
-
-## Best Practices for Workers
-
-To ensure that your Hatchet implementation is robust, scalable, and efficient, adhere to these best practices for setting up and managing your workers:
-
-1. **Reliable Execution Environment:** Deploy your workers in a stable and reliable environment. Ensure that they have sufficient resources to execute the tasks without running into resource contention or other environmental issues.
-2. **Monitoring and Logging:** Implement robust monitoring and logging for your workers. Keeping track of worker health, performance, and task execution status is crucial for identifying issues and optimizing performance.
-3. **Graceful Error Handling:** Design your workers to handle errors gracefully. They should be able to report execution failures back to Hatchet and, when possible, retry execution based on the configured policies.
-4. **Secure Communication:** Ensure that the communication between your workers and the Hatchet engine is secure, particularly if they are distributed across different networks or environments.
-5. **Lifecycle Management:** Implement proper lifecycle management for your workers. They should be able to restart automatically in case of critical failures and should support graceful shutdown procedures for maintenance or scaling operations.
-6. **Scalability Practices:** Plan for scalability by designing your system to easily add or remove workers based on demand. This might involve using containerization, orchestration tools, or cloud auto-scaling features.
-7. **Consistent Updates:** Keep your worker implementations up to date with the latest Hatchet SDKs and ensure that they are compatible with the version of the Hatchet engine you are using.
+Workers are the backbone of Hatchet, responsible for executing the individual steps defined within your workflows. They operate autonomously across different nodes in your infrastructure, allowing for distributed and scalable task execution. Understanding how to deploy and manage workers effectively is crucial to fully leverage the power of Hatchet.
+
+## How Workers Operate
+
+In Hatchet, workers are long-running processes that wait for instructions from the Hatchet engine to execute specific steps. They communicate with the Hatchet engine to receive tasks, execute them, and report back the results.
+
+Here are the key characteristics of workers in Hatchet:
+
+1. **Distributed Execution**: Workers can be deployed across multiple systems or even different cloud environments, enabling distributed task execution.
+
+2. **Language Agnostic**: Workers can be implemented in various programming languages, as long as they can communicate with the Hatchet engine and execute the required steps.
+
+3. **Scalability**: By adding more workers, you can scale your system horizontally to handle increased loads and distribute tasks across a wider set of resources.
+
+## Registering Workflows and Starting Workers
+
+To utilize workers effectively, you need to register your workflows with the worker and start the worker process. Here's how you can do it in different programming languages:
+
+<Tabs items={['Python', 'Typescript', 'Go']}>
+  <Tabs.Tab>
+    ```python
+    workflow = MyWorkflow()
+    worker = hatchet.worker('test-worker', max_runs=4)
+    worker.register_workflow(workflow)
+    worker.start()
+    ```
+  </Tabs.Tab>
+  <Tabs.Tab>
+    ```typescript
+    async function main() {
+      const worker = await hatchet.worker('example-worker');
+      await worker.registerWorkflow(workflow);
+      worker.start();
+    }
+
+    main();
+    ```
+  </Tabs.Tab>
+  <Tabs.Tab>
+    ```go
+    client, err := client.New(
+      client.InitWorkflows(),
+      client.WithWorkflows([]*types.Workflow{
+        &slackWorkflowFile,
+      }),
+    )
+    if err != nil {
+      panic(err)
+    }
+
+    worker, err := worker.NewWorker(
+      worker.WithClient(
+        client,
+      ),
+      worker.WithIntegration(
+        slackInt,
+      ),
+    )
+    if err != nil {
+      panic(err)
+    }
+
+    interruptCtx, cancel := cmdutils.InterruptContextFromChan(cmdutils.InterruptChan())
+    defer cancel()
+    go worker.Start()
+    ```
+  </Tabs.Tab>
+</Tabs>
+
+In the above examples:
+
+1. We create an instance of the worker, specifying a unique identifier for the worker.
+2. We register the workflow(s) that the worker is capable of executing using the `registerWorkflow` method.
+3. Finally, we start the worker process using the `start` method, allowing it to begin listening for tasks from the Hatchet engine.
+
+Run your worker process from command line with relevant environment variables set. Refer to the [quick start](https://docs.hatchet.run/home/quickstart/first-workflow) for more details on how to set up your worker.
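For orientation, a complete TypeScript entrypoint built around the snippet above might look like the following sketch. It is illustrative rather than part of this commit: the `@hatchet-dev/typescript-sdk` import, `Hatchet.init()`, the workflow shape, and the `HATCHET_CLIENT_TOKEN` environment variable are taken from the Hatchet TypeScript SDK examples, and the workflow/event names are placeholders, so verify the details against the SDK version you are running.

```typescript
// worker.ts — illustrative sketch; workflow and event names are placeholders.
// Assumes the Hatchet TypeScript SDK and HATCHET_CLIENT_TOKEN set in the environment,
// e.g. HATCHET_CLIENT_TOKEN=<token> npx ts-node worker.ts
import Hatchet, { Workflow } from '@hatchet-dev/typescript-sdk';

// Hatchet.init() reads connection settings (such as HATCHET_CLIENT_TOKEN) from the environment.
const hatchet = Hatchet.init();

// A minimal workflow definition so the worker has something to register.
const workflow: Workflow = {
  id: 'example-workflow',
  description: 'Placeholder workflow used to illustrate worker startup',
  on: { event: 'example:run' },
  steps: [
    {
      name: 'step-one',
      run: async () => {
        return { message: 'hello from the worker' };
      },
    },
  ],
};

async function main() {
  const worker = await hatchet.worker('example-worker');
  await worker.registerWorkflow(workflow);
  worker.start();
}

main();
```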
+
+## Best Practices for Managing Workers
+
+To ensure a robust and efficient Hatchet implementation, consider the following best practices when managing your workers:
+
+1. **Reliability**: Deploy workers in a stable environment with sufficient resources to avoid resource contention and ensure reliable execution.
+2. **Monitoring and Logging**: Implement robust monitoring and logging mechanisms to track worker health, performance, and task execution status.
+3. **Error Handling**: Design workers to handle errors gracefully, report execution failures to Hatchet, and retry tasks based on configured policies.
+4. **Secure Communication**: Ensure secure communication between workers and the Hatchet engine, especially when distributed across different networks.
+5. **Lifecycle Management**: Implement proper lifecycle management for workers, including automatic restarts on critical failures and graceful shutdown procedures.
+6. **Scalability**: Plan for scalability by designing your system to easily add or remove workers based on demand, leveraging containerization, orchestration tools, or cloud auto-scaling features.
+7. **Consistent Updates**: Keep worker implementations up to date with the latest Hatchet SDKs and ensure compatibility with the Hatchet engine version.
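Item 5 in the list above recommends graceful shutdown procedures but the page stops short of showing one. Below is a minimal sketch, assuming the worker object exposes a `stop()` method that drains in-flight step runs (an assumption — check the exact API of the SDK version you are using):

```typescript
// Hypothetical graceful-shutdown wiring for a worker process; not part of this commit.
import Hatchet from '@hatchet-dev/typescript-sdk';

const hatchet = Hatchet.init();

async function main() {
  const worker = await hatchet.worker('example-worker');
  // ...registerWorkflow(...) calls go here...
  worker.start();

  let shuttingDown = false;
  const shutdown = async (signal: string) => {
    if (shuttingDown) return;
    shuttingDown = true;
    console.log(`received ${signal}, stopping worker gracefully`);
    await worker.stop(); // assumed API: stop polling and wait for running steps to finish
    process.exit(0);
  };

  // SIGTERM is what most orchestrators (e.g. Kubernetes) send before terminating a container.
  process.on('SIGTERM', () => void shutdown('SIGTERM'));
  process.on('SIGINT', () => void shutdown('SIGINT'));
}

main();
```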

## Conclusion

-While Hatchet is responsible for the high-level orchestration and scheduling of workflows and steps, workers are the essential components that execute the tasks on the ground. By deploying well-managed, efficient workers, you can ensure that your Hatchet-powered system is reliable, scalable, and capable of meeting your distributed task execution needs. Remember, a strong foundation of robust workers is key to harnessing the full capabilities of Hatchet.
+Workers are the essential components that execute tasks orchestrated by the Hatchet engine. By deploying well-managed and efficient workers, you can ensure a reliable, scalable, and high-performing distributed task execution system. Remember to follow best practices and leverage the features provided by Hatchet to build a robust and efficient worker infrastructure.
