diff --git a/example/docker-compose/readme.md b/example/docker-compose/readme.md index b125bd5ee8c..3d0e53eaf29 100644 --- a/example/docker-compose/readme.md +++ b/example/docker-compose/readme.md @@ -30,25 +30,26 @@ Tempo can be run with local storage, S3, Azure, or GCS backends. See below for e In this example all data is stored locally in the example-data/tempo folder. 1. First start up the local stack. -``` +```console docker-compose up -d ``` At this point, the following containers should be spun up - ```console -$ docker-compose -f docker-compose.local.yaml ps +docker-compose ps +``` +``` Name Command State Ports -------------------------------------------------------------------------------------------------------------------------------------- docker-compose_grafana_1 /run.sh Up 0.0.0.0:3000->3000/tcp docker-compose_prometheus_1 /bin/prometheus --config.f ... Up 0.0.0.0:9090->9090/tcp docker-compose_synthetic-load-generator_1 ./start.sh Up -docker-compose_tempo-query_1 /go/bin/query-linux --grpc ... Up 0.0.0.0:16686->16686/tcp docker-compose_tempo_1 /tempo -storage.trace.back ... Up 0.0.0.0:32772->14268/tcp, 0.0.0.0:32773->3100/tcp ``` 2. If you're interested you can see the wal/blocks as they are being created. -``` +```console ls ./example-data/tempo ``` @@ -56,18 +57,19 @@ ls ./example-data/tempo ```console docker-compose logs -f synthetic-load-generator -. -. +``` +``` synthetic-load-generator_1 | 20/10/24 08:27:09 INFO ScheduledTraceGenerator: Emitted traceId 57aedb829f352625 for service frontend route /product synthetic-load-generator_1 | 20/10/24 08:27:09 INFO ScheduledTraceGenerator: Emitted traceId 25fa96b9da24b23f for service frontend route /cart synthetic-load-generator_1 | 20/10/24 08:27:09 INFO ScheduledTraceGenerator: Emitted traceId 15b3ad814b77b779 for service frontend route /shipping synthetic-load-generator_1 | 20/10/24 08:27:09 INFO ScheduledTraceGenerator: Emitted traceId 3803db7d7d848a1a for service frontend route /checkout -. ``` Logs are in the form -`Emitted traceId for service frontend route /cart` +``` +Emitted traceId for service frontend route /cart +``` Copy one of these trace ids. @@ -84,45 +86,46 @@ docker-compose down -v In this example tempo is configured to write data to S3 via MinIO which presents an S3 compatible API. 1. First start up the s3 stack. -``` +```console docker-compose -f docker-compose.s3.minio.yaml up -d ``` At this point, the following containers should be spun up - ```console -$ docker-compose ps +docker-compose ps +``` +``` Name Command State Ports ------------------------------------------------------------------------------------------------------------------------------------- docker-compose_grafana_1 /run.sh Up 0.0.0.0:3000->3000/tcp docker-compose_minio_1 sh -euc mkdir -p /data/tem ... Up 0.0.0.0:9000->9000/tcp docker-compose_prometheus_1 /bin/prometheus --config.f ... Up 0.0.0.0:9090->9090/tcp docker-compose_synthetic-load-generator_1 ./start.sh Up -docker-compose_tempo-query_1 /go/bin/query-linux --grpc ... Up 0.0.0.0:16686->16686/tcp docker-compose_tempo_1 /tempo -config.file=/etc/t ... Up 0.0.0.0:32770->14268/tcp, 0.0.0.0:3100->3100/tcp ``` 2. If you're interested you can see the wal/blocks as they are being created. Navigate to minio at http://localhost:9000 and use the username/password of `tempo`/`supersecret`. - 3. The synthetic-load-generator is now printing out trace ids it's flushing into Tempo. To view its logs use - ```console docker-compose logs -f synthetic-load-generator -. -. 
+``` +``` synthetic-load-generator_1 | 20/10/24 08:26:55 INFO ScheduledTraceGenerator: Emitted traceId 48367daf25266daa for service frontend route /currency synthetic-load-generator_1 | 20/10/24 08:26:55 INFO ScheduledTraceGenerator: Emitted traceId 10e50d2aca58d5e7 for service frontend route /cart synthetic-load-generator_1 | 20/10/24 08:26:55 INFO ScheduledTraceGenerator: Emitted traceId 51a4ac1638ee4c63 for service frontend route /shipping synthetic-load-generator_1 | 20/10/24 08:26:55 INFO ScheduledTraceGenerator: Emitted traceId 1219370c6a796a6d for service frontend route /product -. ``` Logs are in the form + ``` Emitted traceId for service frontend route /cart ``` + Copy one of these trace ids. 4. Navigate to [Grafana](http://localhost:3000/explore?orgId=1&left=%5B%22now-1h%22,%22now%22,%22Tempo%22,%7B%7D%5D) and paste the trace id to request it from Tempo. @@ -138,14 +141,17 @@ docker-compose -f docker-compose.s3.minio.yaml down -v In this example tempo is configured to write data to Azure via Azurite which presents an Azure compatible API. 1. First start up the azure stack. -``` + +```console docker-compose -f docker-compose.azure.azurite.yaml up -d ``` At this point, the following containers should be spun up - ```console -$ docker-compose -f docker-compose.azure.azurite.yaml ps +docker-compose -f docker-compose.azure.azurite.yaml ps +``` +``` Name Command State Ports -------------------------------------------------------------------------------------------------------------------------------------- docker-compose_azure-cli_1 az storage container creat ... Exit 0 @@ -153,30 +159,29 @@ docker-compose_azurite_1 docker-entrypoint.sh azuri ... Up docker-compose_grafana_1 /run.sh Up 0.0.0.0:3000->3000/tcp docker-compose_prometheus_1 /bin/prometheus --config.f ... Up 0.0.0.0:9090->9090/tcp docker-compose_synthetic-load-generator_1 ./start.sh Up -docker-compose_tempo-query_1 /go/bin/query-linux --grpc ... Up 0.0.0.0:16686->16686/tcp docker-compose_tempo_1 /tempo -config.file=/etc/t ... Up 0.0.0.0:32768->14268/tcp, 0.0.0.0:3100->3100/tcp ``` 2. If you're interested you can see the wal/blocks as they are being created. Check [Azure Storage Explorer](https://azure.microsoft.com/en-us/features/storage-explorer/) and [Azurite docs](https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azurite). - 3. The synthetic-load-generator is now printing out trace ids it's flushing into Tempo. To view its logs use - ```console docker-compose logs -f synthetic-load-generator -. -. +``` +``` synthetic-load-generator_1 | 20/10/24 08:26:55 INFO ScheduledTraceGenerator: Emitted traceId 48367daf25266daa for service frontend route /currency synthetic-load-generator_1 | 20/10/24 08:26:55 INFO ScheduledTraceGenerator: Emitted traceId 10e50d2aca58d5e7 for service frontend route /cart synthetic-load-generator_1 | 20/10/24 08:26:55 INFO ScheduledTraceGenerator: Emitted traceId 51a4ac1638ee4c63 for service frontend route /shipping synthetic-load-generator_1 | 20/10/24 08:26:55 INFO ScheduledTraceGenerator: Emitted traceId 1219370c6a796a6d for service frontend route /product -. ``` Logs are in the form + ``` Emitted traceId for service frontend route /cart ``` + Copy one of these trace ids. 4. Navigate to [Grafana](http://localhost:3000/explore?orgId=1&left=%5B%22now-1h%22,%22now%22,%22Tempo%22,%7B%7D%5D) and paste the trace id to request it from Tempo. @@ -194,30 +199,33 @@ This example presents a complete setup using Loki to process all container logs, 1. 
First we have to install the Loki docker driver. This allows applications in our docker-compose to ship their logs to Loki. -``` +```console docker plugin install grafana/loki-docker-driver:latest --alias loki --grant-all-permissions ``` 2. Next start up the Loki stack. -``` + +```console docker-compose -f docker-compose.loki.yaml up -d ``` At this point, the following containers should be spun up - ```console -$ docker-compose -f docker-compose.loki.yaml ps +docker-compose -f docker-compose.loki.yaml ps +``` +``` Name Command State Ports ------------------------------------------------------------------------------------------------ docker-compose_grafana_1 /run.sh Up 0.0.0.0:3000->3000/tcp docker-compose_loki_1 /usr/bin/loki -config.file ... Up 0.0.0.0:3100->3100/tcp docker-compose_prometheus_1 /bin/prometheus --config.f ... Up 0.0.0.0:9090->9090/tcp -docker-compose_tempo-query_1 /go/bin/query-linux --grpc ... Up 0.0.0.0:16686->16686/tcp docker-compose_tempo_1 /tempo -storage.trace.back ... Up 0.0.0.0:32774->14268/tcp ``` 3. Navigate to [Grafana](http://localhost:3000/explore?orgId=1&left=%5B%22now-1h%22,%22now%22,%22Loki%22,%7B%7D%5D) and **query Loki a few times to generate some traces** (this setup does not use the synthetic load generator and all traces are generated from Loki). Something like the below works, but feel free to explore other options! + ``` {container_name="dockercompose_loki_1"} ``` diff --git a/example/helm/readme.md b/example/helm/readme.md index fba576775dd..fa1ab0e998f 100644 --- a/example/helm/readme.md +++ b/example/helm/readme.md @@ -8,7 +8,6 @@ better at demonstrating trace discovery flows using Loki and other tools. If you're convinced this is the place for you then keep reading! ### Initial Steps - To test the Helm example locally requires: - k3d > v3.2.0 @@ -20,11 +19,16 @@ Create a cluster k3d cluster create tempo --api-port 6443 --port "16686:80@loadbalancer" ``` +If you wish to use a local image, you can import these into k3d + +```console +k3d image import grafana/tempo:latest --cluster tempo +``` + Next either deploy the microservices or the single binary. ### Microservices -The microservices deploy of Tempo is fault tolerant, high volume, independently scalable. This jsonnet is in use by -Grafana to run our hosted Tempo offering. +The microservices deploy of Tempo is fault tolerant, high volume, independently scalable. ```console # double check you're applying to your local k3d before running this! @@ -48,9 +52,12 @@ kubectl create -f single-binary-extras.yaml ### View a trace After the applications are running check the load generators logs + ```console -kc logs synthetic-load-generator-??? -... +# you can find the exact pod name using `kubectl get pods` +kubectl logs synthetic-load-generator-??? +``` +``` 20/03/03 21:30:01 INFO ScheduledTraceGenerator: Emitted traceId e9f4add3ac7c7115 for service frontend route /product 20/03/03 21:30:01 INFO ScheduledTraceGenerator: Emitted traceId 3890ea9c4d7fab00 for service frontend route /cart 20/03/03 21:30:01 INFO ScheduledTraceGenerator: Emitted traceId c36fc5169bf0693d for service frontend route /cart diff --git a/example/tk/readme.md b/example/tk/readme.md index 41427d80f21..4f114735805 100644 --- a/example/tk/readme.md +++ b/example/tk/readme.md @@ -8,18 +8,23 @@ better at demonstrating trace discovery flows using Loki and other tools. If you're convinced this is the place for you then keep reading! ### Initial Steps - The Jsonnet is meant to be applied to with [tanka](https://github.com/grafana/tanka). 
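For readers new to tanka, the preview/apply loop used throughout this readme looks roughly like the sketch below. `tk show` and `tk diff` only render or diff the manifests without touching the cluster, and the environment name is simply the directory under `example/tk` (use `tempo-single-binary` instead for the single binary deploy).

```console
# render the manifests locally without applying anything
tk show tempo-microservices

# compare the rendered manifests against what is currently in the cluster
tk diff tempo-microservices

# apply - double check you're pointed at your local k3d cluster first!
tk apply tempo-microservices
```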
To test the jsonnet locally requires: - k3d > v3.2.0 - tanka > v0.12.0 -Create a cluster +Create a cluster with 3 nodes ```console k3d cluster create tempo --api-port 6443 --port "16686:80@loadbalancer" ``` +If you wish to use a local image, you can import these into k3d + +```console +k3d image import grafana/tempo:latest --cluster tempo +``` + Next either deploy the microservices or the single binary. ### Microservices @@ -42,9 +47,12 @@ tk apply tempo-single-binary ### View a trace After the applications are running check the load generators logs + ```console -kc logs synthetic-load-generator-??? -... +# you can find the exact pod name using `kubectl get pods` +kubectl logs synthetic-load-generator-??? +``` +``` 20/03/03 21:30:01 INFO ScheduledTraceGenerator: Emitted traceId e9f4add3ac7c7115 for service frontend route /product 20/03/03 21:30:01 INFO ScheduledTraceGenerator: Emitted traceId 3890ea9c4d7fab00 for service frontend route /cart 20/03/03 21:30:01 INFO ScheduledTraceGenerator: Emitted traceId c36fc5169bf0693d for service frontend route /cart @@ -55,6 +63,7 @@ kc logs synthetic-load-generator-??? Extract a trace id and view it in your browser at `http://localhost:16686/trace/` ### Clean up + ```console k3d cluster delete tempo ``` diff --git a/example/tk/tempo-microservices/main.jsonnet b/example/tk/tempo-microservices/main.jsonnet index 5ab1f5495f9..08b932bffa0 100644 --- a/example/tk/tempo-microservices/main.jsonnet +++ b/example/tk/tempo-microservices/main.jsonnet @@ -27,7 +27,7 @@ minio + load + tempo { }, }, }, - vulture+:{ + vulture+: { replicas: 0, }, backend: 's3', @@ -68,12 +68,36 @@ minio + load + tempo { tempo_ingester_container+:: $.util.resourcesRequests('500m', '500Mi'), + // clear affinity so we can run multiple ingesters on a single node + tempo_ingester_statefulset+: { + spec+: { + template+: { + spec+: { + affinity: {} + } + } + } + }, + tempo_querier_container+:: $.util.resourcesRequests('500m', '500Mi'), tempo_query_frontend_container+:: $.util.resourcesRequests('300m', '500Mi'), + // clear affinity so we can run multiple instances of memcached on a single node + memcached_all+: { + statefulSet+: { + spec+: { + template+: { + spec+: { + affinity: {} + } + } + } + } + }, + local ingress = $.extensions.v1beta1.ingress, ingress: ingress.new() +
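The `affinity: {}` overrides above are only there so that multiple ingester and memcached replicas can land on the same node of a small k3d cluster. A rough way to confirm scheduling worked after `tk apply tempo-microservices`, without assuming any particular pod names or labels, is to list the pods and check the node column:

```console
# every pod should eventually reach Running even on a small k3d cluster;
# -o wide adds a NODE column so you can see replicas sharing the same node
kubectl get pods -o wide
```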