merge changes from #18683
ally-sassman committed Sep 27, 2024
1 parent 5628c94 commit f47549b
Showing 1 changed file with 125 additions and 184 deletions.
To successfully complete the steps below, you should already be familiar with OpenTelemetry, and have:

* Instrumented your applications with [OpenTelemetry](/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/opentelemetry-setup/), and successfully sent data to New Relic via OpenTelemetry Protocol (OTLP).

If you have general questions about using Collectors with New Relic, see our [Introduction to OpenTelemetry Collector with New Relic](/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/collector/opentelemetry-collector-intro).

## Configure your application to send telemetry data to the OpenTelemetry Collector [#instrument]

```yaml
spec:
  containers:
    - name: yourfrontendservice
      image: yourfrontendservice-beta
      env:
        # Section 1: Ensure that telemetry data is sent to the collector
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        # ...
        - name: OTEL_RESOURCE_ATTRIBUTES
          value: "service.instance.id=$(POD_NAME),k8s.pod.uid=$(POD_UID)"
```
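If the Collector agent listens on every node, the application can point its OTLP exporter at the node IP injected above. This entry is a sketch, assuming the agent exposes OTLP over gRPC on the default port 4317:

```yaml
# Hypothetical addition to the same env section: route OTLP
# traffic to the Collector agent running on this pod's node.
- name: OTEL_EXPORTER_OTLP_ENDPOINT
  value: "http://$(HOST_IP):4317"
```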
## Configure and deploy the OpenTelemetry Collector [#configure-otel-collector]

We recommend you deploy the [Collector as an agent](https://opentelemetry.io/docs/collector/getting-started/#agent) on every node within a Kubernetes cluster. The agent can receive telemetry data and enrich it with metadata. For example, the Collector can add custom attributes or infrastructure information through processors, and it can handle batching, retries, compression, and other advanced features that are handled less efficiently at the client instrumentation level.

You can choose one of these options to monitor your cluster:

* **(Recommended) [Install your Kubernetes cluster using OpenTelemetry](/docs/kubernetes-pixie/kubernetes-integration/installation/k8s-otel/#install)**: This option automatically deploys the Collector as an agent. Everything works out of the box: you'll see Kubernetes metadata in your APM telemetry and in the Kubernetes UIs.
* **Manual configuration and deployment**: If you prefer to configure the Collector manually, follow these steps:
<Steps>
<Step>
### Configure the OTLP exporter

Add an OTLP exporter to your [OpenTelemetry Collector configuration YAML file](https://opentelemetry.io/docs/collector/configuration/) along with your New Relic <InlinePopover type="licenseKey"/> as a header.

```yaml
exporters:
  otlp:
    endpoint: $OTEL_EXPORTER_OTLP_ENDPOINT
    headers:
      api-key: $NEW_RELIC_API_KEY
```
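When the environment variables are resolved at Collector startup, the rendered exporter section would typically look like this sketch (US-region New Relic OTLP endpoint shown; the key value is a placeholder):

```yaml
exporters:
  otlp:
    endpoint: https://otlp.nr-data.net:4317  # resolved from $OTEL_EXPORTER_OTLP_ENDPOINT
    headers:
      api-key: YOUR_NEW_RELIC_LICENSE_KEY    # placeholder; never commit a real key
```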
</Step>
<Step>
### Configure the batch processor
The batch processor accepts spans, metrics, or logs and places them in batches. This makes it easier to compress data and reduce outbound requests from the Collector.
```yaml
processors:
  batch:
```
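With no options set, the batch processor uses its built-in defaults. You can also tune it explicitly; the values below are illustrative, not recommendations:

```yaml
processors:
  batch:
    timeout: 5s             # flush a partial batch after 5s at the latest
    send_batch_size: 1024   # target number of spans/metrics/logs per batch
```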
</Step>
<Step>
### Configure the resource detection processor

The `resourcedetection` processor gets host-specific information to add additional context to the telemetry data being processed through the Collector. In this example, we use Google Kubernetes Engine (GKE) and Google Compute Engine (GCE) to get Google Cloud-specific metadata, including:

* `cloud.provider` (`gcp`)
* `cloud.platform` (`gcp_compute_engine`)
* `cloud.account.id`
* `cloud.region`
* `cloud.availability_zone`
* `host.id`
* `host.image.id`
* `host.type`

```yaml
processors:
  resourcedetection:
    detectors: [gke, gce]
```
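The `gke` and `gce` detectors are specific to Google Cloud. If your cluster runs elsewhere, swap in the matching detectors; for example, a hypothetical Amazon EKS setup:

```yaml
processors:
  resourcedetection:
    detectors: [eks, ec2]  # EKS cluster metadata plus EC2 host metadata
```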

</Step>
<Step>
### Configure the Kubernetes attributes processor (general)

When the `k8sattributes` processor runs as part of the OpenTelemetry Collector agent, it detects the IP addresses of pods sending telemetry data to the agent and uses them to extract pod metadata. Below is a basic example with only a `processors` section. To deploy the OpenTelemetry Collector as a `DaemonSet`, read this [comprehensive manifest example](https://github.com/newrelic-forks/microservices-demo/tree/main/src/otel-collector-agent).

```yaml
processors:
  k8sattributes:
    auth_type: "serviceAccount"
    passthrough: false
    filter:
      node_from_env_var: KUBE_NODE_NAME
    extract:
      metadata:
        - k8s.pod.name
        - k8s.pod.uid
        - k8s.deployment.name
        - k8s.cluster.name
        - k8s.namespace.name
        - k8s.node.name
        - k8s.pod.start_time
    pod_association:
      - from: resource_attribute
        name: k8s.pod.uid
```

</Step>
<Step>
### Configure the Kubernetes attributes processor (RBAC)

You need to add configurations for role-based access control (RBAC). The `k8sattributes` processor needs `get`, `watch`, and `list` permissions for the `pods` and `namespaces` resources included in the configured filters. This [example](https://github.com/newrelic-forks/microservices-demo/blob/main/otel-kubernetes-manifests/otel-collector-agent.yaml#L43-L69) shows how to configure a `ClusterRole` that gives a `ServiceAccount` the necessary permissions for all pods and namespaces in the cluster.
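As a sketch of what such a manifest contains, a `ClusterRole` with the minimum verbs bound to a hypothetical `otel-collector` `ServiceAccount` might look like this:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector
rules:
  - apiGroups: [""]
    resources: ["pods", "namespaces"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector
subjects:
  - kind: ServiceAccount
    name: otel-collector   # hypothetical name; match your deployment
    namespace: default
roleRef:
  kind: ClusterRole
  name: otel-collector
  apiGroup: rbac.authorization.k8s.io
```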
</Step>
<Step>
### Configure the Kubernetes attributes processor (discovery filter)

When running the Collector as an agent, you should apply a discovery filter so that the processor only discovers pods from the same host that it's running on. If you don't use a filter, resource usage can be unnecessarily high, especially on very large clusters. Once the filter is applied, each processor will only query the Kubernetes API for pods running on its own node.

To set the filter, use the downward API to inject the node name as an environment variable in the pod `env` section of the OpenTelemetry Collector agent configuration YAML file. For an example, see the [`otel-collector-config.yml`](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/examples/kubernetes/otel-collector-config.yml) file on GitHub. This injects a new environment variable into the OpenTelemetry Collector agent's container, whose value is the name of the node the pod was scheduled to run on.

```yaml
spec:
  containers:
    - env:
        - name: KUBE_NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
```

Then, you can filter by node with the `k8sattributes` processor:

```yaml
k8sattributes:
  filter:
    node_from_env_var: KUBE_NODE_NAME
  extract:
    metadata:
      - k8s.pod.name
      - k8s.pod.uid
      - k8s.deployment.name
      - k8s.cluster.name
      - k8s.namespace.name
      - k8s.node.name
      - k8s.pod.start_time
  pod_association:
    - from: resource_attribute
      name: k8s.pod.uid
```
</Step>

</Steps>
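Putting the steps above together, a complete agent configuration might look like this sketch. The pipeline composition and processor order are illustrative; the endpoint and API key are resolved from environment variables:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

processors:
  batch:
  resourcedetection:
    detectors: [gke, gce]
  k8sattributes:
    auth_type: "serviceAccount"
    passthrough: false
    filter:
      node_from_env_var: KUBE_NODE_NAME
    extract:
      metadata:
        - k8s.pod.name
        - k8s.pod.uid
        - k8s.deployment.name
        - k8s.cluster.name
        - k8s.namespace.name
        - k8s.node.name
        - k8s.pod.start_time
    pod_association:
      - from: resource_attribute
        name: k8s.pod.uid

exporters:
  otlp:
    endpoint: $OTEL_EXPORTER_OTLP_ENDPOINT
    headers:
      api-key: $NEW_RELIC_API_KEY

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [resourcedetection, k8sattributes, batch]
      exporters: [otlp]
    metrics:
      receivers: [otlp]
      processors: [resourcedetection, k8sattributes, batch]
      exporters: [otlp]
    logs:
      receivers: [otlp]
      processors: [resourcedetection, k8sattributes, batch]
      exporters: [otlp]
```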

## Validate that your configurations are working [#validate]

Once you have successfully linked your OpenTelemetry data with your Kubernetes data, you can verify that your configurations are working:

1. Go to <DNT>**[one.newrelic.com > All capabilities](https://one.newrelic.com/all-capabilities) > APM & Services**</DNT> and select your application inside <DNT>**Services - OpenTelemetry**</DNT>.

2. Click <DNT>**Kubernetes**</DNT> on the left navigation pane.

3. See the connection between your OpenTelemetry service and your cluster. On the right, you'll see Kubernetes attributes like `k8s.cluster.name`, `k8s.namespace.name`, and `k8s.deployment.name` as tags.

<img
  title="Kubernetes page"
  alt="This is an image of the Kubernetes APM page"
  src="/images/apm_screenshot-crop_k8-apm-ui.webp"
/>

<figcaption>
Go to <DNT>**[one.newrelic.com > All capabilities](https://one.newrelic.com/all-capabilities) > APM & Services > (selected app) > Kubernetes**</DNT>
</figcaption>


## Choose your next step [#next]

<DocTiles>