fix(OTEL): audit of OTEL docs #18627

Merged
merged 32 commits
Sep 30, 2024
Commits
2ffc9f0
fix(OTEL): edits to OTEL intro doc
ally-sassman Sep 10, 2024
550d5d9
Other fixes to overview page, updated yml naming
ally-sassman Sep 10, 2024
3723d47
fix(OTEL): updates to getting started overview
ally-sassman Sep 10, 2024
2a4cb11
fix(OTEL): add tiles
ally-sassman Sep 10, 2024
2f37aaf
Update opentelemetry-get-started-intro.mdx
ally-sassman Sep 10, 2024
a591ab0
update links on data overview page
ally-sassman Sep 10, 2024
029322c
other fixes
ally-sassman Sep 10, 2024
56441c8
Other fixes
ally-sassman Sep 17, 2024
511990b
Update opentelemetry-data-overview.mdx
ally-sassman Sep 26, 2024
7a79fb3
revert yml changes
ally-sassman Sep 26, 2024
ad7b013
uppercase all references of "Collector"
ally-sassman Sep 26, 2024
dfc31bc
improve troubleshooting page
ally-sassman Sep 26, 2024
ed40507
other fixes
ally-sassman Sep 26, 2024
ea790dc
Merge branch 'develop' into OTEL-doc-review
ally-sassman Sep 26, 2024
60bccfc
Update opentelemetry-otlp.mdx
ally-sassman Sep 26, 2024
bfecb66
remove space between api and sdk
ally-sassman Sep 26, 2024
4e31f04
Update opentelemetry-best-practices-traces.mdx
ally-sassman Sep 26, 2024
7533801
remove collector titlecase from non-OTel references
ally-sassman Sep 26, 2024
dc78891
fix last remaining collector references
ally-sassman Sep 26, 2024
af1d7d8
Update src/content/docs/apm/agents/ruby-agent/configuration/ruby-agen…
ally-sassman Sep 26, 2024
0a24074
Update apm-agent-security-python.mdx
ally-sassman Sep 26, 2024
c9a55b2
Merge branch 'OTEL-doc-review' of https://github.com/newrelic/docs-we…
ally-sassman Sep 26, 2024
6f540b5
Update src/content/docs/serverless-function-monitoring/aws-lambda-mon…
ally-sassman Sep 26, 2024
c396eee
Update src/nav/opentelemetry.yml
ally-sassman Sep 26, 2024
5373ff8
Update src/nav/opentelemetry.yml
ally-sassman Sep 26, 2024
8269a0c
update screenshot
ally-sassman Sep 26, 2024
5628c94
Merge branch 'OTEL-doc-review' of https://github.com/newrelic/docs-we…
ally-sassman Sep 26, 2024
f47549b
merge changes from #18683
ally-sassman Sep 27, 2024
275eb36
remove slash from apis/sdks references
ally-sassman Sep 27, 2024
d63d714
add missing lowercased collector references
ally-sassman Sep 27, 2024
5dc2ea5
Apply suggestions from code review
ally-sassman Sep 27, 2024
b725826
fix(OTEL): Fixing a popover
Sep 30, 2024
Expand Up @@ -216,7 +216,7 @@ The following instructions will guide you through setting up external services.

If your APM service is connected to an OpenTelemetry service (upstream or downstream), that OpenTelemetry service will not show up in the view for that APM service. This is because, when viewing an APM service, this feature uses metrics which are only reported by APM agents. When viewing an OpenTelemetry service, the APM service will show up as a connection.

The quality of the information you see depends on the sampling strategy you are using in the collector. See the following section about using sampling to control what you see in the UI.
The quality of the information you see depends on the sampling strategy you are using in the Collector. See the following section about using sampling to control what you see in the UI.

<Callout variant="tip">
If you send 100% of your OpenTelemetry data to our Trace API, we store 100% of that data, unless you have a specific rate limit for your organization, or if you send enough data to trigger our default rate limit.
Expand Down Expand Up @@ -348,7 +348,7 @@ The following instructions will guide you through setting up external services.
This section only applies if your services are sending data to New Relic via an OpenTelemetry Collector. This is because the data isn’t being sampled in an OpenTelemetry Collector.
</Callout>

For OpenTelemetry, all external services views are populated by sampled traces, which means that you may not see enough useful data. To resolve this, you can change the sampling in the collector to allow more data into New Relic.
For OpenTelemetry, all external services views are populated by sampled traces, which means that you may not see enough useful data. To resolve this, you can change the sampling in the Collector to allow more data into New Relic.

See [Sampling](https://opentelemetry.io/docs/concepts/sampling/) for more information.
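
For illustration, one common way to let more trace data through is a probabilistic sampler in the Collector pipeline. This is only a sketch that assumes the contrib `probabilistic_sampler` processor is available in your Collector distribution; the percentage is a placeholder to tune against your volume and rate limits:

```yaml
processors:
  probabilistic_sampler:
    # Keep roughly half of all traces; raise or lower to suit your traffic
    sampling_percentage: 50

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [probabilistic_sampler, batch]
      exporters: [otlp]
```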
</TabsPageItem>
Expand Down
Expand Up @@ -156,11 +156,11 @@ The following attributes are required in the APM service entity to display the K

### OpenTelemetry instrumentation [#otel-instrumentation]

The [OpenTelemetry collector](/docs/opentelemetry/get-started/collector-processing/opentelemetry-collector-processing-intro/) offers a Kubernetes attributes processor that enriches APM telemetry with Kubernetes metadata.
The [OpenTelemetry Collector](/docs/opentelemetry/get-started/collector-processing/opentelemetry-collector-processing-intro/) offers a Kubernetes attributes processor that enriches APM telemetry with Kubernetes metadata.

1. You need to define an environment variable in your deployment manifest.

2. Adjust the configuration of the collector to retrieve the appropriate Kubernetes metadata using this APM environment variable.
2. Adjust the configuration of the Collector to retrieve the appropriate Kubernetes metadata using this APM environment variable.

As a result, all the APM metrics and entities will include Kubernetes metadata thanks to the `K8sattributes` processor. For more information, see how to [link your OpenTelemetry applications to Kubernetes](/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/link-otel-applications-kubernetes/).
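
A minimal sketch of that processor block might look like the following (the metadata keys listed are assumptions; include only the ones you need):

```yaml
processors:
  k8sattributes:
    extract:
      metadata:
        - k8s.namespace.name
        - k8s.deployment.name
        - k8s.pod.name
        - k8s.node.name
```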

Expand Down
Expand Up @@ -2130,7 +2130,7 @@ Metrics can also be enriched with extended cloud metadata (including custom reso
id="is_integrations_only"
title="is_integrations_only"
>
Use this setting if you're instrumenting the host from a source other than the infrastructure agent (for example, an OpenTelemetry collector or Prometheus node exporter) and you want to keep using the infrastructure agent on-host integrations to monitor other infrastructure services.
Use this setting if you're instrumenting the host from a source other than the infrastructure agent (for example, an OpenTelemetry Collector or Prometheus node exporter) and you want to keep using the infrastructure agent on-host integrations to monitor other infrastructure services.
When enabled, the agent reports host inventory and integrations telemetry (event metrics and inventory) decorated with host metadata, but host metrics (CPU, memory, disk, network, processes) are disabled.
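
For reference, this is a single boolean flag in the agent's `newrelic-infra.yml`; the line below is a sketch of how you would turn it on:

```yaml
# newrelic-infra.yml (sketch): keep on-host integrations, disable host metrics
is_integrations_only: true
```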

<table>
Expand Down
Expand Up @@ -51,26 +51,26 @@ Before enabling OpenTelemetry in Apache Airflow, you'll need to install the Airf

## Configuration [#configuration]

To send Airflow metrics to New Relic, configure the OpenTelemetry metrics to export data to an [OpenTelemetry collector](/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/collector/opentelemetry-collector-intro/), which will then forward the data to a New Relic [OTLP endpoint](/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/opentelemetry-setup/#note-endpoints) using a <InlinePopover type="licenseKey"/>.
To send Airflow metrics to New Relic, configure the OpenTelemetry metrics to export data to an [OpenTelemetry Collector](/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/collector/opentelemetry-collector-intro/), which will then forward the data to a New Relic [OTLP endpoint](/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/opentelemetry-setup/#note-endpoints) using a <InlinePopover type="licenseKey"/>.

<Callout variant="important">
Due to Airflow's current lack of support for sending OpenTelemetry data with authentication headers, the OpenTelemetry collector is essential for authenticating with New Relic.
Due to Airflow's current lack of support for sending OpenTelemetry data with authentication headers, the OpenTelemetry Collector is essential for authenticating with New Relic.
</Callout>

### Configure the OpenTelemetry collector [#configuration-collector]
### Configure the OpenTelemetry Collector [#configuration-collector]

1. Follow the [basic collector example](/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/collector/opentelemetry-collector-basic/) to set up your OpenTelemetry collector.
2. Configure the collector with your appropriate OTLP endpoint, such as `https://otlp.nr-data.net:4317`.
1. Follow the [basic collector example](/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/collector/opentelemetry-collector-basic/) to set up your OpenTelemetry Collector.
2. Configure the Collector with your appropriate OTLP endpoint, such as `https://otlp.nr-data.net:4317`.
3. For authentication, add your <InlinePopover type="licenseKey"/> to the environment variable `NEW_RELIC_LICENSE_KEY` so that it populates the `api-key` header.
4. Ensure port 4318 on the collector is reachable from the running Airflow instance. (For docker, you may need to use a [docker network](https://docs.docker.com/network/).)
5. Launch the collector.
4. Ensure port 4318 on the Collector is reachable from the running Airflow instance. (For docker, you may need to use a [docker network](https://docs.docker.com/network/).)
5. Launch the Collector.
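
Putting steps 1-5 together, a minimal Collector configuration could look roughly like the sketch below (the OTLP endpoint region and the environment-variable syntax are assumptions; adjust them for your account and Collector version):

```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318   # Airflow sends OTLP over HTTP to this port

exporters:
  otlp:
    endpoint: https://otlp.nr-data.net:4317
    headers:
      api-key: ${NEW_RELIC_LICENSE_KEY}   # populated from the environment variable set in step 3

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [otlp]
```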

### Configure Airflow metrics [#configuration-airflow]

Airflow sends metrics using OTLP over HTTP, which uses port `4318`. Airflow has multiple methods of [setting configuration options](https://airflow.apache.org/docs/apache-airflow/stable/howto/set-config.html).

<Callout variant="important">
If your environment has Airflow running in a docker container alongside the OpenTelemetry Collector, you will need to change the `otel_host` setting from `localhost` to the container address of the collector.
If your environment has Airflow running in a docker container alongside the OpenTelemetry Collector, you will need to change the `otel_host` setting from `localhost` to the container address of the Collector.
</Callout>

Choose one of the following methods to set the required options for Airflow.
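
For instance, using Airflow's environment-variable convention (`AIRFLOW__SECTION__KEY`), a Docker Compose style sketch might look like this (the service name and values are assumptions to adapt to your deployment):

```yaml
# Sketch: set the [metrics] options through environment variables
environment:
  AIRFLOW__METRICS__OTEL_ON: "True"
  AIRFLOW__METRICS__OTEL_HOST: "otel-collector"   # the Collector's container address
  AIRFLOW__METRICS__OTEL_PORT: "4318"
```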
Expand Down
Expand Up @@ -43,15 +43,15 @@ You need to first install the OpenTelemetry plugin from Jenkins:

## Configuration [#configuration]

You need a New Relic [OTLP Endpoint](/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/opentelemetry-setup/#note-endpoints) and an <InlinePopover type="licenseKey"/> to configure the Jenkins OpenTelemetry plugin to send data to New Relic.
You need a New Relic [OTLP endpoint](/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/opentelemetry-setup/#note-endpoints) and an <InlinePopover type="licenseKey"/> to configure the Jenkins OpenTelemetry plugin to send data to New Relic.

<img
title="Screenshot showing Jenkins OpenTelemetry configuration"
alt="Screenshot showing Jenkins OpenTelemetry configuration"
src="/images/opentelemetry_screenshot-full_integrations-jenkins-02.webp"
/>

1. Enter an OTLP Endpoint. For example, `https://otlp.nr-data.net:4317`.
1. Enter an OTLP endpoint. For example, `https://otlp.nr-data.net:4317`.
2. For authentication, select <DNT>**Header Authentication**</DNT>:
a. In the <DNT>**Header Name**</DNT> field, enter <DNT>**api-key**</DNT>.
b. In the <DNT>**Header Value**</DNT> field, enter a secret text containing your New Relic ingest license key.
Expand Down
Expand Up @@ -17,7 +17,7 @@ After completing the steps below, you can use the New Relic UI to correlate appl

This document walks you through enabling your application to inject infrastructure-specific metadata into the telemetry data. The result is that the New Relic UI is populated with actionable information. Here are the steps you'll take to get started:

* In each application container, define an environment variable to send telemetry data to the collector.
* In each application container, define an environment variable to send telemetry data to the Collector.

* Deploy the OpenTelemetry Collector as a `DaemonSet` in [agent mode](https://opentelemetry.io/docs/collector/getting-started/#agent) with `resourcedetection`, `resource`, `batch`, and `k8sattributes` processors to inject relevant metadata (cluster, deployment, and namespace names).

Expand All @@ -40,7 +40,7 @@ If you have general questions about using collectors with New Relic, see our [In

To set this up, you need to add a custom snippet to the `env` section of your Kubernetes YAML file. The example below shows the snippet for a sample frontend microservice (`Frontend.yaml`). The snippet includes 2 sections that do the following:

* <DNT>**Section 1:**</DNT> Ensure that the telemetry data is sent to the collector. This sets the environment variable `OTEL_EXPORTER_OTLP_ENDPOINT` with the host IP. It does this by calling the downward API to pull the host IP.
* <DNT>**Section 1:**</DNT> Ensure that the telemetry data is sent to the Collector. This sets the environment variable `OTEL_EXPORTER_OTLP_ENDPOINT` with the host IP. It does this by calling the downward API to pull the host IP.

* <DNT>**Section 2:**</DNT> Attach infrastructure-specific metadata. To do this, we capture `metadata.uid` using the downward API and add it to the `OTEL_RESOURCE_ATTRIBUTES` environment variable. This environment variable is used by the OpenTelemetry Collector's `resourcedetection` and `k8sattributes` processors to add additional infrastructure-specific context to telemetry data.

Expand All @@ -57,7 +57,7 @@ spec:
- name: yourfrontendservice
image: yourfrontendservice-beta
env:
# Section 1: Ensure that telemetry data is sent to the collector
# Section 1: Ensure that telemetry data is sent to the Collector
- name: HOST_IP
valueFrom:
fieldRef:
Expand All @@ -84,11 +84,11 @@ spec:
value: "service.instance.id=$(POD_NAME),k8s.pod.uid=$(POD_UID)"
```
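
The endpoint variable described in Section 1 is then typically derived from that injected host IP along these lines (a sketch; the scheme and port are assumptions and must match your Collector's OTLP receiver):

```yaml
        # Section 1 (continued, sketch): point the application at the Collector on this node
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: "http://$(HOST_IP):4317"
```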

## Configure and deploy the OpenTelemetry collector as an agent [#agent]
## Configure and deploy the OpenTelemetry Collector as an agent [#agent]

We recommend you deploy the [collector as an agent](https://opentelemetry.io/docs/collector/getting-started/#agent) on every node within a Kubernetes cluster. The agent can receive telemetry data, and enrich telemetry data with metadata. For example, the collector can add custom attributes or infrastructure information through processors, as well as handle batching, retry, compression and additional advanced features that are handled less efficiently at the client instrumentation level.
We recommend you deploy the [collector as an agent](https://opentelemetry.io/docs/collector/getting-started/#agent) on every node within a Kubernetes cluster. The agent can receive telemetry data, and enrich telemetry data with metadata. For example, the Collector can add custom attributes or infrastructure information through processors, as well as handle batching, retry, compression and additional advanced features that are handled less efficiently at the client instrumentation level.
Contributor review comment: We recommend you deploy the [Collector as an agent]
For help configuring the collector, see the sample collector configuration file below, along with the sections about setting up these options:
For help configuring the Collector, see the sample collector configuration file below, along with the sections about setting up these options:
Contributor review comment: see the sample Collector configuration
* [OTLP exporter](#otlp-exporter)
* [batch processor](#batch)
Expand Down Expand Up @@ -154,7 +154,7 @@ service:
exporters: [otlp]
```

Follow these steps to configure and deploy the OpenTelemetry collector as an agent:
Follow these steps to configure and deploy the OpenTelemetry Collector as an agent:

<Steps>
<Step>
Expand All @@ -173,7 +173,7 @@ Follow these steps to configure and deploy the OpenTelemetry collector as an age
<Step>
### Configure the batch processor [#batch]

The batch processor accepts spans, metrics, or logs and places them in batches. This makes it easier to compress data and reduce outbound requests from the collector.
The batch processor accepts spans, metrics, or logs and places them in batches. This makes it easier to compress data and reduce outbound requests from the Collector.
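
A typical batch stanza is short; the values below are a sketch to tune for your traffic rather than recommended settings:

```yaml
processors:
  batch:
    # Flush a batch when either limit is reached
    send_batch_size: 1000
    timeout: 5s
```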

```yaml
processors:
Expand All @@ -184,7 +184,7 @@ Follow these steps to configure and deploy the OpenTelemetry collector as an age
<Step>
### Configure the resource detection processor [#resource-detection]

The `resourcedetection` processor gets host-specific information to add additional context to the telemetry data being processed through the collector. In this example, we use Google Kubernetes Engine (GKE) and Google Compute Engine (GCE) to get Google Cloud-specific metadata, including:
The `resourcedetection` processor gets host-specific information to add additional context to the telemetry data being processed through the Collector. In this example, we use Google Kubernetes Engine (GKE) and Google Compute Engine (GCE) to get Google Cloud-specific metadata, including:

* `cloud.provider` ("gcp")
* `cloud.platform` ("`gcp_compute_engine`")
Expand Down Expand Up @@ -238,9 +238,9 @@ Follow these steps to configure and deploy the OpenTelemetry collector as an age
<Step>
### Configure the Kubernetes Attributes processor (discovery filter) [#discovery-filter]

When running the collector as an agent, you should apply a discovery filter so that the processor only discovers pods from the same host that it is running on. If you don't use a filter, resource usage can be unnecessarily high, especially on very large clusters. Once the filter is applied, each processor will only query the Kubernetes API for pods running on its own node.
When running the Collector as an agent, you should apply a discovery filter so that the processor only discovers pods from the same host that it is running on. If you don't use a filter, resource usage can be unnecessarily high, especially on very large clusters. Once the filter is applied, each processor will only query the Kubernetes API for pods running on its own node.
Contributor review comment: I would write this: "the same host that it's running on."
To set the filter, use the downward API to inject the node name as an environment variable in the pod `env` section of the OpenTelemetry Collector agent configuration YAML file. To see an example, check out [GitHub](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/examples/kubernetes/otel-collector-config.yml). This will inject a new environment variable into the OpenTelemetry collector agent's container. The value will be the name of the node the pod was scheduled to run on.
To set the filter, use the downward API to inject the node name as an environment variable in the pod `env` section of the OpenTelemetry Collector agent configuration YAML file. To see an example, check out [GitHub](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/examples/kubernetes/otel-collector-config.yml). This will inject a new environment variable into the OpenTelemetry Collector agent's container. The value will be the name of the node the pod was scheduled to run on.
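
Combined with that injected variable, the processor's filter itself is small. Here is a sketch (the `KUBE_NODE_NAME` name is an assumption and must match whatever variable you inject into the agent's pod spec):

```yaml
processors:
  k8sattributes:
    # Only query the Kubernetes API for pods on this agent's own node
    filter:
      node_from_env_var: KUBE_NODE_NAME
```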
Contributor review comment: I'd write this: "For an example, see the otel-collector-config.yml file on GitHub."
```yaml
spec:
Expand Down