
[Elastic Agent] Docker Integration: Error while extracting container ID from source path: index is out of range for field 'log.file.path' #34102

Closed
renzedj opened this issue Dec 22, 2022 · 2 comments
Labels: needs_team, Stalled

Comments

renzedj commented Dec 22, 2022

Configuration

Using Elastic Cloud 8.5.3/Elastic Agent 8.5.3/Docker integration 2.3.0

Issue

I am attempting to monitor Docker container logs and metrics using the Fleet-managed Docker integration with Elastic Agent. Metrics are being ingested as expected; however, logs are not. I've validated that the container logs are where the integration expects them to be (/var/lib/docker/containers/${docker.container.id}/*-json.log).

When I set the Elastic Agent log level to debug, I see the following in the agent logs:

12:47:01.468
elastic_agent.filebeat
[elastic_agent.filebeat][debug] Error while extracting container ID from source path: index is out of range for field 'log.file.path'

The inputs section of the policy generated by Fleet is:

inputs:
  - id: docker/metrics-docker-89b8c9ff-d930-407e-97ce-d2dc253b5fe6
    name: docker
    revision: 9
    type: docker/metrics
    use_output: default
    meta:
      package:
        name: docker
        version: 2.3.0
    data_stream:
      namespace: default
    package_policy_id: 89b8c9ff-d930-407e-97ce-d2dc253b5fe6
    streams:
      - id: docker/metrics-docker.container-89b8c9ff-d930-407e-97ce-d2dc253b5fe6
        data_stream:
          dataset: docker.container
          type: metrics
        period: 10s
        hosts:
          - 'unix:///var/run/docker.sock'
        metricsets:
          - container
        labels.dedot: true
      - id: docker/metrics-docker.cpu-89b8c9ff-d930-407e-97ce-d2dc253b5fe6
        data_stream:
          dataset: docker.cpu
          type: metrics
        period: 10s
        hosts:
          - 'unix:///var/run/docker.sock'
        metricsets:
          - cpu
        labels.dedot: true
      - id: docker/metrics-docker.diskio-89b8c9ff-d930-407e-97ce-d2dc253b5fe6
        data_stream:
          dataset: docker.diskio
          type: metrics
        period: 10s
        hosts:
          - 'unix:///var/run/docker.sock'
        metricsets:
          - diskio
        labels.dedot: true
        skip_major:
          - 9
          - 253
      - id: docker/metrics-docker.event-89b8c9ff-d930-407e-97ce-d2dc253b5fe6
        data_stream:
          dataset: docker.event
          type: metrics
        period: 10s
        hosts:
          - 'unix:///var/run/docker.sock'
        metricsets:
          - event
        labels.dedot: true
      - id: docker/metrics-docker.healthcheck-89b8c9ff-d930-407e-97ce-d2dc253b5fe6
        data_stream:
          dataset: docker.healthcheck
          type: metrics
        period: 10s
        hosts:
          - 'unix:///var/run/docker.sock'
        metricsets:
          - healthcheck
        labels.dedot: true
      - id: docker/metrics-docker.info-89b8c9ff-d930-407e-97ce-d2dc253b5fe6
        data_stream:
          dataset: docker.info
          type: metrics
        period: 10s
        hosts:
          - 'unix:///var/run/docker.sock'
        metricsets:
          - info
      - id: docker/metrics-docker.memory-89b8c9ff-d930-407e-97ce-d2dc253b5fe6
        data_stream:
          dataset: docker.memory
          type: metrics
        period: 10s
        hosts:
          - 'unix:///var/run/docker.sock'
        metricsets:
          - memory
        labels.dedot: true
      - id: docker/metrics-docker.network-89b8c9ff-d930-407e-97ce-d2dc253b5fe6
        data_stream:
          dataset: docker.network
          type: metrics
        period: 10s
        hosts:
          - 'unix:///var/run/docker.sock'
        metricsets:
          - network
        labels.dedot: true
  - id: filestream-docker-89b8c9ff-d930-407e-97ce-d2dc253b5fe6
    name: docker
    revision: 9
    type: filestream
    use_output: default
    meta:
      package:
        name: docker
        version: 2.3.0
    data_stream:
      namespace: default
    package_policy_id: 89b8c9ff-d930-407e-97ce-d2dc253b5fe6
    streams:
      - id: 'docker-container-logs-${docker.container.name}-${docker.container.id}'
        data_stream:
          dataset: docker.container_logs
          type: logs
        paths:
          - '/var/lib/docker/containers/${docker.container.id}/*-json.log'
        parsers:
          - container:
              stream: all
              format: docker
        processors: null

Workaround

I found a similar issue for Kubernetes (#27216), so based on that I added the following to the processors in my Fleet-managed configuration:

- add_docker_metadata:
    match_source_index: 4

I still receive this error on startup (presumably from the default Filebeat configuration); however, logs are now being ingested as expected.
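
For reference, this is roughly how the filestream stream looks once the workaround is applied (a sketch, not the exact policy Fleet renders; as I understand the add_docker_metadata documentation, match_source_index is the index of the container ID when the source path is split on "/"):

      - id: 'docker-container-logs-${docker.container.name}-${docker.container.id}'
        data_stream:
          dataset: docker.container_logs
          type: logs
        paths:
          - '/var/lib/docker/containers/${docker.container.id}/*-json.log'
        parsers:
          - container:
              stream: all
              format: docker
        processors:
          # added manually per the workaround above
          - add_docker_metadata:
              match_source_index: 4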

botelastic bot added the needs_team label on Dec 22, 2022
botelastic bot commented Dec 22, 2022

This issue doesn't have a Team:<team> label.

renzedj changed the title from "[Elastic Agent]" to "[Elastic Agent] Docker Integration: Error while extracting container ID from source path: index is out of range for field 'log.file.path'" on Dec 22, 2022
botelastic bot commented Dec 22, 2023

Hi!
We just realized that we haven't looked into this issue in a while. We're sorry!

We're labeling this issue as Stale to make it hit our filters and make sure we get back to it as soon as possible. In the meantime, it'd be extremely helpful if you could take a look at it as well and confirm its relevance. A simple comment with a nice emoji will be enough :+1:
Thank you for your contribution!

botelastic bot added the Stalled label on Dec 22, 2023
botelastic bot closed this as completed on Jun 19, 2024