Merge branch '8.x' into mergify/bp/8.x/pr-41762
jlind23 authored Dec 30, 2024
2 parents ef081b9 + b4d2d24 commit d5981c4
Showing 122 changed files with 3,242 additions and 992 deletions.
1 change: 1 addition & 0 deletions .github/CODEOWNERS
@@ -223,6 +223,7 @@ CHANGELOG*
/x-pack/metricbeat/module/iis @elastic/obs-infraobs-integrations
/x-pack/metricbeat/module/istio/ @elastic/obs-cloudnative-monitoring
/x-pack/metricbeat/module/mssql @elastic/obs-infraobs-integrations
/x-pack/metricbeat/module/openai @elastic/obs-infraobs-integrations
/x-pack/metricbeat/module/oracle @elastic/obs-infraobs-integrations
/x-pack/metricbeat/module/panw @elastic/obs-infraobs-integrations
/x-pack/metricbeat/module/prometheus/ @elastic/obs-cloudnative-monitoring
1 change: 1 addition & 0 deletions CHANGELOG-developer.next.asciidoc
@@ -108,6 +108,7 @@ The list below covers the major changes between 7.0.0-rc2 and main only.
- AWS CloudWatch Metrics record previous endTime to use for next collection period and change log.logger from cloudwatch to aws.cloudwatch. {pull}40870[40870]
- Fix flaky test in cel and httpjson inputs of filebeat. {issue}40503[40503] {pull}41358[41358]
- Fix documentation and implementation of raw message handling in Filebeat http_endpoint by removing it. {pull}41498[41498]
- Fix flaky test in filebeat Okta entity analytics provider. {issue}42059[42059] {pull}42123[42123]

==== Added

6 changes: 5 additions & 1 deletion CHANGELOG.next.asciidoc
@@ -191,6 +191,9 @@ https://github.com/elastic/beats/compare/v8.8.1\...main[Check the HEAD diff]
- Redact authorization headers in HTTPJSON debug logs. {pull}41920[41920]
- Further rate limiting fix in the Okta provider of the Entity Analytics input. {issue}40106[40106] {pull}41977[41977]
- Fix streaming input handling of invalid or empty websocket messages. {pull}42036[42036]
- Fix awss3 document ID construction when using the CSV decoder. {pull}42019[42019]
- Fix S3 event `_id` generation by incorporating the LastModified field, ensuring each `_id` is unique. {pull}42078[42078]
- Fix Netflow Template Sharing configuration handling. {pull}42080[42080]

*Heartbeat*

@@ -226,7 +229,6 @@ https://github.com/elastic/beats/compare/v8.8.1\...main[Check the HEAD diff]
- Don't skip first bucket value in GCP metrics metricset for distribution type metrics {pull}41822[41822]
- Fixed `creation_date` scientific notation output in the `elasticsearch.index` metricset. {pull}42053[42053]


*Osquerybeat*


@@ -416,7 +418,9 @@ https://github.com/elastic/beats/compare/v8.8.1\...main[Check the HEAD diff]
- Add support for region/zone for Vertex AI service in GCP module {pull}41551[41551]
- Add support for location label as an optional configuration parameter in GCP metrics metricset. {issue}41550[41550] {pull}41626[41626]
- Add support for podman metrics in docker module. {pull}41889[41889]
- Collect .NET CLR (IIS) Memory, Exceptions and LocksAndThreads metrics {pull}41929[41929]
- Added `tier_preference`, `creation_date` and `version` fields to the `elasticsearch.index` metricset. {pull}41944[41944]
- Add new OpenAI (`openai`) module for tracking usage data. {pull}41516[41516]
- Add `use_performance_counters` to collect CPU metrics using performance counters on Windows for `system/cpu` and `system/core` {pull}41965[41965]

*Metricbeat*
8 changes: 4 additions & 4 deletions NOTICE.txt
@@ -13617,11 +13617,11 @@ Contents of probable licence file $GOMODCACHE/github.com/elastic/elastic-agent-l

--------------------------------------------------------------------------------
Dependency : github.com/elastic/elastic-agent-system-metrics
Version: v0.11.5
Version: v0.11.6
Licence type (autodetected): Apache-2.0
--------------------------------------------------------------------------------

Contents of probable licence file $GOMODCACHE/github.com/elastic/[email protected].5/LICENSE.txt:
Contents of probable licence file $GOMODCACHE/github.com/elastic/[email protected].6/LICENSE.txt:

Apache License
Version 2.0, January 2004
@@ -26608,11 +26608,11 @@ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

--------------------------------------------------------------------------------
Dependency : golang.org/x/net
Version: v0.30.0
Version: v0.33.0
Licence type (autodetected): BSD-3-Clause
--------------------------------------------------------------------------------

Contents of probable licence file $GOMODCACHE/golang.org/x/net@v0.30.0/LICENSE:
Contents of probable licence file $GOMODCACHE/golang.org/x/net@v0.33.0/LICENSE:

Copyright 2009 The Go Authors.

52 changes: 42 additions & 10 deletions filebeat/_meta/config/filebeat.inputs.reference.yml.tmpl
@@ -770,25 +770,57 @@ filebeat.inputs:
# Journald input is experimental.
#- type: journald
#enabled: true
#id: service-foo

# You may wish to have separate inputs for each service. You can use
# include_matches.or to specify a list of filter expressions that are
# applied as a logical OR.
#include_matches.match:
#- _SYSTEMD_UNIT=foo.service
# Unique ID among all inputs. If the ID changes, all entries
# will be re-ingested.
id: my-journald-id

# List of syslog identifiers
#syslog_identifiers: ["audit"]
# Specify paths to read from custom journal files.
# Leave it unset to read the system's journal
# Glob based paths.
#paths:
#- /var/log/custom.journal

# The position to start reading from the journal, valid options are:
# - head: Starts reading at the beginning of the journal.
# - tail: Starts reading at the end of the journal.
# This means that no events will be sent until a new message is written.
# - since: Use the `since` option to determine when to start reading.
#seek: head

# A time offset from the current time to start reading from.
# To use `since`, the `seek` option must be set to `since`.
#since: -24h

# Collect events from the service and messages about the service,
# including coredumps.
#units: ["docker.service"]
#units:
#- docker.service

# List of syslog identifiers
#syslog_identifiers: ["audit"]

# The list of transports (_TRANSPORT field of journald entries)
#transports: ["audit"]

# Parsers are also supported, here is an example of the multiline
# Filter logs by facilities; they must be specified using their numeric codes.
#facilities:
#- 1
#- 2

# You may wish to have separate inputs for each service. You can use
# include_matches.or to specify a list of filter expressions that are
# applied as a logical OR.
#include_matches.match:
#- _SYSTEMD_UNIT=foo.service

# Uses the original hostname of the entry instead of the one
# from the host running journald
#save_remote_hostname: false

# Parsers are also supported, the possible parsers are:
# container, include_message, multiline, ndjson, syslog.
# Here is an example of the multiline
# parser.
#parsers:
#- multiline:
23 changes: 23 additions & 0 deletions filebeat/_meta/config/filebeat.inputs.yml.tmpl
@@ -41,3 +41,26 @@ filebeat.inputs:
#fields:
# level: debug
# review: 1

# journald is an input for collecting logs from Journald
- type: journald

# Unique ID among all inputs. If the ID changes, all entries
# will be re-ingested.
id: my-journald-id

# The position to start reading from the journal, valid options are:
# - head: Starts reading at the beginning of the journal.
# - tail: Starts reading at the end of the journal.
# This means that no events will be sent until a new message is written.
# - since: Use the `since` option to determine when to start reading.
#seek: head

# A time offset from the current time to start reading from.
# To use `since`, the `seek` option must be set to `since`.
#since: -24h

# Collect events from the service and messages about the service,
# including coredumps.
#units:
#- docker.service
8 changes: 4 additions & 4 deletions filebeat/channel/outlet.go
@@ -18,8 +18,9 @@
package channel

import (
"sync/atomic"

"github.com/elastic/beats/v7/libbeat/beat"
"github.com/elastic/beats/v7/libbeat/common/atomic"
)

type outlet struct {
@@ -31,15 +32,14 @@ type outlet struct {
func newOutlet(client beat.Client) *outlet {
o := &outlet{
client: client,
isOpen: atomic.MakeBool(true),
done: make(chan struct{}),
}
o.isOpen.Store(true)
return o
}

func (o *outlet) Close() error {
isOpen := o.isOpen.Swap(false)
if isOpen {
if o.isOpen.Swap(false) {
close(o.done)
return o.client.Close()
}
52 changes: 42 additions & 10 deletions filebeat/filebeat.reference.yml
@@ -1183,25 +1183,57 @@ filebeat.inputs:
# Journald input is experimental.
#- type: journald
#enabled: true
#id: service-foo

# You may wish to have separate inputs for each service. You can use
# include_matches.or to specify a list of filter expressions that are
# applied as a logical OR.
#include_matches.match:
#- _SYSTEMD_UNIT=foo.service
# Unique ID among all inputs. If the ID changes, all entries
# will be re-ingested.
id: my-journald-id

# List of syslog identifiers
#syslog_identifiers: ["audit"]
# Specify paths to read from custom journal files.
# Leave it unset to read the system's journal
# Glob based paths.
#paths:
#- /var/log/custom.journal

# The position to start reading from the journal, valid options are:
# - head: Starts reading at the beginning of the journal.
# - tail: Starts reading at the end of the journal.
# This means that no events will be sent until a new message is written.
# - since: Use the `since` option to determine when to start reading.
#seek: head

# A time offset from the current time to start reading from.
# To use `since`, the `seek` option must be set to `since`.
#since: -24h

# Collect events from the service and messages about the service,
# including coredumps.
#units: ["docker.service"]
#units:
#- docker.service

# List of syslog identifiers
#syslog_identifiers: ["audit"]

# The list of transports (_TRANSPORT field of journald entries)
#transports: ["audit"]

# Parsers are also supported, here is an example of the multiline
# Filter logs by facilities; they must be specified using their numeric codes.
#facilities:
#- 1
#- 2

# You may wish to have separate inputs for each service. You can use
# include_matches.or to specify a list of filter expressions that are
# applied as a logical OR.
#include_matches.match:
#- _SYSTEMD_UNIT=foo.service

# Uses the original hostname of the entry instead of the one
# from the host running journald
#save_remote_hostname: false

# Parsers are also supported, the possible parsers are:
# container, include_message, multiline, ndjson, syslog.
# Here is an example of the multiline
# parser.
#parsers:
#- multiline:
23 changes: 23 additions & 0 deletions filebeat/filebeat.yml
@@ -54,6 +54,29 @@ filebeat.inputs:
# level: debug
# review: 1

# journald is an input for collecting logs from Journald
- type: journald

# Unique ID among all inputs. If the ID changes, all entries
# will be re-ingested.
id: my-journald-id

# The position to start reading from the journal, valid options are:
# - head: Starts reading at the beginning of the journal.
# - tail: Starts reading at the end of the journal.
# This means that no events will be sent until a new message is written.
# - since: Use the `since` option to determine when to start reading.
#seek: head

# A time offset from the current time to start reading from.
# To use `since`, the `seek` option must be set to `since`.
#since: -24h

# Collect events from the service and messages about the service,
# including coredumps.
#units:
#- docker.service

# ============================== Filebeat modules ==============================

filebeat.config.modules:
@@ -23,6 +23,7 @@ import (
"fmt"
"strings"
"sync"
"sync/atomic"
"testing"
"time"

@@ -32,7 +33,6 @@
"github.com/elastic/beats/v7/filebeat/input/filestream/internal/task"
input "github.com/elastic/beats/v7/filebeat/input/v2"
"github.com/elastic/beats/v7/libbeat/beat"
"github.com/elastic/beats/v7/libbeat/common/atomic"
"github.com/elastic/beats/v7/libbeat/tests/resources"
"github.com/elastic/elastic-agent-libs/logp"
)
@@ -128,7 +128,7 @@ func TestDefaultHarvesterGroup(t *testing.T) {

t.Run("assert a harvester is only started if the harvester limit hasn't been reached", func(t *testing.T) {
var wg sync.WaitGroup
var harvesterRunningCount atomic.Int
var harvesterRunningCount atomic.Int64
var harvester1Finished, harvester2Finished atomic.Bool
done1, done2 := make(chan struct{}), make(chan struct{})

8 changes: 4 additions & 4 deletions filebeat/input/filestream/internal/input-logfile/store.go
@@ -21,9 +21,9 @@
"fmt"
"strings"
"sync"
"sync/atomic"
"time"

"github.com/elastic/beats/v7/libbeat/common/atomic"
"github.com/elastic/beats/v7/libbeat/common/cleanup"
"github.com/elastic/beats/v7/libbeat/common/transform/typeconv"
"github.com/elastic/beats/v7/libbeat/statestore"
@@ -438,14 +438,14 @@ func (r *resource) isDeleted() bool {
// Retain is used to indicate that 'resource' gets an additional 'owner'.
// Owners of a resource can be active inputs or pending update operations
// not yet written to disk.
func (r *resource) Retain() { r.pending.Inc() }
func (r *resource) Retain() { r.pending.Add(1) }

// Release reduces the ownership counter of the resource.
func (r *resource) Release() { r.pending.Dec() }
func (r *resource) Release() { r.pending.Add(^uint64(0)) }

// UpdatesReleaseN is used to release ownership of N pending update operations.
func (r *resource) UpdatesReleaseN(n uint) {
r.pending.Sub(uint64(n))
r.pending.Add(^uint64(n - 1))
}

// Finished returns true if the resource is not in use and if there are no pending updates