This document contains detailed information about the CRDs used by the Logging operator.
Available CRDs:
- loggings.logging.banzaicloud.io
- outputs.logging.banzaicloud.io
- flows.logging.banzaicloud.io
- clusteroutputs.logging.banzaicloud.io
- clusterflows.logging.banzaicloud.io
You can find example YAMLs here.
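To verify that these CRDs are registered on your cluster, a generic check with standard kubectl commands (not tied to any particular install method):

```bash
# Lists the logging-operator CRDs registered with the API server
kubectl get crd | grep logging.banzaicloud.io
```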
The `Logging` resource defines the logging infrastructure for your cluster. You can define one or more `Logging` resources. This resource holds together a logging pipeline: it is responsible for deploying `fluentd` and `fluent-bit` on the cluster, and it declares a `controlNamespace` and, if applicable, `watchNamespaces`.

Note: `Logging` resources are referenced by `loggingRef`. If you set up multiple logging flows, you have to reference the other objects through this field (see the sketch below). This can happen if you want to run multiple Fluentd instances with separate configurations.
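For illustration, a hedged sketch of a `Flow` that targets a specific logging deployment through `loggingRef`; the names `second-logging` and `sample-output` are hypothetical placeholders for your own resources:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: sample-flow
  namespace: default
spec:
  loggingRef: second-logging   # hypothetical; must match the loggingRef of the target Logging resource
  outputRefs:
    - sample-output            # hypothetical Output name
```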
You can install the `Logging` resource via the Helm chart with built-in TLS generation (see the sketch below).
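A minimal sketch of such a Helm-based install (Helm 3 syntax); the chart repository URL, chart name, and value key below are assumptions based on Banzai Cloud chart conventions, so verify them against the installation guide before use:

```bash
# Assumed chart repository for the logging-operator charts
helm repo add banzaicloud-stable https://kubernetes-charts.banzaicloud.com
helm repo update
# Assumed chart that creates a Logging resource with TLS generation enabled;
# expects the logging namespace to exist (created later in this document)
helm install logging-demo banzaicloud-stable/logging-operator-logging \
  --namespace logging --set tls.enabled=true
```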
A logging pipeline consists of two types of resources:

- `Namespaced` resources: `Flow`, `Output`
- `Global` resources: `ClusterFlow`, `ClusterOutput`

The namespaced resources are only effective in their own namespace. Global resources operate cluster wide.

You can only create `ClusterFlow` and `ClusterOutput` in the `controlNamespace`. It MUST be a protected namespace that only administrators have access to.
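For example, a hedged sketch of a `ClusterFlow` created in the `controlNamespace`; the referenced `ClusterOutput` name is hypothetical:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: cluster-flow-sample
  namespace: logging            # must be the controlNamespace of the owning Logging resource
spec:
  outputRefs:
    - cluster-output-sample     # hypothetical ClusterOutput created in the same namespace
  selectors:
    app: nginx
```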
Create a namespace for logging:

```bash
kubectl create ns logging
```
`logging` plain example:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: default-logging-simple
  namespace: logging
spec:
  fluentd: {}
  fluentbit: {}
  controlNamespace: logging
```
`logging` with filtered namespaces:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: default-logging-namespaced
  namespace: logging
spec:
  fluentd: {}
  fluentbit: {}
  controlNamespace: logging
  watchNamespaces: ["prod", "test"]
```
Name | Type | Default | Description |
---|---|---|---|
loggingRef | string | "" | Reference name of the logging deployment |
flowConfigCheckDisabled | bool | false | Disable configuration check before deploy |
flowConfigOverride | string | "" | Use static configuration instead of generated config. |
fluentbit | FluentbitSpec | {} | Fluent-bit configurations |
fluentd | FluentdSpec | {} | Fluentd configurations |
watchNamespaces | []string | "" | Limit namespaces from where to read Flow and Output specs |
controlNamespace | string | "" | Control namespace that contains ClusterOutput and ClusterFlow resources |
enableRecreateWorkloadOnImmutableFieldChange | bool | false | Recreate workloads that cannot be updated, see details below |
enableRecreateWorkloadOnImmutableFieldChange

Not all fields can be updated on Kubernetes objects. This is especially true for StatefulSets and DaemonSets. In case a change requires recreating the fluentd/fluent-bit workloads, use this field to move on, but make sure you understand the consequences:

- As for fluentd: to avoid data loss, make sure to use a persistent volume for buffers (`logging.spec.fluentd.bufferStorageVolume`), which is the default, unless explicitly disabled or configured differently.
- As for fluent-bit: to avoid duplicated logs, make sure to configure a hostPath volume for the positions through `logging.spec.fluentbit.positiondb`.
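A minimal sketch of enabling the flag, reusing the resource names from the examples in this document:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: default-logging-simple
spec:
  enableRecreateWorkloadOnImmutableFieldChange: true
  fluentd: {}
  fluentbit: {}
  controlNamespace: logging
```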
You can customize the `fluentd` statefulset with the following parameters.
Name | Type | Default | Description |
---|---|---|---|
annotations | map[string]string | {} | Extra annotations to Kubernetes resource |
labels | map[string]string | {} | Extra labels for fluentd and its related resources |
tls | TLS | {} | Configure TLS settings |
image | ImageSpec | {} | Fluentd image override |
fluentdPvcSpec | PersistentVolumeClaimSpec | {} | Deprecated, use BufferStorageVolume |
bufferStorageVolume | KubernetesStorage | nil | Fluentd PVC spec to mount persistent volume for Buffer |
disablePvc | bool | false | Disable PVC binding |
volumeModImage | ImageSpec | {} | Volume modifier image override |
configReloaderImage | ImageSpec | {} | Config reloader image override |
resources | ResourceRequirements | {} | Resource requirements and limits |
port | int | 24240 | Fluentd target port |
tolerations | Toleration | {} | Pod toleration |
nodeSelector | NodeSelector | {} | A node selector represents the union of the results of one or more label queries over a set of nodes |
metrics | Metrics | {} | Metrics defines the service monitor endpoints |
security | Security | {} | Security defines Fluentd, Fluentbit deployment security properties |
podPriorityClassName | string | "" | Name of a priority class to launch fluentd with |
fluentLogDestination | string | "null" | Send internal fluentd logs to stdout, or use "null" to omit them, see: https://docs.fluentd.org/deployment/logging#capture-fluentd-logs |
fluentOutLogrotate | FluentOutLogrotate | nil | Write to file instead of stdout and configure logrotate params. The operator configures it by default to write to /fluentd/log/out. https://docs.fluentd.org/deployment/logging#output-to-log-file |
scaling | Scaling | {replicas: 1} | Fluentd scaling configuration i.e replica count |
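For example, a sketch of raising the fluentd replica count via the `scaling` field listed above (the rest of the manifest mirrors the plain example from earlier):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: default-logging-simple
spec:
  fluentd:
    scaling:
      replicas: 2   # default is 1, see the table above
  fluentbit: {}
  controlNamespace: logging
```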
`logging` with custom PVC volume for buffers:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: default-logging-simple
spec:
  fluentd:
    bufferStorageVolume:
      pvc:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 40Gi
          storageClassName: fast
          volumeMode: Filesystem
  fluentbit: {}
  controlNamespace: logging
```
`logging` with custom hostPath volume for buffers:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: default-logging-simple
spec:
  fluentd:
    disablePvc: true
    bufferStorageVolume:
      hostPath:
        path: "" # leave it empty to automatically generate: /opt/logging-operator/default-logging-simple/default-logging-simple-fluentd-buffer
  fluentbit: {}
  controlNamespace: logging
```
Name | Type | Default | Description |
---|---|---|---|
annotations | map[string]string | {} | Extra annotations to Kubernetes resource |
labels | map[string]string | {} | Extra labels for fluent-bit and its related resources |
tls | TLS | {} | Configure TLS settings |
image | ImageSpec | {} | Fluent-bit image override |
resources | ResourceRequirements | {} | Resource requirements and limits |
targetHost | string | Fluentd host | Hostname to send the logs forward |
targetPort | int | Fluentd port | Port to send the logs forward |
parser | string | cri | Change fluent-bit input parse configuration. Available parsers |
tolerations | Toleration | {} | Pod toleration |
metrics | Metrics | {} | Metrics defines the service monitor endpoints |
security | Security | {} | Security defines Fluentd, Fluentbit deployment security properties |
position_db | | | Deprecated, use positiondb instead |
positiondb | KubernetesStorage | nil | Add position db storage support. If nothing is configured an emptyDir volume will be used. |
inputTail | InputTail | {} | Preconfigured tailer for container logs on the host. Container runtime (containerd vs. docker) is automatically detected for convenience. |
filterKubernetes | FilterKubernetes | {} | Fluent Bit Kubernetes Filter allows to enrich your log files with Kubernetes metadata. |
bufferStorage | BufferStorage | | Buffer Storage configures persistent buffer to avoid losing data in case of a failure |
bufferStorageVolume | KubernetesStorage | nil | Volume definition for the Buffer Storage. If nothing is configured an emptydir volume will be used. |
customConfigSecret | string | "" | Custom secret to use as fluent-bit config. It must include all the config files necessary to run fluent-bit (fluent-bit.conf, parsers*.conf) |
podPriorityClassName | string | "" | Name of a priority class to launch fluentbit with |
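For illustration, a hedged sketch of pointing fluent-bit at an externally managed fluentd endpoint via `targetHost` and `targetPort`; the hostname below is a placeholder, and the fluentd section is omitted because logs are forwarded outside the cluster:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: default-logging-simple
spec:
  fluentbit:
    targetHost: fluentd.example.com   # placeholder for your external fluentd host
    targetPort: 24240                 # matches the default fluentd port above
  controlNamespace: logging
```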
`logging` with custom fluent-bit annotations:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: default-logging-simple
spec:
  fluentd: {}
  fluentbit:
    annotations:
      my-annotations/enable: "true" # annotation values must be strings
  controlNamespace: logging
```
`logging` with hostPath volumes for buffers and positions:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: default-logging-simple
spec:
  fluentd: {}
  fluentbit:
    bufferStorageVolume:
      hostPath:
        path: "" # leave it empty to automatically generate
    positiondb:
      hostPath:
        path: "" # leave it empty to automatically generate
  controlNamespace: logging
```
Override default images
Name | Type | Default | Description |
---|---|---|---|
repository | string | "" | Image repository |
tag | string | "" | Image tag |
pullPolicy | string | "" | Always, IfNotPresent, Never |
`logging` with custom fluentd image:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: default-logging-simple
spec:
  fluentd:
    image:
      repository: banzaicloud/fluentd
      tag: v1.7.4-alpine-12
      pullPolicy: IfNotPresent
  fluentbit: {}
  controlNamespace: logging
```
Define TLS certificate secret
Name | Type | Default | Description |
---|---|---|---|
enabled | bool | false | Enable TLS encryption |
secretName | string | "" | Kubernetes secret that contains: tls.crt, tls.key, ca.crt |
sharedKey | string | "" | Shared secret for fluentd authentication |
`logging` setup with TLS:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: default-logging-tls
spec:
  fluentd:
    tls:
      enabled: true
      secretName: fluentd-tls
      sharedKey: asdadas
  fluentbit:
    tls:
      enabled: true
      secretName: fluentbit-tls
      sharedKey: asdadas
  controlNamespace: logging
```
Define Kubernetes storage
Name | Type | Default | Description |
---|---|---|---|
host_path | | | Deprecated, use hostPath instead |
hostPath | HostPathVolumeSource | - | Represents a host path mapped into a pod. If path is empty, it will automatically be set to "/opt/logging-operator//" |
emptyDir | EmptyDirVolumeSource | - | Represents an empty directory for a pod. |
pvc | [PersistentVolumeClaim](#persistent-volume-claim) | - | A PersistentVolumeClaim (PVC) is a request for storage by a user. |
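For completeness, a short sketch of the `emptyDir` variant applied to the fluent-bit position database (assuming the field layout in the table above):

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: default-logging-simple
spec:
  fluentd: {}
  fluentbit:
    positiondb:
      emptyDir: {}   # positions are lost when the pod is deleted or rescheduled
  controlNamespace: logging
```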
Name | Type | Default | Description |
---|---|---|---|
spec | PersistentVolumeClaimSpec | - | Spec defines the desired characteristics of a volume requested by a pod author. |
source | PersistentVolumeClaimVolumeSource | - | PersistentVolumeClaimVolumeSource references the user's PVC in the same namespace. |
The Persistent Volume Claim should be created with the given `spec` and with the name defined in the `source`'s `claimName`.
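A hedged sketch of combining the two fields for the fluentd buffer volume; the claim name `fluentd-buffer` is hypothetical:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: default-logging-simple
spec:
  fluentd:
    bufferStorageVolume:
      pvc:
        source:
          claimName: fluentd-buffer   # hypothetical name; the PVC is created with this name
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 10Gi
  fluentbit: {}
  controlNamespace: logging
```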
Redirect fluentd's stdout to a file and configure rotation settings.

This is important to avoid fluentd getting into a ripple effect when there is an error and the error message gets back into the system as a log message, which generates another error, and so on.

Default settings configured by the operator:
```yaml
spec:
  fluentd:
    fluentOutLogrotate:
      enabled: true
      path: /fluentd/log/out
      age: 10
      size: 10485760
```
Disabling it and writing to stdout (not recommended):
```yaml
spec:
  fluentd:
    fluentOutLogrotate:
      enabled: false
```
Outputs are the final stage of a logging flow. You can define multiple `outputs` and attach them to multiple `flows`.

Note: A `Flow` can be connected to an `Output` or a `ClusterOutput`, but a `ClusterFlow` is only attachable to a `ClusterOutput`.

The supported `Output` plugins are documented here.
Name | Type | Default | Description |
---|---|---|---|
Output Definitions | Output | nil | Named output definitions |
loggingRef | string | "" | Specified logging resource reference to connect Output and ClusterOutput to |
`output` s3 example:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: s3-output-sample
spec:
  s3:
    aws_key_id:
      valueFrom:
        secretKeyRef:
          name: s3-secret
          key: awsAccessKeyId
          namespace: default
    aws_sec_key:
      valueFrom:
        secretKeyRef:
          name: s3-secret
          key: awsSecretAccesKey
          namespace: default
    s3_bucket: example-logging-bucket
    s3_region: eu-west-1
    path: logs/${tag}/%Y/%m/%d/
    buffer:
      timekey: 1m
      timekey_wait: 10s
      timekey_use_utc: true
```
Flows define a logging flow that declares the `filters` and `outputs`.

Note: `Flow` resources are namespaced, so the `selector` only selects `Pod` logs within the flow's namespace. `ClusterFlow` selects logs from ALL namespaces.
Name | Type | Default | Description |
---|---|---|---|
selectors | map[string]string | {} | Kubernetes label selectors for the log. |
filters | []Filter | [] | List of applied filters. |
loggingRef | string | "" | Specified logging resource reference to connect Flow and ClusterFlow to |
outputRefs | []string | [] | List of Outputs or ClusterOutputs names |
`flow` example with filters and output in the `default` namespace:

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: flow-sample
  namespace: default
spec:
  filters:
    - parser:
        remove_key_name_field: true
        parse:
          type: nginx
    - tag_normaliser:
        format: ${namespace_name}.${pod_name}.${container_name}
  outputRefs:
    - s3-output
  selectors:
    app: nginx
```
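After applying the manifests above, a generic way to confirm that the operator has picked them up, using the plural resource names of the CRDs listed at the top of this document:

```bash
# The Logging examples were created in the logging namespace
kubectl get loggings -n logging
# The Flow and Output examples were created in the default namespace
kubectl get flows,outputs -n default
```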