Splunk Observability Scaler - Initial PR #5910
Conversation
@@ -114,7 +114,7 @@ e2e-test-clean-crds: ## Delete all scaled objects and jobs across all namespaces

.PHONY: e2e-test-clean
e2e-test-clean: get-cluster-context ## Delete all namespaces labeled with type=e2e
-	kubectl delete ns -l type=e2e
+	microk8s kubectl delete ns -l type=e2e
I guess that you need to remove all the changes from this file.
Hi, thanks for pointing that out. I manually added "microk8s" to be able to build the project with my MicroK8s cluster that I have set up.
@@ -6,5 +6,5 @@ apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: ghcr.io/kedacore/keda
-  newName: ghcr.io/kedacore/keda
+  newName: docker.io/sschimpersplunk/keda
It should point to the official container image.
@@ -10,5 +10,5 @@ apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: ghcr.io/kedacore/keda-metrics-apiserver
  newName: ghcr.io/kedacore/keda-metrics-apiserver
It should point to the official container image.
@@ -7,5 +7,5 @@ apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: ghcr.io/kedacore/keda-admission-webhooks
  newName: ghcr.io/kedacore/keda-admission-webhooks
It should point to the official container image.
@@ -77,7 +77,7 @@ require (
	github.com/segmentio/kafka-go/sasl/aws_msk_iam_v2 v0.1.0
	github.com/spf13/cast v1.6.0
	github.com/spf13/pflag v1.0.5
-	github.com/stretchr/testify v1.8.4
+	github.com/stretchr/testify v1.9.0
Please avoid changes unrelated to your PR. Leave testify at its original version :)
Understood, will do.
	Program: s.metadata.query,
})
if err != nil {
	return -1, fmt.Errorf("error: could not execute signalflow query: %w", err)
You should return the zero value (0.0 in this case) when returning an error.
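For context, this is the usual Go convention rather than anything specific to this PR: a function returning (float64, error) hands back the zero value of float64 together with the error, and callers check the error before using the value. A minimal, self-contained sketch of the convention (the helper and its values are purely illustrative, not from the scaler):

package main

import (
	"errors"
	"fmt"
)

// parsePositive is a purely illustrative helper: on failure it returns the
// float64 zero value together with an error, not a sentinel such as -1.
func parsePositive(v float64) (float64, error) {
	if v <= 0 {
		return 0.0, errors.New("value must be positive")
	}
	return v, nil
}

func main() {
	if _, err := parsePositive(-3); err != nil {
		fmt.Println("error:", err) // callers check err before trusting the value
	}
}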
var duration time.Duration = 10000000000 // ten seconds in nano seconds

comp, err := s.apiClient.Execute(context.Background(), &signalflow.ExecuteRequest{
	Program: s.metadata.query,
})
if err != nil {
	return -1, fmt.Errorf("error: could not execute signalflow query: %w", err)
}

go func() {
	time.Sleep(duration)
	if err := comp.Stop(context.Background()); err != nil {
		s.logger.Info("Failed to stop computation")
	}
}()
You should rather use WithTimeout():
Suggested change:

Before:

var duration time.Duration = 10000000000 // ten seconds in nano seconds
comp, err := s.apiClient.Execute(context.Background(), &signalflow.ExecuteRequest{
	Program: s.metadata.query,
})
if err != nil {
	return -1, fmt.Errorf("error: could not execute signalflow query: %w", err)
}
go func() {
	time.Sleep(duration)
	if err := comp.Stop(context.Background()); err != nil {
		s.logger.Info("Failed to stop computation")
	}
}()

After:

var duration time.Duration = 10000000000 // ten seconds in nano seconds
xCtx, cancel := context.WithTimeout(ctx, duration)
defer cancel()
comp, err := s.apiClient.Execute(xCtx, &signalflow.ExecuteRequest{
	Program: s.metadata.query,
})
if err != nil {
	return 0.0, fmt.Errorf("error: could not execute signalflow query: %w", err)
}
Yes, that makes sense. I will update the function accordingly.
switch s.metadata.queryAggegrator {
case "max":
	logMessage(s.logger, "Returning max value ", max)
	return max, nil
case "min":
	logMessage(s.logger, "Returning min value ", min)
	return min, nil
case "avg":
	avg := valueSum / float64(valueCount)
	logMessage(s.logger, "Returning avg value ", avg)
	return avg, nil
default:
	return max, nil
Can these aggregations be performed as part of the Splunk query? I think it would be more reliable to use server side logic.
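As a hedged illustration of that idea (not part of this PR), the aggregation could be expressed in the SignalFlow program itself so the scaler receives a single, already-reduced stream. buildProgram is a hypothetical helper and 'demo.trans.latency' is a placeholder metric name; the min()/mean()/max() stream methods are standard SignalFlow aggregations.

// buildProgram is a hypothetical helper that maps the scaler's aggregator
// setting to a SignalFlow program which aggregates server-side.
func buildProgram(aggregator string) string {
	switch aggregator {
	case "min":
		return "data('demo.trans.latency').min().publish()"
	case "avg":
		return "data('demo.trans.latency').mean().publish()"
	default:
		return "data('demo.trans.latency').max().publish()"
	}
}

The resulting string would then be passed as Program in the ExecuteRequest, and the client-side switch over max/min/avg could go away.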
I will look into that.
for msg := range comp.Data() {
	s.logger.Info("getQueryResult -> msg: %+v\n", msg)
	if len(msg.Payloads) == 0 {
		s.logger.Info("getQueryResult -> No data retreived. Continuing")
		continue
	}
	for _, pl := range msg.Payloads {
		value, ok := pl.Value().(float64)
		if !ok {
			return -1, fmt.Errorf("error: could not convert Splunk Observability metric value to float64")
		}
		logMessage(s.logger, "Encountering value ", value)
		if value > max {
			max = value
		}
		if value < min {
			min = value
		}
		valueSum += value
		valueCount++
	}
}
If you decide to use Splunk-side aggregation, then this loop would be significantly simpler and you should expect one and only one value.
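For illustration, a hedged sketch of that simplified read path, reusing only the calls already shown in this PR (comp.Data(), msg.Payloads, Value()); the error messages are illustrative:

// With server-side aggregation there is exactly one output stream, so the
// first non-empty payload is the result and the loop can return immediately.
for msg := range comp.Data() {
	if len(msg.Payloads) == 0 {
		continue
	}
	value, ok := msg.Payloads[0].Value().(float64)
	if !ok {
		return 0.0, fmt.Errorf("could not convert Splunk Observability metric value to float64")
	}
	return value, nil
}
return 0.0, fmt.Errorf("no data received from Splunk Observability")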
}
metric := GenerateMetricInMili(metricName, num)

logMessage(s.logger, "num", num)
num will never equal -1 here (-1 is returned from s.getQueryResult() only when an error is returned, and that scenario is handled in line 237).
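Put differently, the caller already checks the error first, so the -1 sentinel never reaches the metric generation. A hedged sketch of that pattern (the return shape and error wording are assumptions about the surrounding function, not quotes from the PR):

num, err := s.getQueryResult(ctx)
if err != nil {
	// The only case where getQueryResult yields -1 is also the error case,
	// and it exits here, so num below is always a real query result.
	return []external_metrics.ExternalMetricValue{}, false, fmt.Errorf("error getting query result: %w", err)
}
// An extra num == -1 check at this point would be redundant.
metric := GenerateMetricInMili(metricName, num)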
Is this still active?
Hi @JorTurFer,
I hope so :) Sebastian was off for a while, as far as I understand.
There is another PR merged supporting Splunk -> #5905
Hi @JorTurFer, this seems to be independent of our work on a Scaler for Splunk Observability.
Could we extend the current Splunk scaler instead of creating a totally new one? (IDK if it's possible or not, tbh)
Yes, I'd even think it's for the better.
Thanks ❤️ By chance, 2 different contributions related to Splunk have been done at the same time (this one and the other already merged) xD. We want to cut a release during the next week: if the PR is merged before Thursday, it'll be released next week (no rush at all; if it's not in this release, it'll be in the next one).
Any updates on this, please? Shall we extend the current Splunk scaler?
Hi @zroubalik,
Provide a description of what has been changed
Checklist
Fixes #
Relates to #