Number of measurements in InfluxDB is less than in result.json #31
Comments
Thanks for reporting this and for including the queries. 🙇 Did you have the following line in the Also, have you tried a bigger time range in InfluxDB? It might turn out you just "cropped" the last few requests.
Hello! I encountered the same issue and wrote about it in the Grafana Community (https://community.grafana.com/t/loss-of-metrics-when-output-to-influxdb/116143). The reason in my case was the difference in time precision between Go and Windows, as I was running the tests on Windows. How do you run your tests?
Hello,
I am running my evaluation tests on a MacBook Pro with 50 VUs. Between each HTTP request there is a sleep timer of 300 ms, and between each iteration a sleep timer of 500 ms.
Some of the HTTP requests failed due to timeouts, and one check is expected to always fail, so there is nothing special there. Today I will run the same test with a sleep delay of 500 ms between HTTP requests and 1.5 s between iterations.
I will provide the results after the test.
Kind regards,
A. Ziganow
SimplyTest GmbH
Geschäftsführer: Alexander Heimlich, Waldemar Siebert
Sitz der Gesellschaft: Nürnberg
Registergericht: Amtsgericht Nürnberg
Registernummer: HRB 35089
Telefon: +49 (0)911 373 96 700
www.simplytest.de
Hello,
I have simplified my test: now with just 5 VUs, only one HTTP request, a run time defined by three stages (see the stage definitions below), and a sleep delay of 500 ms between iterations.
The problem is still there: InfluxDB is missing measure points.
InfluxDB: 1704
k6: http_reqs......................: 1717 8.799956/s
stages: [
{duration: '10s', target: 5},
{duration: '3m', target: 5},
{duration: '5s', target: 0}
],
From my point of view, there is a problem with concurrent writes: if I run the same script with just 1 VU, there is always the same number of records in InfluxDB and in the JSON file.
If I run with more than 1 VU, the problem occurs sometimes, and the more VUs run concurrently, the more data points fail to be written to InfluxDB.
It is also possible that records in InfluxDB are overwritten due to a unique-constraint violation (the unique key is built from measurement + tag-set + timestamp).
After my last test, it seems to be a measure-point problem and a unique-constraint issue.
I tested again with 50 VUs, but now with a global tag {test_vu: __VU} and without sleeps between iterations. After a 4-minute run I got the same number of records in InfluxDB and in the JSON file: 68004, at 242.86 reqs/s.
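The overwrite hypothesis and the `test_vu` workaround can be illustrated with a toy model. This is a hedged sketch (an assumption for illustration, not the extension's or InfluxDB's actual code) of the point-identity rule described above: a point is keyed by measurement + tag-set + timestamp, and a second write with the same key replaces the first instead of adding a record.

```python
# Toy model of InfluxDB point identity (illustration only):
# a point is keyed by measurement + tag-set + timestamp, and a second
# write with the same key silently overwrites the first point.
def count_stored(points):
    stored = {}
    for measurement, tags, ts_ns, value in points:
        stored[(measurement, tuple(sorted(tags.items())), ts_ns)] = value
    return len(stored)

TS = 1713795915000000000  # two VUs happen to emit a sample at the same nanosecond

# Without a per-VU tag the two samples collide and one is lost:
collide = [
    ("http_reqs", {"testrun": "20240422145315"}, TS, 1),
    ("http_reqs", {"testrun": "20240422145315"}, TS, 1),
]
print(count_stored(collide))  # 1

# With a per-VU tag the tag-sets differ, so both points survive:
tagged = [
    ("http_reqs", {"testrun": "20240422145315", "test_vu": "1"}, TS, 1),
    ("http_reqs", {"testrun": "20240422145315", "test_vu": "2"}, TS, 1),
]
print(count_stored(tagged))  # 2
```

This also explains why 1 VU never loses points in this model: a single VU cannot produce two samples with identical tag-set and timestamp.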
If you are correct, then this is the same problem as @marijamitevska-rldatix's: the timestamp for a given sample exactly matches another one (among the others). AFAIK the only solution for this would be to aggregate samples, and AFAIK that is not planned to be worked on for this extension. It also would not be backwards compatible, so it would likely need either a major version bump or a separate project. cc @codebien
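For readers wondering what "aggregate samples" would mean in practice, here is a minimal sketch (an illustration of the idea, not a design for this extension): sum counter samples that share the same measurement + tag-set + timestamp before writing, so a collision adds values instead of silently replacing a point.

```python
from collections import defaultdict

def aggregate_counters(samples):
    """Sum counter samples sharing measurement + tag-set + timestamp."""
    totals = defaultdict(float)
    for measurement, tags, ts_ns, value in samples:
        totals[(measurement, tuple(sorted(tags.items())), ts_ns)] += value
    # Each surviving key becomes a single point carrying the summed value.
    return [(m, dict(t), ts, v) for (m, t, ts), v in totals.items()]

samples = [
    ("http_reqs", {"testrun": "20240422145315"}, 1713795915000000000, 1),
    ("http_reqs", {"testrun": "20240422145315"}, 1713795915000000000, 1),
]
print(aggregate_counters(samples))  # one point with value 2.0
```

Note this only works for counters like http_reqs; gauges and trend metrics would need a different aggregation, which is part of why such a change would not be backwards compatible.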
Hi,
I am wondering why the number of records during the test run differs from the report summary of k6 itself and also from the JSON result output. There are always fewer records in InfluxDB.
My last run:
k6 excerpt:
http_reqs......................: 29500 132.648204/s
JQ Query:
jq '. | select(.type=="Point" and .metric == "http_reqs" and .data.tags.testrun == "20240422145315" ) | .data.value' script_results.json | jq -s "length"
29500
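The same count can be cross-checked without jq, e.g. with a few lines of Python over k6's NDJSON output. `sample_lines` below is a stand-in for the lines of script_results.json; the record shape mirrors the fields the jq query filters on.

```python
import json

def count_http_reqs(lines, testrun):
    """Count http_reqs Point records carrying a given testrun tag."""
    n = 0
    for line in lines:
        rec = json.loads(line)
        if (rec.get("type") == "Point"
                and rec.get("metric") == "http_reqs"
                and rec["data"]["tags"].get("testrun") == testrun):
            n += 1
    return n

# Stand-in for the contents of script_results.json (one JSON object per line):
sample_lines = [
    '{"type":"Point","metric":"http_reqs","data":{"value":1,"tags":{"testrun":"20240422145315"}}}',
    '{"type":"Point","metric":"http_req_duration","data":{"value":12.3,"tags":{"testrun":"20240422145315"}}}',
    '{"type":"Point","metric":"http_reqs","data":{"value":1,"tags":{"testrun":"other"}}}',
]
print(count_http_reqs(sample_lines, "20240422145315"))  # 1
```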
Influx Flux Query:
from(bucket: "k6")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "http_reqs")
|> filter(fn: (r) => r["_field"] == "value")
|> filter(fn: (r) => r["testrun"] == "20240422145315")
|> group()
|> count()
|> yield(name: "mean")
29268 ---> missing 232 records
Any idea what is going wrong here?
Kind Regards
Alexey