feat: Update test harness with metadata assertions #1467 #289

Open
wants to merge 5 commits into main from the Update-test-harness-#1467 branch

Conversation


@chrfwow chrfwow commented Jan 13, 2025

This PR

Adds gherkin test to verify that flag evaluations provide metadata

Related Issues

Part of #1467 (open-feature/flagd#1467)

Follow-up Tasks

Implement steps of the gherkin file in the repositories, and add test data according to the issue

@chrfwow chrfwow requested a review from toddbaert as a code owner January 13, 2025 12:19
@chrfwow chrfwow force-pushed the Update-test-harness-#1467 branch from feec001 to ff27af0 Compare January 13, 2025 12:21
Comment on lines +8 to +9
Given a Boolean-flag with key "metadata-flag" and a default value "true"
When the flag was evaluated with details
Member

Suggested change
Given a Boolean-flag with key "metadata-flag" and a default value "true"
When the flag was evaluated with details
Given a Boolean-flag with key "metadata-flag" is evaluated with default value "false"

Let's use the same sentence structure as the other steps; this way the same bindings will "just work" and less code needs to be implemented.

Member

I would specifically separate those two. Sometimes you need flag information without an evaluation, hence it is only a given state. But the when is the dedicated action, e.g. when it was modified. With this, we separate data and action. We already did the same thing within the testbed.
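
To illustrate the split (the step wording is only a sketch, not the final binding text): the Given carries the data, the When is the dedicated action, and the Then holds the assertion.

Given a Boolean-flag with key "metadata-flag" and a default value "true"
When the flag was evaluated with details
Then the resolved metadata value "string" with type "String" should be "1.0.2"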

Comment on lines 15 to 25
Scenario Outline: Returns no metadata
Given a <type>-flag with key "<key>" and a default value "<default_value>"
When the flag was evaluated with details
Then the resolved metadata is empty

Examples: Flags
| key | type | default_value |
| boolean-flag | Boolean | true |
| integer-flag | Integer | 23 |
| float-flag | Float | 2.3 |
| string-flag | String | value |
Member

I'm not sure these empty-metadata tests are worth it TBH. cc @aepfli what do you think?

Member

Yeah, this was more of a little exploratory showcase from my side. I still think a negative case makes sense, to ensure we are not polluting the response.

Comment on lines 10 to 13
Then the resolved metadata value "string" with type "String" should be "1.0.2"
And the resolved metadata value "integer" with type "Integer" should be "2"
And the resolved metadata value "double" with type "Double" should be "0.1"
And the resolved metadata value "boolean" with type "Boolean" should be "true"
Member

We could probably use a table for this, with columns for the metadata key, type, and value.
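
For example (the step text is just a sketch; exact wording is up to the binding authors), the assertions could collapse into a single step with a data table:

Then the resolved metadata should contain
| key | metadata_type | value |
| string | String | 1.0.2 |
| integer | Integer | 2 |
| double | Double | 0.1 |
| boolean | Boolean | true |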

@toddbaert
Member

toddbaert commented Jan 13, 2025

@aepfli do you think it makes more sense to simply add metadata to the existing gherkin suite? We already have error flags there - we could add metadata to some of the existing flags and error flags, and then update the in-memory providers to support this.

@aepfli
Member

aepfli commented Jan 14, 2025

> @aepfli do you think it makes more sense to simply add metadata to the existing gherkin suite? We already have error flags there - we could add metadata to some of the existing flags and error flags, and then update the in-memory providers to support this.

I am not sure if integration into the existing suite is suitable. We could enhance the existing flags with a description of what their purpose is, but generally I prefer a flag name which already tells me what I can expect from the flag.

This also brings me to the next topic: as we are currently generating the data in each SDK on our own, it might be worth adding a JSON file here with the test data, which we load into the in-memory provider. Wdyt?

| key | metadata_type | value |
| string | String | 1.0.2 |
| integer | Integer | 2 |
| double | Double | 0.1 |
Member

Metadata type should be Float

@chrfwow chrfwow requested a review from guidobrei January 15, 2025 09:47
@toddbaert
Member

> This also brings me to the next topic: as we are currently generating the data in each SDK on our own, it might be worth adding a JSON file here with the test data, which we load into the in-memory provider. Wdyt?

Most in-memory providers can't actually load JSON, so I'm not sure how valuable that would be.

@aepfli
Member

aepfli commented Jan 15, 2025

> This also brings me to the next topic: as we are currently generating the data in each SDK on our own, it might be worth adding a JSON file here with the test data, which we load into the in-memory provider. Wdyt?

> Most in-memory providers can't actually load JSON, so I'm not sure how valuable that would be.

I would not load them automagically, but within the tests we can load the JSON files and pass the data as a parameter to the in-memory provider. We would centralize the test flag data, and with new gherkin files that only add new data for experimentation, everything should work out of the box.

data = loadJsonData(jsonfile)      // each SDK implements this conversion itself
provider = InMemoryProvider(data)  // pseudocode sketch

Co-authored-by: Simon Schrottner <[email protected]>
Signed-off-by: Todd Baert <[email protected]>
@toddbaert
Member

> This also brings me to the next topic: as we are currently generating the data in each SDK on our own, it might be worth adding a JSON file here with the test data, which we load into the in-memory provider. Wdyt?

> Most in-memory providers can't actually load JSON, so I'm not sure how valuable that would be.

> I would not load them automagically, but within the tests we can load the JSON files and pass the data as a parameter to the in-memory provider. We would centralize the test flag data, and with new gherkin files that only add new data for experimentation, everything should work out of the box.
>
> data = loadJsonData(jsonfile)      // each SDK implements this conversion itself
> provider = InMemoryProvider(data)  // pseudocode sketch

Oh, so all implementations would write a simple function loadJsonData or similar which would do the conversion? I guess that's not a bad idea since it might be fairly future-proof when new things are added.

@toddbaert toddbaert changed the title feat: Update test harness (add assertions) #1467 feat: Update test harness with metadata assertions #1467 Jan 15, 2025
@aepfli
Member

aepfli commented Jan 15, 2025

> Oh, so all implementations would write a simple function loadJsonData or similar which would do the conversion? I guess that's not a bad idea since it might be fairly future-proof when new things are added.

Exactly, and this way we ensure everyone gets the same data after all.
