It is very helpful, especially when testing Merino, to have a known search term with an expected output. For some providers this is trivial (Wikifruit, for example). For others it is possible but harder (adm-rs). For some future providers (a remote API, for example) it may be practically impossible to know what to expect without significant work.
To make this task easier, we should automate it: add a Merino API endpoint that produces a list of sample queries and identifies the expected output for each. Some providers may not be able to provide any samples, but many will.
Add an endpoint /api/v1/sample
The endpoint should take the same parameters as the suggest endpoint, except q.
The API should respond with a list of objects, each specifying a query and the ID of the suggestion that query would produce. The results need only be relevant for the caller; that is, if location, client_variants, etc. change, the results may not be valid.
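As a rough sketch of that response shape (the field names, the bare-array layout, and the use of a UUID for the suggestion ID are all assumptions for illustration, not decisions):

```rust
// Sketch only: field names and types are illustrative, not part of the proposal.
use serde::Serialize;
use uuid::Uuid;

/// One entry in the /api/v1/sample response: a query string plus the ID of
/// the suggestion that query is expected to produce for this caller.
#[derive(Debug, Serialize)]
pub struct SampleEntry {
    pub query: String,
    pub suggestion_id: Uuid,
}

// The endpoint would serialize a Vec<SampleEntry>, i.e. a JSON array like
// [{"query": "apple", "suggestion_id": "..."}, ...].
```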
This should be driven by a new method on the suggestion provider trait that returns a list of sampled suggestion responses (which may be a new struct type, or perhaps we re-use the existing one).
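A minimal sketch of what that trait addition could look like, using a simplified stand-in for Merino's existing suggestion provider trait and suggestion type (the method name `sample_queries` and the `SampledSuggestion` struct are hypothetical):

```rust
use async_trait::async_trait;

/// Stand-in for Merino's existing suggestion type, which has more fields in
/// the real code base.
pub struct Suggestion {
    pub id: uuid::Uuid,
    pub title: String,
    pub url: String,
}

/// A query paired with the suggestion it is expected to produce.
pub struct SampledSuggestion {
    pub query: String,
    pub suggestion: Suggestion,
}

/// Simplified provider trait showing only what matters for this proposal.
#[async_trait]
pub trait SuggestionProvider {
    /// Existing behavior: respond to a live query.
    async fn suggest(&self, query: &str) -> Vec<Suggestion>;

    /// Proposed addition: return query/suggestion pairs whose answers are
    /// known ahead of time. Providers that cannot sample (e.g. ones backed
    /// by a remote API) return an empty list.
    async fn sample_queries(&self) -> Vec<SampledSuggestion> {
        Vec::new()
    }
}
```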
With this in place, one testing workflow that becomes possible is to request the set of sample queries from the server and then use each sampled query to check for the expected result in a more complex environment, such as a Firefox automated test.
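For illustration, a standalone version of that check might look roughly like the following; the host, the JSON field names, and the shape of the suggest response are assumptions carried over from the sketches above, not Merino's actual API contract:

```rust
use serde::Deserialize;
use uuid::Uuid;

#[derive(Deserialize)]
struct SampleEntry {
    query: String,
    suggestion_id: Uuid,
}

#[derive(Deserialize)]
struct SuggestResponse {
    suggestions: Vec<ReturnedSuggestion>,
}

#[derive(Deserialize)]
struct ReturnedSuggestion {
    id: Uuid,
}

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let base = "http://localhost:8000";

    // Ask the server which queries it knows the answers to.
    let samples: Vec<SampleEntry> = reqwest::get(format!("{base}/api/v1/sample"))
        .await?
        .json()
        .await?;

    // Replay each sampled query against the real suggest endpoint and check
    // that the promised suggestion comes back.
    for sample in samples {
        let resp: SuggestResponse =
            reqwest::get(format!("{base}/api/v1/suggest?q={}", sample.query))
                .await?
                .json()
                .await?;
        assert!(
            resp.suggestions.iter().any(|s| s.id == sample.suggestion_id),
            "query {:?} did not return the expected suggestion",
            sample.query
        );
    }
    Ok(())
}
```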