Good summary over email, Boban. I also like the ideal testing approach that Theo outlined; specifically, our tests should have clear coverage goals.
One benefit of only checking that outputs match is that we don't have to manage canonical results, which quickly become painful to maintain. Let's move forward with the following decisions:
I'll assume we're aligned, but please feel free to interject if not. @petrovicboban, could you take the first pass at setting up the cross-language script?

Regarding the actual test content: both the Python and R tests use the external Iris dataset. We can test the output of just a single predict call across Python and R (see links). You can DM me directly if you run into any blockers.

Regarding priority: this would make testing the Python package easier, but it does not explicitly block the release. We should ensure that other direct blockers are resolved first.

[1] Boban suggested we separate the Kanban boards into specific milestones. One future milestone I had in mind was landing the training speed-up in C++. The first step could be writing unit tests for the training code, then refactoring safely under the coverage those tests provide.
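For the cross-language script, one minimal sketch (assuming each language writes its predictions as a single-column CSV; the tolerance value and the idea of invoking each side via `subprocess` with hypothetical script names like `predict.py` / `predict.R` are my assumptions, not something settled in the thread):

```python
import csv
import io
import math


def predictions_match(csv_a: str, csv_b: str, tol: float = 1e-6) -> bool:
    """Compare two single-column prediction CSVs row by row within a tolerance.

    csv_a / csv_b would be the stdout of the Python and R predict scripts,
    e.g. collected via subprocess.run(["python", "predict.py"], ...) and
    subprocess.run(["Rscript", "predict.R"], ...) -- script names hypothetical.
    """
    rows_a = [float(row[0]) for row in csv.reader(io.StringIO(csv_a))]
    rows_b = [float(row[0]) for row in csv.reader(io.StringIO(csv_b))]
    if len(rows_a) != len(rows_b):
        # Different number of predictions is an immediate failure.
        return False
    # Absolute tolerance, since predictions from two runtimes will differ
    # slightly in floating point even when the models agree.
    return all(math.isclose(a, b, abs_tol=tol) for a, b in zip(rows_a, rows_b))
```

Comparing with a tolerance rather than exact string equality avoids false failures from floating-point formatting differences between Python and R.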
Moving this over from our email thread.
This is a summary of the current state:
So, what we don't have now is point 3 from Theo's email – output comparison – and the question is where and when to do it.