For background, not supporting tolerances was an intentional decision from the beginning. Using non-zero tolerances makes it easy to get false positives and lets visual regressions slip through. Small changes (not caught by the tests) can build up, and whoever makes the final change that causes the tests to fail has to address all of the built-up problems.
macOS Catalina changed the rendering process on the simulator from software rendering to GPU-based rendering (with Metal). Unfortunately, this introduced some variance in how snapshot images are generated across different machines (see uber/ios-snapshot-test-case#109), which is causing snapshot tests to fail. Currently, the only known workaround is to increase the pixel-matching tolerance.
For the snapshots of individual frames (SnapshotVerify(animation:on:at:) and associated methods), implementing tolerances is quite straightforward, since it calls through to FBSnapshotVerifyView. There's still an open question of what exactly supporting tolerances for the animated snapshots should look like, though. For single frames, there are two tolerance parameters: perPixelTolerance, which measures how different an individual pixel can be without the framework considering it to have changed, and overallTolerance, which measures what portion of pixels in the image can differ (based on the per-pixel determination) without the framework considering the image to have changed. With the animated snapshot images, there's a third dimension: the number of frames in the image that have changed. I could see this coming together in a few different ways (sketched in code after the list), for example:
overallTolerance applies equally to each frame in the image. If any single frame surpasses this tolerance, the comparison fails.
overallTolerance applies to the image as a whole. In other words, instead of comparing the image frame-by-frame, we add up the number of different pixels across all frames and divide that by the total number of pixels in all frames.
overallTolerance gets split into two different measurements, something like frameTolerance and sequenceTolerance. These would measure the portion of differing pixels in each frame and the number of differing frames in the sequence, respectively.
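To make the options concrete, here's a rough sketch of how each strategy might compare a sequence of frames. It assumes we already have a per-frame count of the pixels that exceeded perPixelTolerance; the FrameDiff type and function names below are hypothetical, not part of the framework:

```swift
/// Hypothetical per-frame diff result: the number of pixels in one frame that
/// exceeded perPixelTolerance, along with the frame's total pixel count.
struct FrameDiff {
    var differingPixels: Int
    var totalPixels: Int

    /// Portion of the frame's pixels that differ, in [0, 1].
    var differingPortion: Double {
        Double(differingPixels) / Double(totalPixels)
    }
}

/// Option 1: overallTolerance applies equally to each frame; any single frame
/// over the tolerance fails the comparison.
func framesMatchPerFrame(_ diffs: [FrameDiff], overallTolerance: Double) -> Bool {
    diffs.allSatisfy { $0.differingPortion <= overallTolerance }
}

/// Option 2: overallTolerance applies to the image as a whole, so differing
/// pixels are summed across the full sequence before dividing.
func framesMatchAggregate(_ diffs: [FrameDiff], overallTolerance: Double) -> Bool {
    let differing = diffs.reduce(0) { $0 + $1.differingPixels }
    let total = diffs.reduce(0) { $0 + $1.totalPixels }
    return Double(differing) / Double(total) <= overallTolerance
}

/// Option 3: a frameTolerance decides which frames count as "differing", and a
/// sequenceTolerance bounds how many frames may differ (expressed here as a
/// portion of frames; it could equally be a raw frame count).
func framesMatchSplit(
    _ diffs: [FrameDiff],
    frameTolerance: Double,
    sequenceTolerance: Double
) -> Bool {
    let differingFrames = diffs.filter { $0.differingPortion > frameTolerance }.count
    return Double(differingFrames) / Double(diffs.count) <= sequenceTolerance
}
```

One trade-off worth noting: option 2 would let a single badly broken frame hide behind many unchanged frames, since its differing pixels get averaged across the whole sequence, while option 3 gives the most control at the cost of a second parameter.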
We should do some experimentation to find the approach that resolves the GPU-based rendering problem while still catching regressions that are as minor as possible. It sounds like adjusting the perPixelTolerance is the main lever for addressing the GPU problem, so we might be able to introduce only that one for now.
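For reference, the per-pixel check itself could work roughly the way iOSSnapshotTestCase's does, where a pixel counts as unchanged if each color channel differs by no more than the tolerance (expressed as a fraction of the channel's range). The Pixel type and function below are illustrative, not the framework's actual implementation:

```swift
/// Illustrative RGBA pixel with 8-bit channels.
struct Pixel {
    var red: UInt8
    var green: UInt8
    var blue: UInt8
    var alpha: UInt8
}

/// A pixel "matches" if every channel is within perPixelTolerance (0...1) of
/// the reference. For example, a tolerance of 0.02 allows each channel to
/// drift by up to ~5/255, small enough to absorb minor rendering variance
/// without masking structural changes.
func pixelMatches(_ lhs: Pixel, _ rhs: Pixel, perPixelTolerance: Double) -> Bool {
    let maxChannelDelta = [
        abs(Int(lhs.red) - Int(rhs.red)),
        abs(Int(lhs.green) - Int(rhs.green)),
        abs(Int(lhs.blue) - Int(rhs.blue)),
        abs(Int(lhs.alpha) - Int(rhs.alpha)),
    ].max() ?? 0
    return Double(maxChannelDelta) / 255.0 <= perPixelTolerance
}
```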
Animation snapshot tests do not currently support per-pixel or overall tolerance, which are supported by the underlying FBSnapshotTestCase.
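For comparison, a recent version of iOSSnapshotTestCase exposes both tolerances on its Swift verification helper, so a single-frame test can opt in along these lines (the view under test here is just a placeholder):

```swift
import FBSnapshotTestCase
import UIKit

final class ExampleSnapshotTests: FBSnapshotTestCase {
    func testExampleView() {
        // Placeholder view; a real test would snapshot a meaningful view.
        let view = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
        view.backgroundColor = .systemBlue

        // Each pixel's channels may differ by up to 2%, and up to 1% of the
        // pixels may differ overall, before the comparison fails.
        FBSnapshotVerifyView(view, perPixelTolerance: 0.02, overallTolerance: 0.01)
    }
}
```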