v1.11.0
Thanks to @obalcells and @andyrdt, Llama-2 models should now have logit errors of ~1e-4 atol rather than ~1e0!
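
Here is a minimal sketch of how one might check that agreement, assuming access to the gated `meta-llama/Llama-2-7b-hf` weights; the model alias, keyword arguments, and prompt below are illustrative assumptions, not the exact test used in #456.

```python
# Hedged sketch: comparing TransformerLens logits against the Hugging Face
# reference implementation with a 1e-4 absolute tolerance.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformer_lens import HookedTransformer

model_name = "meta-llama/Llama-2-7b-hf"  # assumed; requires gated-weights access
tokenizer = AutoTokenizer.from_pretrained(model_name)
hf_model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float32)
tl_model = HookedTransformer.from_pretrained(
    model_name, hf_model=hf_model, tokenizer=tokenizer, device="cpu"
)

tokens = tokenizer("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    hf_logits = hf_model(tokens).logits
    tl_logits = tl_model(tokens, return_type="logits")

# Before this release the gap could be ~1e0; it should now be within ~1e-4.
print(torch.allclose(tl_logits, hf_logits, atol=1e-4))
```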
We also now pin PyTorch 2 to >= 2.1.1, due to a PyTorch issue on MPS that @jettjaniak pointed out. Thanks all!
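
A quick runtime check of that floor might look like the sketch below; the check itself is illustrative and not part of TransformerLens.

```python
# Hedged sketch: verify the installed torch meets the new >= 2.1.1 pin.
from packaging.version import Version
import torch

installed = Version(torch.__version__.split("+")[0])  # drop local tags like "+cu121"
assert installed >= Version("2.1.1"), f"torch {installed} is older than the 2.1.1 pin"
```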
What's Changed
- Fix Grokking Notebook by @ArthurConmy in #450
- Fixed current CI issues with accuracy failing for Pythia model by @bryce13950 in #451
- Fixing Llama2 numerical errors by @obalcells in #456
- Pin PyTorch2 to be at least 2.1.1 by @ArthurConmy in #457
New Contributors
- @obalcells made their first contribution in #456
Full Changelog: v1.10.0...v1.11.0