@slaren Honestly, I think Flash Attention should be an optional feature in ggml, since it doesn't introduce significant performance improvements and the binary size has grown considerably, not to mention the compilation time: even though I only compile for my GPU architecture, a build still takes 20 minutes on an i5-12400. This is not related to this PR, but it would be good to take it into account.
I get that FA could be made optional to reduce build time. But saying "it doesn't introduce significant performance improvements" is a bit misleading. On my 4090, I got 47 T/S with FA on versus 37 T/S with it off, roughly a 27% speedup. SD generation also got a speedup with FA.
Originally posted by @FSSRepo in #11867 (comment)
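Since the thread is about making FA a build-time option, here is a minimal sketch of what such a compile-time gate could look like. `GGML_NO_FLASH_ATTN`, `attn_flash`, and `dispatch_attn` are hypothetical names for illustration, not existing ggml symbols; in a real build, the corresponding CMake option would also exclude the FA kernel sources from compilation, which is where the binary size and compile time are actually spent.

```c
// Sketch of gating flash attention behind a compile-time option.
// GGML_NO_FLASH_ATTN is an assumed flag, not an existing ggml define.
#include <stdio.h>
#include <stdbool.h>

// Fallback attention path, always compiled in.
static void attn_vec(void) {
    printf("regular attention path\n");
}

#ifndef GGML_NO_FLASH_ATTN
// Flash-attention path; in a real build this would be the expensive
// templated kernels that dominate compile time and binary size.
static void attn_flash(void) {
    printf("flash-attention path\n");
}
#endif

// Dispatcher: use FA when it was compiled in and requested,
// otherwise fall back to the regular path.
static void dispatch_attn(bool want_flash) {
#ifndef GGML_NO_FLASH_ATTN
    if (want_flash) {
        attn_flash();
        return;
    }
#else
    (void) want_flash; // FA compiled out: always fall back
#endif
    attn_vec();
}

int main(void) {
    // Prints the FA path unless built with -DGGML_NO_FLASH_ATTN.
    dispatch_attn(true);
    return 0;
}
```

Building with `-DGGML_NO_FLASH_ATTN` would then skip the expensive kernels entirely while leaving the public API unchanged, so callers would not need to know whether FA was compiled in.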