Enhancement: Improve ROCm performance on various quants (benchmarks included) #11931
Feature Description
This started with benchmarks showing variability in model performance when running different quants through CUBLAS / MMQ on different hardware, so to make it clearer where improvements are needed: benchmarks!
Git revision b4735
Relevant subset of the results from:
./bin/test-backend-ops perf -o MUL_MAT
Bar graphs are included (with and without the MI100, since it is a lot faster than the others). MI100 results provided by @IMbackK:
MI60-MI25-MI100_MAT_MUL.xlsx
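For anyone reproducing these numbers, here is a minimal sketch of building the HIP backend and rerunning the same benchmark. The exact CMake flags and GPU target below are assumptions for this revision; check docs/build.md for your setup.

```sh
# Minimal sketch: build llama.cpp (around b4735) with the HIP backend.
# -DAMDGPU_TARGETS is an assumption -- set it to your GPU, e.g. gfx906 for
# Vega20 (MI50/MI60/Radeon VII), gfx908 for MI100, gfx900 for Vega 10.
HIPCXX="$(hipconfig -l)/clang" HIP_PATH="$(hipconfig -R)" \
  cmake -S . -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx906 -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release -j

# Run only the MUL_MAT performance tests, as in the results above.
./build/bin/test-backend-ops perf -o MUL_MAT
```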
Anyone running Vega 20 (Radeon VII, Radeon Pro Vega II Duo, MI50, or MI60) should probably use Q4_0 or Q4_1 quants if they can, as those have almost twice the compute throughput available. Avoid Q2 quants, which are very slow.
On Vega 10, MMQ has reduced performance for K quants (avoid them) and slightly better compute performance for Q4_0 and Q4_1.
The MI100 sees 48-50 TFLOPS on most quants, but it should see higher performance on several of these. Currently only F16 is faster, and it is probably still underperforming: peak theoretical FP16 on the MI100 is 8x its FP32 performance.
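For users acting on the quant recommendations above, a minimal sketch of requantizing an existing F16 GGUF to Q4_0 with the llama-quantize tool (file names here are placeholders):

```sh
# Requantize an F16 GGUF to Q4_0 (file names are placeholders).
./build/bin/llama-quantize model-f16.gguf model-q4_0.gguf Q4_0
```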
Motivation
Many inexpensive large-VRAM GPUs are leaving performance on the table.
Possible Implementation
No response