
Enhanced Quantizer with QINT16 Support #2874

Merged: 1 commit merged into nnstreamer:main on Jan 21, 2025

Conversation

djeong20 (Contributor) commented:

This PR enhances the quantizer to use the output tensor together with scale factors, yielding more accurate quantization. It also introduces support for the QINT16 data type, extending the set of supported quantization precisions. (A minimal sketch of the general idea follows the self-evaluation checklist below.)

Self-evaluation:

  1. Build test: [X] Passed [ ] Failed [ ] Skipped
  2. Run test: [X] Passed [ ] Failed [ ] Skipped
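
The description above mentions quantizing into an output tensor using scale factors and adding QINT16 support. The sketch below illustrates that general idea with per-tensor symmetric int16 quantization. It is a standalone illustration, not nntrainer's actual API: the function names (`compute_scale`, `quantize_qint16`), the symmetric scheme, and the clamping bounds are assumptions made for the example.

```cpp
// Minimal sketch of per-tensor symmetric quantization to int16 (QINT16).
// NOTE: illustrative only; the names, the symmetric scheme, and the
// clamping policy are assumptions, not nntrainer's actual implementation.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <iostream>
#include <vector>

// Derive a per-tensor scale so the largest magnitude maps to INT16_MAX.
float compute_scale(const std::vector<float> &input) {
  float max_abs = 0.0f;
  for (float v : input)
    max_abs = std::max(max_abs, std::fabs(v));
  return max_abs > 0.0f ? max_abs / 32767.0f : 1.0f;
}

// Quantize into a caller-provided output buffer using the given scale factor.
void quantize_qint16(const std::vector<float> &input, float scale,
                     std::vector<int16_t> &output) {
  output.resize(input.size());
  for (size_t i = 0; i < input.size(); ++i) {
    // Round to nearest, then clamp to the representable int16 range.
    long q = std::lround(input[i] / scale);
    q = std::min<long>(std::max<long>(q, -32768), 32767);
    output[i] = static_cast<int16_t>(q);
  }
}

int main() {
  std::vector<float> weights = {0.5f, -1.25f, 3.0f, -3.0f};
  float scale = compute_scale(weights);

  std::vector<int16_t> quantized;
  quantize_qint16(weights, scale, quantized);

  // Dequantize to check the round-trip error stays within scale/2 per element.
  for (size_t i = 0; i < weights.size(); ++i)
    std::cout << weights[i] << " -> " << quantized[i] << " -> "
              << quantized[i] * scale << '\n';
  return 0;
}
```

Keeping the scale factor alongside the quantized buffer lets the caller dequantize (value = q * scale) whenever a floating-point view of the tensor is needed.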

@djeong20 djeong20 force-pushed the update/quantizer/v2 branch 2 times, most recently from ce54643 to bc3f6af Compare January 15, 2025 01:24
@djeong20 djeong20 changed the title [Wait for #2866]Enhanced Quantizer with QINT16 Support [Wait for #2876]Enhanced Quantizer with QINT16 Support Jan 15, 2025
@baek2sm baek2sm (Contributor) left a comment:

LGTM

@djeong20 djeong20 force-pushed the update/quantizer/v2 branch from bc3f6af to 368a72b Compare January 20, 2025 07:02
@djeong20 djeong20 changed the title [Wait for #2876]Enhanced Quantizer with QINT16 Support Enhanced Quantizer with QINT16 Support Jan 20, 2025
@dkjung dkjung (Collaborator) left a comment:

LGTM

@skykongkong8 skykongkong8 (Member) left a comment:

I really like that you included brief descriptions of the new test cases. Really helpful for reviewing!

The force-pushed commit carries the same description as the PR body above, signed off by Donghyeon Jeong <[email protected]>.
@djeong20 djeong20 force-pushed the update/quantizer/v2 branch from 368a72b to 1dc0047 Compare January 20, 2025 08:39
@jijoongmoon jijoongmoon merged commit b3a1f77 into nnstreamer:main Jan 21, 2025
17 checks passed
@djeong20 djeong20 deleted the update/quantizer/v2 branch January 22, 2025 02:32