Issues: ggml-org/llama.cpp

examples : add configuration presets
#10932 opened Dec 21, 2024 by ggerganov
changelog : libllama API
#9289 opened Sep 3, 2024 by ggerganov
changelog : llama-server REST API
#9291 opened Sep 3, 2024 by ggerganov

Issues list

bamba
#11955 opened Feb 19, 2025 by werruww
Misc. bug: hipGraph causes a crash in hipGraphDestroy [AMD GPU]
#11949 opened Feb 18, 2025 by IMbackK
Add option to build CUDA backend without Flash attention [enhancement]
#11946 opened Feb 18, 2025 by slaren
Feature Request: When running inference with minicpmv, encoding_image_with_clip takes a very long time [enhancement]
#11941 opened Feb 18, 2025 by EnzhiZhou
4 tasks done
Enhancement: Improve ROCm performance on various quants (benchmarks included) [enhancement]
#11931 opened Feb 17, 2025 by cb88
4 tasks done
Compile bug: [bug-unconfirmed]
#11930 opened Feb 17, 2025 by sraouser
Feature Request: Use direct_io for model load and inference [enhancement]
#11912 opened Feb 16, 2025 by jagusztinl
4 tasks done
Feature Request: APIkey [enhancement]
#11874 opened Feb 14, 2025 by gsm1258
4 tasks done