I created the conda env `llm-cpp` and then ran:

```powershell
conda activate llm-cpp
init-ollama.bat
```
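For reference, the env itself was set up following the linked quickstart; a minimal sketch of that step (the pinned Python version is an assumption, check the guide for the current one):

```powershell
# Sketch of the env-creation step from the ipex-llm ollama quickstart.
# Python 3.11 is an assumption; use whatever version the guide currently pins.
conda create -n llm-cpp python=3.11
conda activate llm-cpp
pip install --pre --upgrade ipex-llm[cpp]
```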
Then I set the environment variables and started the server:

```powershell
$env:no_proxy = "localhost,127.0.0.1"                     # keep localhost traffic off any proxy
$env:ZES_ENABLE_SYSMAN = "1"                              # enable Level Zero sysman so GPU memory can be queried
$env:OLLAMA_NUM_GPU = "999"                               # offload all model layers to the GPU
$env:SYCL_CACHE_PERSISTENT = "1"                          # persist compiled SYCL kernels across runs
$env:SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS = "1"  # use Level Zero immediate command lists
.\ollama.exe serve
```

As you can see, this doesn't match the situation in the link.
May I ask why you think ollama isn't running on the GPU? Have you tested running a model with ollama? If you have more questions, please provide the complete log from the ollama server side.
Because the log in the picture above says OLLAMA_INTEL_GPU:FALSE.
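In case it helps, here is a minimal sketch for capturing the complete server-side log while exercising a model; the log file name and model tag are placeholders, and the exact GPU-related wording in the log varies by version:

```powershell
# Capture all server output to a file (file name is a placeholder).
.\ollama.exe serve *> ollama-server.log

# In a second terminal, run any pulled model to force layer offload
# (the model tag here is a placeholder):
.\ollama.exe run llama3.2 "hello"

# Then look for GPU/SYCL-related lines in the captured log
# (exact wording differs between versions):
Select-String -Path ollama-server.log -Pattern "SYCL", "level_zero", "Arc"
```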
My hardware and the guide I followed:

CPU: Intel Core Ultra 7 258V
GPU: integrated Arc 140V
Link: https://github.com/intel-analytics/ipex-llm/blob/main/docs/mddocs/Quickstart/ollama_quickstart.md
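One quick way to confirm the iGPU is visible to the SYCL runtime at all (this assumes the oneAPI environment is initialized so `sycl-ls` is on PATH; if it isn't available in your setup, skip this check):

```powershell
# List SYCL devices; the integrated Arc 140V should show up as a Level Zero GPU.
# Assumes sycl-ls from the oneAPI toolkit is on PATH.
sycl-ls
```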