Update ollama v0.5.1 document (#12699)
* Update ollama document version and known issue
sgwhat authored Jan 10, 2025
1 parent db9db51 commit e2d58f7
Showing 2 changed files with 16 additions and 4 deletions.
10 changes: 8 additions & 2 deletions docs/mddocs/Quickstart/ollama_quickstart.md
@@ -9,9 +9,9 @@
> For installation on Intel Arc B-Series GPU (such as **B580**), please refer to this [guide](./bmg_quickstart.md).
> [!NOTE]
-> Our current version is consistent with [v0.4.6](https://github.com/ollama/ollama/releases/tag/v0.4.6) of ollama.
+> Our current version is consistent with [v0.5.1](https://github.com/ollama/ollama/releases/tag/v0.5.1) of ollama.
>
-> `ipex-llm[cpp]==2.2.0b20241204` is consistent with [v0.3.6](https://github.com/ollama/ollama/releases/tag/v0.3.6) of ollama.
+> `ipex-llm[cpp]==2.2.0b20250105` is consistent with [v0.4.6](https://github.com/ollama/ollama/releases/tag/v0.4.6) of ollama.
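To stay on one of these pinned builds, you can install the matching `ipex-llm[cpp]` version explicitly. A minimal sketch (an exact pre-release pin like this resolves in pip without extra flags; quoting keeps the shell from expanding the brackets):

```bash
# Install the ipex-llm[cpp] build that tracks ollama v0.4.6
pip install "ipex-llm[cpp]==2.2.0b20250105"
```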
See the demo of running LLaMA2-7B on Intel Arc GPU below.

@@ -237,3 +237,9 @@ When executing `ollama serve` and `ollama run <model_name>`, if you meet `./olla…`

1. whether you have installed conda and are in the correct conda environment, with the oneAPI dependencies installed via pip, on Windows
2. whether you have executed `source /opt/intel/oneapi/setvars.sh` before running both `./ollama serve` and `./ollama run <model_name>` on Linux (see the sketch below)
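A minimal sketch of the Linux sequence from item 2, assuming the default oneAPI installation path used in this guide:

```bash
# Load the oneAPI runtime into the current shell session; this must be
# repeated in every new terminal, including the one used for
# `./ollama run <model_name>`
source /opt/intel/oneapi/setvars.sh
./ollama serve
```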

#### 10. `ollama serve` has no output or response
If you start `ollama serve` and then execute `ollama run <model_name>` but `ollama serve` gives no response, this may be because multiple ollama processes are running on your device. Try the following (see the sketch after this list):

1. On Linux, you may run `systemctl stop ollama` to stop all ollama processes, and then rerun `ollama serve` in your current directory.
2. On Windows, you may run `set OLLAMA_HOST=0.0.0.0` to ensure that ollama commands target the current `ollama serve`.
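A hedged sketch of the Linux steps; `pgrep` is an assumption for spotting duplicate processes and is not a command from this guide:

```bash
# List any ollama processes already running
pgrep -a ollama

# Stop the system-managed instance, then restart your own copy
systemctl stop ollama
./ollama serve
```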
10 changes: 8 additions & 2 deletions docs/mddocs/Quickstart/ollama_quickstart.zh-CN.md
@@ -9,9 +9,9 @@
> For installation on Intel Arc B-Series GPU (such as **B580**), please refer to this [guide](./bmg_quickstart.md).
> [!NOTE]
-> The latest version of `ipex-llm[cpp]` is consistent with [v0.4.6](https://github.com/ollama/ollama/releases/tag/v0.4.6) of the official ollama.
+> The latest version of `ipex-llm[cpp]` is consistent with [v0.5.1](https://github.com/ollama/ollama/releases/tag/v0.5.1) of the official ollama.
>
-> `ipex-llm[cpp]==2.2.0b20241204` is consistent with [v0.3.6](https://github.com/ollama/ollama/releases/tag/v0.3.6) of the official ollama.
+> `ipex-llm[cpp]==2.2.0b20250105` is consistent with [v0.4.6](https://github.com/ollama/ollama/releases/tag/v0.4.6) of the official ollama.
See the demo of running LLaMA2-7B on Intel Arc GPU below.

@@ -232,3 +232,9 @@ By default, Ollama unloads the model from GPU memory every 5 minutes. For the latest ollama…

1. Windows: whether you have installed conda and activated the correct conda environment, and whether the oneAPI dependencies have been installed in that environment via pip
2. Linux: whether you have executed `source /opt/intel/oneapi/setvars.sh` before running both `./ollama serve` and `./ollama run <model_name>`. This source command only takes effect in the current session.

#### 10. `ollama serve` has no output or response
If you start `ollama serve` and then run `ollama run <model_name>` but `ollama serve` does not respond, this may be because multiple ollama processes are running on your device. Try the following (a Windows sketch follows this list):

1. On Linux, you can run `systemctl stop ollama` to stop all ollama processes, and then rerun `ollama serve` in the current directory.
2. On Windows, you can run `set OLLAMA_HOST=0.0.0.0` to ensure that ollama commands run against the current `ollama serve`.
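A hedged sketch of the Windows side of item 2; `tasklist` and `findstr` are assumptions for checking duplicate processes, not commands from this guide:

```cmd
REM Check whether more than one ollama process is already running
tasklist | findstr /i ollama

REM Point subsequent ollama commands at the current serve instance
set OLLAMA_HOST=0.0.0.0
ollama serve
```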
