LocalAI version:
docker image: localai/localai:latest-gpu-nvidia-cuda-12
Environment, CPU architecture, OS, and Version:
Linux gcgpu3 5.10.0-33-cloud-amd64 #1 SMP Debian 5.10.226-1 (2024-10-03) x86_64 GNU/Linux
Describe the bug
I would like to load a SentenceTransformers model from a local file/directory, so I can use LocalAI on a machine without internet access. To do this I downloaded a Hugging Face model to a flash drive and then transferred the files to the machine. Currently this does not work: when I set the model parameter to the location of the local model (which is under /build/models), it fails with a "path not found" error. When I use an absolute path instead (e.g. /build/models/path-to-model), it also fails, while parsing the config file.
To Reproduce
Download a model's directory from Hugging Face; for my testing I used https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1. I then stored it in a models/mxbai-embed-large-v1 directory.
docker-compose.yml file:
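A minimal sketch of such a compose file, assuming the image tag above and a host ./models directory mounted at /build/models (the service name, port, and volume mapping are assumptions, not the reporter's exact file):

```yaml
# Sketch only: service name, port, and volume path are assumptions.
services:
  localai:
    image: localai/localai:latest-gpu-nvidia-cuda-12
    ports:
      - "8080:8080"
    volumes:
      - ./models:/build/models   # host models dir mounted into the container
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```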
Set up this config file (config.yaml):
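A minimal sketch of a LocalAI sentencetransformers embedding config of this shape, assuming the model directory from the step above (the name and model values are assumptions):

```yaml
# Sketch only: field values assumed from the model directory above.
name: mxbai-embed-large-v1
backend: sentencetransformers
embeddings: true
parameters:
  model: mxbai-embed-large-v1   # path relative to the models directory
```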
Then run docker compose up and call the embedding endpoint (a sample request is sketched below).
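A sample request against LocalAI's OpenAI-compatible embeddings endpoint, assuming the port and model name from the sketches above:

```sh
# Sketch only: port and model name assumed to match the configs above.
curl http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "mxbai-embed-large-v1", "input": "test sentence"}'
```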
Expected behavior
The correct model path is used and the locally stored model loads.
Logs
When using a path relative to the models path:
localai-1 | 9:47PM ERR Server error error="failed to load model with internal loader: could not load model (no success): Unexpected err=ValueError('Path ./mxbai-embed-large-v1'), type(err)=<class 'ValueError'>" ip=172.19.0.1 latenc
When using an absolute path:
localai-1 | 9:43PM ERR config is not valid
Additional context