Cannot use a local model for sentencetransformers #4529

Open
cesar-cortez5 opened this issue Jan 2, 2025 · 0 comments
Labels: bug, unconfirmed

Comments


cesar-cortez5 commented Jan 2, 2025

LocalAI version:
docker image version: localai/localai:latest-gpu-nvidia-cuda-12

Environment, CPU architecture, OS, and Version:
Linux gcgpu3 5.10.0-33-cloud-amd64 #1 SMP Debian 5.10.226-1 (2024-10-03) x86_64 GNU/Linux

Describe the bug
I would like to load a sentencetransformers model from a local file/directory so that I can use LocalAI on a machine without internet access. To do this, I downloaded a Hugging Face model to a flash drive and then transferred the files to the machine. Currently this does not work: when I set the model parameter to the directory where the local model is located (under /build/models), it fails with a "path not found" error.

Also, when using an absolute path (e.g. /build/models/path-to-model), it instead fails while parsing the config file.
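That is, pointing the model parameter at the absolute container path (which corresponds to the /build/models volume mount in the compose file below), for example:

parameters:
    model: /build/models/mxbai-embed-large-v1

is rejected during config parsing.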

To Reproduce

Download a model's directory from Hugging Face; for my testing I used https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1. I then stored it in a models/mxbai-embed-large-v1 directory, as shown below.
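For example, with the Hugging Face CLI (any download method that produces the same directory layout should work; this assumes huggingface_hub is installed):

huggingface-cli download mixedbread-ai/mxbai-embed-large-v1 --local-dir models/mxbai-embed-large-v1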

docker-compose.yml file:

services:
  localai:
    image: localai/localai:latest-gpu-nvidia-cuda-12
    ports:
      - 8080:8080
    environment:
      - DEBUG=true
    volumes:
      - ./models:/build/models

Set up this config file (config.yaml):

name: text-embedding-ada-002
backend: sentencetransformers
parameters:
    model: ./mxbai-embed-large-v1
embeddings: true

Then run docker compose up and call the embeddings endpoint.
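For example, a minimal request against LocalAI's OpenAI-compatible embeddings endpoint (the exact input text does not matter):

curl http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "text-embedding-ada-002", "input": "test sentence"}'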

Expected behavior
The local model path should be resolved correctly and the model loaded.

Logs
When using a path relative to the models directory:

localai-1  | 9:47PM ERR Server error error="failed to load model with internal loader: could not load model (no success): Unexpected err=ValueError('Path ./mxbai-embed-large-v1'), type(err)=<class 'ValueError'>" ip=172.19.0.1 latenc

When using absolute path:

localai-1  | 9:43PM ERR config is not valid

Additional context

cesar-cortez5 added the bug and unconfirmed labels on Jan 2, 2025