BUG: Fix Codestral v0.1 URI for Pytorch Format (xorbitsai#2590)
danialcheung authored Nov 27, 2024
1 parent 760f31f commit 0d4cb9c
Showing 2 changed files with 4 additions and 4 deletions.
4 changes: 2 additions & 2 deletions doc/source/models/builtin/llm/codestral-v0.1.rst
@@ -21,8 +21,8 @@ Model Spec 1 (pytorch, 22 Billion)
 - **Model Size (in billions):** 22
 - **Quantizations:** 4-bit, 8-bit, none
 - **Engines**: vLLM, Transformers (vLLM only available for quantization none)
-- **Model ID:** mistralai/Mistral-7B-Instruct-v0.2
-- **Model Hubs**: `Hugging Face <https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2>`__
+- **Model ID:** mistralai/Codestral-22B-v0.1
+- **Model Hubs**: `Hugging Face <https://huggingface.co/mistralai/Codestral-22B-v0.1>`__
 
 Execute the following command to launch the model, remember to replace ``${quantization}`` with your
 chosen quantization method from the options listed above::
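The launch command that follows in the rST file is cut off in this view. Based on xinference's usual CLI conventions it likely has roughly this shape; this is a hypothetical sketch, not the exact command from the file (the flag names and model name are assumptions):

```shell
# Hypothetical sketch of launching Codestral v0.1 with xinference;
# replace ${quantization} with one of: 4-bit, 8-bit, none.
xinference launch --model-name codestral-v0.1 \
  --model-format pytorch \
  --size-in-billions 22 \
  --quantization ${quantization}
```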
4 changes: 2 additions & 2 deletions xinference/model/llm/llm_family.json
@@ -3411,8 +3411,8 @@
       "8-bit",
       "none"
     ],
-    "model_id": "mistralai/Mistral-7B-Instruct-v0.2",
-    "model_revision": "9552e7b1d9b2d5bbd87a5aa7221817285dbb6366"
+    "model_id": "mistralai/Codestral-22B-v0.1",
+    "model_revision": "8f5fe23af91885222a1563283c87416745a5e212"
     },
     {
       "model_format": "ggufv2",
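The corrected pytorch spec in ``llm_family.json`` can be sanity-checked with a short script. This is a sketch: the ``model_id``, ``model_revision``, and quantizations are taken from the hunk above, while the surrounding field names (``model_format``, ``model_size_in_billions``) are assumed to match the rest of the file, and the check itself is my own, not part of the repository:

```python
# Pytorch model spec for Codestral v0.1 as it reads after this commit.
# model_id / model_revision / quantizations come from the diff above;
# the other keys are assumed from the file's structure.
spec = {
    "model_format": "pytorch",
    "model_size_in_billions": 22,
    "quantizations": ["4-bit", "8-bit", "none"],
    "model_id": "mistralai/Codestral-22B-v0.1",
    "model_revision": "8f5fe23af91885222a1563283c87416745a5e212",
}

# The bug being fixed: the old entry pointed at the Mistral-7B-Instruct
# repo, so the derived Hugging Face URL fetched the wrong weights.
assert "Codestral-22B-v0.1" in spec["model_id"]
hub_url = f"https://huggingface.co/{spec['model_id']}"
print(hub_url)  # -> https://huggingface.co/mistralai/Codestral-22B-v0.1
```

The derived hub URL now matches the **Model Hubs** link in the rST doc above.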
