KeyError: 'llama_lora_int4' #233
Comments
Hey @adastra9257, you would need to use the GenericLoraKbitModel class instead.
Hello @tushar2407, thank you for your reply! I couldn't find any documentation regarding GenericLoraKbitModel. Could you share a working example?
Hey! Sure. Below is the working code.

# Make the necessary imports
from xturing.datasets.instruction_dataset import InstructionDataset
from xturing.models import GenericLoraKbitModel, LlamaLoraKbit
from pytorch_lightning.loggers import WandbLogger

# Initialize the WandB integration
wandb_logger = WandbLogger()

# Load your desired dataset
instruction_dataset = InstructionDataset("../llama/alpaca_data")

# Initialize the model
model = GenericLoraKbitModel('aleksickx/llama-7b-hf')
# OR
model = LlamaLoraKbit()

# Fine-tune the model on your desired dataset
model.finetune(dataset=instruction_dataset, logger=wandb_logger)

# Save the fine-tuned model
model.save('./finetuned_model')

Hope this helps!
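Once fine-tuning finishes, a quick way to sanity-check the result is to call generate on the same model object; this is a minimal sketch and the prompt is illustrative, not taken from the thread.

# Generate a completion with the fine-tuned model (assumed prompt)
output = model.generate(texts=["Why are large language models important?"])
print(output)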
I am learning to fine-tune LLaMA in INT4 with xTuring. I am using the LLaMA_lora_int4.ipynb file in the example folder. I encountered the following error during runtime:
KeyError: 'llama_lora_int4'
I have no idea why this error occurred. Can anyone help me? Thank you!
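For context, this KeyError is typically raised by the model-registry lookup when the installed xTuring version does not register the requested key. A minimal sketch of the kind of call that triggers it, assuming the notebook creates the model from a registry string the way the other example notebooks do:

from xturing.models import BaseModel

# Registry lookup raises KeyError: 'llama_lora_int4' if the installed
# version of xTuring does not define this key
model = BaseModel.create('llama_lora_int4')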
OS:
Ubuntu 22.04
This code was executed in JupyterLab:
This is the error log:
These are the dependencies installed in the environment, listed by pip freeze: