
Unable to finetune falcon-7b with int8 #223

Closed
msinha251 opened this issue Jun 22, 2023 · 3 comments

Comments
msinha251 commented Jun 22, 2023

Hi,

I am trying to finetune Falcon with the int8 engine only, but it ends with the error below. Any idea?

I am also unable to fine-tune the base Falcon model; it ends with a CUDA out-of-memory error.

Details:
Machine: g5.48xlarge EC2 instance (8 GPUs, 22 GB each)
xTuring: 0.1.5
Torch: 2.0.1+cu117

Code:

from xturing.datasets.instruction_dataset import InstructionDataset
from xturing.models import BaseModel
import os

instruction_dataset = InstructionDataset("alpaca_data")

# Initializes the model
model = BaseModel.create("falcon_int8")

# Finetune the model
model.finetune(dataset=instruction_dataset)

# Save the model
model.save("falcon_weights_int8")

Error:
RuntimeError: DistributedDataParallel is not needed when a module doesn't have any parameter that requires a gradient.

@StochasticRomanAgeev (Contributor)

Hi @msinha251,
Can you please provide the full error trace for this model?

@StochasticRomanAgeev (Contributor)

Hi again @msinha251,
A generic fix for this issue is to set CUDA_VISIBLE_DEVICES=0 so that only a single GPU is visible.
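
For reference, a minimal sketch of applying that suggestion, assuming the variable has to be set before torch/xturing are first imported; the finetune.py script name below is just a placeholder for the snippet posted above.

# Option 1: set the variable in the shell before launching the script
# (finetune.py is a placeholder name for the snippet above):
#   CUDA_VISIBLE_DEVICES=0 python finetune.py

# Option 2: set it at the very top of the script, before torch/xturing are imported,
# so only GPU 0 is visible and DistributedDataParallel is never engaged.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

from xturing.datasets.instruction_dataset import InstructionDataset
from xturing.models import BaseModel

instruction_dataset = InstructionDataset("alpaca_data")
model = BaseModel.create("falcon_int8")
model.finetune(dataset=instruction_dataset)
model.save("falcon_weights_int8")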

@msinha251 (Author)

Thanks for the response.
