
zsh: killed python export.py #120

Open

aPaleBlueDot opened this issue Sep 4, 2024 · 3 comments

Comments

@aPaleBlueDot

(coreml) hg@007 Mistral7B % python export.py
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:48<00:00, 16.07s/it]
Converting PyTorch Frontend ==> MIL Ops: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 5575/5575 [00:02<00:00, 1958.56 ops/s]
Running MIL frontend_pytorch pipeline: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 6.52 passes/s]
Running MIL default pipeline: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 79/79 [11:06<00:00, 8.44s/ passes]
Running MIL backend_mlprogram pipeline: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 12/12 [00:00<00:00, 17.32 passes/s]
zsh: killed python export.py

@antmikinka

Possible hardware limitation? I've run into a lot of issues with zsh killing processes due to limited RAM or storage while converting models. Could you provide more details about your setup to help troubleshoot?
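
One way to confirm it's memory pressure: log the process's resident memory in a background thread while the conversion runs, and see how close it gets to your machine's limit before the kill. A minimal sketch using psutil (assumes `psutil` is installed; the 5-second interval is arbitrary):

import threading
import time

import psutil  # pip install psutil

def log_memory(interval_s: float = 5.0) -> None:
    """Print this process's resident set size periodically."""
    proc = psutil.Process()
    while True:
        rss_gb = proc.memory_info().rss / 1e9
        print(f"[mem] RSS = {rss_gb:.2f} GB")
        time.sleep(interval_s)

# Start before kicking off the conversion; daemon=True so the thread
# dies together with the main process.
threading.Thread(target=log_memory, daemon=True).start()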

@aPaleBlueDot
Author

aPaleBlueDot commented Sep 10, 2024

Indeed, that's my guess as well: two runs ended at different points, and the longer-lasting one had more disk space available. I also noticed another GitHub issue on the same topic where the run died much sooner. I'm on an M1 with 16 GB RAM, and I made 100 GB of disk space available on my third try.
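
If peak memory during conversion is the bottleneck, one option worth trying is to skip loading the compiled model back into RAM after conversion. A sketch, assuming export.py calls coremltools' ct.convert directly (`traced_model` and `inputs` below are stand-ins for whatever the script actually passes):

import coremltools as ct

# Hypothetical call site inside export.py; only the two keyword
# arguments at the bottom are the suggested change.
mlmodel = ct.convert(
    traced_model,
    inputs=inputs,
    convert_to="mlprogram",
    compute_precision=ct.precision.FLOAT16,  # halves weight memory vs FLOAT32
    skip_model_load=True,  # don't load the compiled model into RAM after conversion
)
mlmodel.save("model.mlpackage")

With skip_model_load=True the returned model can still be saved, it just can't be used for prediction in the same process, which is fine for an export script.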

@aPaleBlueDot
Author

Are PyTorch checkpoints required, or do safetensors-only HF repos also work for the following line?

torch_model = StatefulMistralForCausalLM(MODEL_ID, max_context_size=max_context_size)
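
For what it's worth, transformers' from_pretrained loads .safetensors weights by default when they are present in the repo, so if StatefulMistralForCausalLM wraps that loader internally (an assumption; I haven't checked the class), a safetensors-only repo should work. A quick standalone check:

from transformers import AutoModelForCausalLM

# Example repo; substitute your MODEL_ID. Forcing use_safetensors=True
# makes the check explicit: it fails loudly if the repo only ships
# PyTorch .bin checkpoints.
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",
    use_safetensors=True,
)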
