
[Inference] Add models CodeLlama-7b and falcon-7b #12

Merged · 7 commits into intel:main on Jan 15, 2024

Conversation

@Deegue (Contributor) commented Dec 21, 2023

No description provided.

harborn pushed a commit to harborn/llm-on-ray that referenced this pull request Dec 25, 2023
* add testing scripts

* remove temp-dir for worker

* remove test files

* add redpajama dp code

* ignore all notebook files

* update streaming code

* add write-on-host for streaming

* better line alignment

* move files

* rename folder

* rename folder and add group_files

* debug

* add recovery test scripts

* add additional python packages

* add test flag

* add README and some minor fixes

* change the image name

* change the directory back

* add training stop for the second

* fix typo

* add data source support

* clean up a bit

* restructure folders

* restructure files

* add script headers

* reorder and add READMEs

* revert back due to file movements

* fix typo

* fix lib import

* enable mounting localdisk

* change name of cc

* fix dtype

* performance optimization for streaming

* use the latest ray

* change node

* add new files

* bug fix

* add nltk

* fix hdfs after re-order folders

* set default to false

* use variables instead of credentials

* change the training config path

* update README
@Deegue requested a review from jiafuzha on January 10, 2024 at 08:47
@Deegue (Contributor, Author) commented Jan 10, 2024

Gentle ping @jiafuzha for review, thanks~

@jiafuzha (Contributor) commented:

you need to add the two models in the include list so that they can be verified in PR. Otherwise, they will only be verified in nightly.

include:
  - { model: "gpt-j-6b" }
  - { model: "mistral-7b-v0.1" }
  - { model: "mpt-7b-bigdl" }
  - dtuner_model: nathan0/mpt-7b-deltatuner-model
    model: mpt-7b
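Applied to this PR, the extended include list would look roughly like the following. This is a sketch: the `CodeLlama-7b-hf` and `falcon-7b` entries are taken from the diff under review, while the file location and surrounding workflow keys are assumptions.

```yaml
# Sketch of the CI matrix include list with the two new models added,
# so they are verified on every PR rather than only in the nightly run.
# (Exact workflow file and surrounding keys are assumptions.)
include:
  - { model: "gpt-j-6b" }
  - { model: "mistral-7b-v0.1" }
  - { model: "mpt-7b-bigdl" }
  - dtuner_model: nathan0/mpt-7b-deltatuner-model
    model: mpt-7b
  - { model: "CodeLlama-7b-hf" }   # added in this PR
  - { model: "falcon-7b" }         # added in this PR
```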

@Deegue (Contributor, Author) commented Jan 10, 2024

> you need to add the two models in the include list so that they can be verified in PR. Otherwise, they will only be verified in nightly.

I see, it passed in our previous repo. Let me trigger them here.

@Deegue (Contributor, Author) commented Jan 15, 2024

Gentle ping @jiafuzha for another review, thanks!

Comment on lines 48 to 49
- { model: "CodeLlama-7b-hf"}
- { model: "falcon-7b"}
Contributor


please remove them before merging it since we've verified them successfully.

Contributor Author


Ok, I will remove it as soon as it's ready to merge.

Contributor Author


Removed.

@jiafuzha (Contributor) left a comment:

LGTM

@jiafuzha merged commit f3a7a9d into intel:main on Jan 15, 2024
1 of 8 checks passed