Issue with multihead replay fine-tuning MACE-OFF23 #803
lucasgarayy
started this conversation in
General
Replies: 1 comment
-
I think there might be a problem with the keys in the pt head. What happens when you use the |
-
Hello everyone,
I am attempting to fine-tune MACE-OFF23 on four different heads corresponding to different DFT levels of theory using multihead replay fine-tuning. However, the pt_head does not train properly: its initial loss stays at 0.00000, and its RMSE_E and RMSE_F values remain very large throughout training.
The pt_train_file is taken from the MACE-OFF23 training data.
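A zero initial loss on a replay head is often a symptom of the replay frames not carrying the property keys the head reads, so every target silently evaluates to nothing. As a sanity check, here is a minimal stdlib-only sketch that scans the comment line of each frame in an extended-XYZ file and reports which keys it carries. The key name `REF_energy` is only an illustrative assumption; check which key names your own pt_train_file and MACE configuration actually use.

```python
import re

# Hypothetical two-atom extended-XYZ sample; the key names (REF_energy,
# forces) are illustrative assumptions, not necessarily MACE's defaults.
SAMPLE = """\
2
REF_energy=-1.5 Properties=species:S:1:pos:R:3:forces:R:3
H 0.0 0.0 0.0 0.1 0.0 0.0
H 0.0 0.0 0.9 -0.1 0.0 0.0
"""

def frame_keys(xyz_text):
    """Yield the set of comment-line keys for each frame in an XYZ string."""
    lines = xyz_text.splitlines()
    i = 0
    while i < len(lines):
        natoms = int(lines[i])          # frame header: atom count
        comment = lines[i + 1]          # frame comment: key=value pairs
        # Match key=value pairs; values may be quoted (e.g. Lattice="...").
        keys = set(re.findall(r'(\w+)=(?:"[^"]*"|\S+)', comment))
        yield keys
        i += 2 + natoms                 # skip past the per-atom lines

expected = {"REF_energy"}               # assumed target key for the pt head
for n, keys in enumerate(frame_keys(SAMPLE)):
    missing = expected - keys
    print(f"frame {n}: keys={sorted(keys)} missing={sorted(missing)}")
```

If any frame reports a missing key, the replay head would see no reference value for that target, which could produce exactly the behaviour described here (flat loss, huge RMSEs).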
Log Output:
This run is configured for 2 epochs, since processing the entire 1.51 GB dataset is time-consuming; however, I have hit the same issue with smaller subsets of the data and a larger number of epochs.
Has anyone encountered a similar issue with multihead replay fine-tuning? What could be causing it? Any insights, suggestions, or fixes would be greatly appreciated!