
Jointly trained questions #1

Open
zhangyxrepo opened this issue Nov 12, 2024 · 3 comments
@zhangyxrepo

Hi @AndyJZhao, thank you for open-sourcing such great work. I would like to ask whether you have tested GraphAny's performance when it is trained on a more extensive regimen that includes graphs from diverse domains. I am asking because Fig. 1 is impressive, and it makes intuitive sense that the model's performance might improve if it sees a wider distribution of data during training.

Looking forward to your reply. Thank you.

@AndyJZhao
Collaborator

AndyJZhao commented Nov 20, 2024

Hi, thanks for your interest in our work.
We haven't tested our model on other datasets beyond the reported 31 graphs.

It is also possible to train on multiple graphs together with the current implementation. Yet, in our previous exploration, it did not improve the performance.

@zhangyxrepo
Author

> Hi, thanks for your interest in our work. We haven't tested our model on other datasets beyond the reported 31 graphs.
>
> It is also possible to train on multiple graphs together with the current implementation. Yet, in our previous exploration, it did not improve the performance.

Hi @AndyJZhao, thank you for your prompt and clear reply; it answered my question.

> It is also possible to train on multiple graphs together with the current implementation. Yet, in our previous exploration, it did not improve the performance.

Could you provide the corresponding command that you used to perform joint training on multiple graphs with the current implementation? Thanks again.

@AndyJZhao
Collaborator

The general logic is to modify configs/data.yaml and create a new training setting, e.g.

  train_on_cora_and_citeseer:
    train: [Cora, Citeseer]
    eval: ${_all_datasets}

then set dataset=train_on_cora_and_citeseer in the command.
I abandoned the pipeline for joint training on multiple datasets a long time ago. The code might not work and likely contains bugs.
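Putting the two steps together, a minimal sketch might look like the following. Note that the entry-point script name (`run.py`) is an assumption here, not confirmed by this thread; check the repository's README for the actual training command.

```shell
# Step 1: add the new setting to configs/data.yaml (see the snippet above):
#
#   train_on_cora_and_citeseer:
#     train: [Cora, Citeseer]
#     eval: ${_all_datasets}
#
# Step 2: launch training, selecting the new setting via the dataset override.
# The script name below is a placeholder; use the repo's real entry point.
python run.py dataset=train_on_cora_and_citeseer
```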
