
Be able to launch multiple evaluations in parallel #37

Open
fhoering opened this issue Jul 30, 2019 · 0 comments
@fhoering (Contributor)

Currently, evaluation in train_and_evaluate is done in a thread that always reads the latest model:
https://github.com/tensorflow/estimator/blob/master/tensorflow_estimator/python/estimator/training.py#L798

Using a distribution strategy for evaluation doesn't seem to work well.

We could split up the train_and_evaluate function: call distributed training with the cluster_spec, and do evaluation separately by calling the estimator.evaluate function:

evaluate(
    input_fn,
    steps=None,
    hooks=None,
    checkpoint_path=None,
    name=None
)

This would make it possible to spawn many evaluators with skein, give each one a different checkpoint path, and evaluate different checkpoints in parallel.
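A minimal sketch of the idea, assuming each skein-launched evaluator container knows its own index and the total evaluator count (how those values are passed in, e.g. via environment variables, is an assumption here). The round-robin helper is hypothetical; the estimator.evaluate keyword arguments match the signature quoted above.

```python
def checkpoints_for_evaluator(checkpoint_paths, evaluator_index, num_evaluators):
    """Round-robin split of checkpoints so no two evaluators read the same one."""
    return [p for i, p in enumerate(checkpoint_paths)
            if i % num_evaluators == evaluator_index]


def run_evaluator(estimator, input_fn, evaluator_index, num_evaluators,
                  checkpoint_paths):
    """Evaluate only the checkpoints assigned to this evaluator."""
    results = []
    for ckpt in checkpoints_for_evaluator(checkpoint_paths, evaluator_index,
                                          num_evaluators):
        # checkpoint_path pins the evaluation to one specific checkpoint
        # instead of the latest; name keeps the eval summaries separate.
        metrics = estimator.evaluate(input_fn=input_fn,
                                     checkpoint_path=ckpt,
                                     name="eval-%d" % evaluator_index)
        results.append((ckpt, metrics))
    return results
```

Each container would then call run_evaluator with its own index, so the set of checkpoints is partitioned across the parallel evaluators with no coordination needed beyond the shared checkpoint directory listing.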

@fhoering fhoering changed the title Be able to launch multiple Evaluation in parallel Be able to launch multiple evaluations in parallel Jul 30, 2019