FEAT: TimeXer #1267
Conversation
LGTM, maybe the refactoring in a separate PR?
We can also do it separately though, it's non-blocking imho.
Small detail re. mint.json.
Does it (somewhat) reproduce the paper results?
LGTM!
Here are the results for ETTm1: paper (avg over 4 horizons) vs. neuralforecast (avg over 4 horizons). I was training for 1000 steps with early stopping on all horizons, and they all reached 1000 steps, so training for longer might give closer scores.
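For context, a rough sketch of that training setup (capped at 1000 steps with early stopping on a validation set), assuming TimeXer exposes the usual neuralforecast constructor arguments; the horizon and input size are illustrative choices, not values taken from this PR:

```python
# Sketch of the "1000 steps + early stopping" configuration described above.
# Assumes TimeXer accepts the standard neuralforecast arguments (max_steps,
# early_stop_patience_steps, val_check_steps); hyperparameter values are illustrative.
from neuralforecast import NeuralForecast
from neuralforecast.models import TimeXer

model = TimeXer(
    h=96,                         # one of the four ETTm1 horizons (96/192/336/720)
    input_size=96,
    n_series=7,                   # ETTm1 has 7 series
    max_steps=1000,               # cap training at 1000 steps
    early_stop_patience_steps=5,  # stop once validation loss stops improving
    val_check_steps=50,           # how often validation loss is evaluated
)

nf = NeuralForecast(models=[model], freq="15min")
# nf.fit(df=Y_df, val_size=96)   # a validation split is required for early stopping
```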
Add TimeXer to neuralforecast. TimeXer is a transformer-based model that supports future exogenous features and appears to perform very well in long-horizon forecasting, according to the results reported in the paper linked below.
Original code implementation: https://github.com/thuml/TimeXer/tree/main
Paper: https://arxiv.org/abs/2402.19072
Note: the encoder and embedding are specific to TimeXer, which is why I am not reusing the existing common modules from other models.
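For reference, a minimal usage sketch with a future exogenous feature on a tiny synthetic series; the column name "temp", the hyperparameter values, and the exact constructor arguments are illustrative assumptions rather than part of this PR:

```python
# Sketch of forecasting with a future-known covariate, assuming TimeXer follows
# the standard neuralforecast futr_exog_list / futr_df pattern.
import numpy as np
import pandas as pd
from neuralforecast import NeuralForecast
from neuralforecast.models import TimeXer

# Tiny synthetic hourly series with one hypothetical future-known covariate "temp".
ds = pd.date_range("2024-01-01", periods=200, freq="h")
train_df = pd.DataFrame({
    "unique_id": "series_1",
    "ds": ds,
    "y": np.sin(np.arange(200) / 12),
    "temp": np.cos(np.arange(200) / 12),
})

model = TimeXer(
    h=24,
    input_size=48,
    n_series=1,
    futr_exog_list=["temp"],  # future exogenous feature fed to the model
    max_steps=50,             # kept tiny for a quick smoke test
)

nf = NeuralForecast(models=[model], freq="h")
nf.fit(df=train_df)

# Future values of the exogenous feature must be supplied for the forecast horizon.
futr_df = pd.DataFrame({
    "unique_id": "series_1",
    "ds": pd.date_range(ds[-1] + pd.Timedelta(hours=1), periods=24, freq="h"),
    "temp": np.cos(np.arange(200, 224) / 12),
})
preds = nf.predict(futr_df=futr_df)
```

The key point is that the columns named in futr_exog_list are assumed to be known over the forecast horizon, so their future values are passed to predict via futr_df.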