FEAT: Add RMoK #1148
Conversation
Does our implementation reproduce the results of the paper?
Yes, tested on ETTm1 and ETTm2 with a horizon of 96. For ETTm1: [results attached]. For ETTm2: [results attached].
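For anyone wanting to repeat that check, a minimal sketch of such a run is below. It assumes RMoK is exposed as `neuralforecast.models.RMoK` with the usual multivariate arguments (`h`, `input_size`, `n_series`); the ETTm1 file path, lookback window, and training budget are illustrative assumptions, not the exact settings behind the numbers above.

```python
import pandas as pd
from neuralforecast import NeuralForecast
from neuralforecast.models import RMoK
from neuralforecast.losses.pytorch import MAE

# Long-format ETTm1 data with columns unique_id / ds / y (file path is an assumption).
df = pd.read_csv("ETTm1_long.csv", parse_dates=["ds"])

horizon = 96
model = RMoK(
    h=horizon,
    input_size=512,                        # assumed lookback window
    n_series=df["unique_id"].nunique(),    # RMoK forecasts all series jointly
    loss=MAE(),
    max_steps=1000,                        # assumed training budget
)

nf = NeuralForecast(models=[model], freq="15min")  # ETTm is sampled every 15 minutes
# Hold out the last `horizon` steps and compare forecasts against them.
cv_df = nf.cross_validation(df=df, n_windows=1)
mae = (cv_df["y"] - cv_df["RMoK"]).abs().mean()
print(f"MAE over the last {horizon} steps: {mae:.4f}")
```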
LGTM, minor typo (I think it's because of the pasted comments)
* Use math.ceil to prevent shape mismatch
* Show exog support for KAN in doc
* FEAT: TimeLLM is faster and supports more LLMs (#1139)
* Fix issue #950: Reduce TimeLLM setup time for training
* Restore changes on the examples
* Revert changes to nbs/models.ipynb, nbs/models.softs.ipynb and neuralforecast/_modidx.py
* Revert changes to nbs/models.ipynb, nbs/models.softs.ipynb and neuralforecast/_modidx.py
* Refactor code to dynamically load models with AutoModel, AutoTokenizer, and AutoConfig
  - Updated the load_model_and_tokenizer function to use AutoModel, AutoTokenizer, and AutoConfig for flexible model loading.
  - Included a default model (gpt2) for cases where the specified model fails to load.
  - Kept the llm, llm_config, and llm_tokenizer arguments to minimize changes.
  - Changed llm from storing pretrained weights to accepting a pretrained model path to reduce necessary modifications.
  This update enhances the flexibility and reliability of model loading based on received feedback while minimizing necessary changes.
* Refactor code to dynamically load models with AutoModel, AutoTokenizer, and AutoConfig
  - Updated the load_model_and_tokenizer function to use AutoModel, AutoTokenizer, and AutoConfig for flexible model loading.
  - Included a default model (gpt2) for cases where the specified model fails to load.
  - Kept the llm, llm_config, and llm_tokenizer arguments to minimize changes.
  - Changed llm from storing pretrained weights to accepting a pretrained model path to reduce necessary modifications.
  This update enhances the flexibility and reliability of model loading based on received feedback while minimizing necessary changes.
* clear output
* modify test code
* Optimize model loading and add deprecation warning
  - Simplify model loading logic
  - Add a constant for the default model name
  - Improve error handling for model loading
  - Add success messages for model loading
  - Implement a deprecation warning for the llm_config and llm_tokenizer parameters
  - Update print messages for clarity
  - Remove redundant code
  This commit improves code readability, maintainability, and user experience by providing clearer feedback and warnings about deprecated parameters.
* Resolved conflict in nbs/models.timellm.ipynb

---------

Co-authored-by: ive2go <[email protected]>
Co-authored-by: Olivier Sprangers <[email protected]>

* Consistency with math.ceil

---------

Co-authored-by: Olivier Sprangers <[email protected]>
Co-authored-by: ive2go <[email protected]>
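The loading strategy those commits describe is roughly the following. This is a simplified, illustrative sketch rather than the actual TimeLLM code: only the AutoModel/AutoTokenizer/AutoConfig calls and the gpt2 fallback mentioned above are kept, and the real `load_model_and_tokenizer` may differ in signature and error handling.

```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

DEFAULT_LLM = "gpt2"  # fallback checkpoint when the requested one cannot be loaded

def load_model_and_tokenizer(llm=None):
    """Load an LLM backbone from a pretrained path/name, falling back to gpt2.

    Simplified sketch of the approach described in the commit messages; not
    the actual TimeLLM implementation.
    """
    name = llm or DEFAULT_LLM
    try:
        config = AutoConfig.from_pretrained(name)
        model = AutoModel.from_pretrained(name, config=config)
        tokenizer = AutoTokenizer.from_pretrained(name)
        print(f"Loaded LLM backbone: {name}")
    except (OSError, ValueError):
        # Fall back to the default model if the requested checkpoint fails to load.
        print(f"Could not load '{name}'; falling back to {DEFAULT_LLM}")
        config = AutoConfig.from_pretrained(DEFAULT_LLM)
        model = AutoModel.from_pretrained(DEFAULT_LLM, config=config)
        tokenizer = AutoTokenizer.from_pretrained(DEFAULT_LLM)
    return model, tokenizer, config
```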
LGTM!
RMoK (Reversible Mixture of KAN) leverages different KAN layers as experts for time series forecasting.
From this article.
Adapted from the official repo.