
FIX: timemixer shapes mismatch and doc update #1138

Merged: 7 commits merged into main from hotfix/timemixer_seasonmixing on Sep 16, 2024

Conversation

marcopeix (Contributor)

Prevent shape mismatch in TimeMixer (a sketch of the idea follows below)
Update docs: KAN supports exogenous features, but the docs currently say it doesn't support them
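For context, a minimal sketch of the idea behind the fix, with hypothetical names (the actual change lives in neuralforecast/models/timemixer.py): per the commit messages, computed sequence lengths now use math.ceil, so they stay consistent with the tensors the downsampling layers actually produce when the input length is not an exact multiple of the downsampling window.

```python
import math

# Hypothetical illustration: with floor division, an input length that is
# not an exact multiple of the window yields a computed length one short
# of what the downsampling layers actually produce, causing a shape
# mismatch further down the network.
def downsampled_length(input_size: int, window: int, level: int) -> int:
    # math.ceil keeps the expected length aligned with the produced tensors.
    return math.ceil(input_size / window**level)

print(downsampled_length(96, 2, 1))  # 48 -- exact multiple, floor and ceil agree
print(downsampled_length(95, 2, 1))  # 48 with ceil; floor division would give 47
```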


@elephaint (Contributor) left a comment:

Nice - minor detail to make it consistent

Review thread on neuralforecast/models/timemixer.py (outdated; resolved)
elephaint and others added 4 commits September 12, 2024 20:08
* Fix issue #950: Reduce TimeLLM setup time for training

* Restore changes on the examples

* Revert changes to nbs/models.ipynb, nbs/models.softs.ipynb and neuralforecast/_modidx.py

* Refactor code to dynamically load models with AutoModel, AutoTokenizer, and AutoConfig

- Updated the load_model_and_tokenizer function to use AutoModel, AutoTokenizer, and AutoConfig for flexible model loading.
- Included a default model (gpt2) for cases where the specified model fails to load.
- Kept the llm, llm_config, and llm_tokenizer arguments to minimize changes.
- Changed llm from storing pretrained weights to accepting a pretrained model path to reduce necessary modifications.

This update enhances the flexibility and reliability of model loading based on received feedback while minimizing necessary changes. A sketch of this loading pattern follows below.
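A minimal sketch of the loading pattern this commit describes, assuming a standalone function with this signature (the real load_model_and_tokenizer in neuralforecast may differ):

```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

DEFAULT_MODEL = "gpt2"  # fallback model named in the commit message

def load_model_and_tokenizer(llm: str):
    """Load any Hugging Face model by path or hub id, falling back to gpt2.

    Per the commit message, `llm` now takes a pretrained model path
    instead of preloaded weights.
    """
    try:
        config = AutoConfig.from_pretrained(llm)
        tokenizer = AutoTokenizer.from_pretrained(llm)
        model = AutoModel.from_pretrained(llm, config=config)
        print(f"Successfully loaded '{llm}'.")
    except (OSError, ValueError):
        # Fall back to the default model when the requested one fails to load.
        print(f"Could not load '{llm}'; falling back to '{DEFAULT_MODEL}'.")
        config = AutoConfig.from_pretrained(DEFAULT_MODEL)
        tokenizer = AutoTokenizer.from_pretrained(DEFAULT_MODEL)
        model = AutoModel.from_pretrained(DEFAULT_MODEL, config=config)
    return model, tokenizer, config
```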

* clear output

* modify test code

* Optimize model loading and add deprecation warning

- Simplify model loading logic
- Add constant for default model name
- Improve error handling for model loading
- Add success messages for model loading
- Implement deprecation warning for 'llm_config' and 'llm_tokenizer' parameters (a sketch follows below)
- Update print messages for clarity
- Remove redundant code

This commit improves code readability, maintainability, and user experience
by providing clearer feedback and warnings about deprecated parameters.
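
A minimal sketch of how the deprecation warning for the kept-but-unused arguments might look (hypothetical helper name, not the actual TimeLLM code):

```python
import warnings

def _warn_deprecated_llm_args(llm_config=None, llm_tokenizer=None):
    # Hypothetical helper: per the commit message, these arguments are
    # retained for backward compatibility but no longer used by the loader.
    if llm_config is not None or llm_tokenizer is not None:
        warnings.warn(
            "'llm_config' and 'llm_tokenizer' are deprecated and ignored; "
            "pass a pretrained model path or hub id via 'llm' instead.",
            DeprecationWarning,
            stacklevel=2,
        )
```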

* Resolved conflict in nbs/models.timellm.ipynb

---------

Co-authored-by: ive2go <[email protected]>
Co-authored-by: Olivier Sprangers <[email protected]>
@elephaint (Contributor) left a comment:

Good work!

@marcopeix merged commit e55815c into main on Sep 16, 2024; 18 checks passed.
@marcopeix deleted the hotfix/timemixer_seasonmixing branch on September 16, 2024 at 17:40.
marcopeix added a commit that referenced this pull request Sep 16, 2024
* Use math.ceil to prevent shape mismatch

* Show exog support for KAN in doc

* FEAT: TimeLLM is faster and supports more LLMs (#1139)

* Consistency with math.ceil

---------

Co-authored-by: Olivier Sprangers <[email protected]>
Co-authored-by: ive2go <[email protected]>
marcopeix added a commit that referenced this pull request Sep 18, 2024
* WIP - Add reversible mixture of kan

* WIP - Allows import of RMoK

* AutoRMoK, add it to doc, add parameters

* Fix tests

* Get default config of AutoRMoK

* FIX: timemixer shapes mismatch and doc update (#1138)

* Add image, docstring, fix typo in comment

---------

Co-authored-by: Olivier Sprangers <[email protected]>
Co-authored-by: ive2go <[email protected]>