
Raise ARIMA parameter limits from 4 to 8 #4022

Merged
merged 2 commits into from
Jul 11, 2021

Conversation

Nyrio
Contributor

@Nyrio Nyrio commented Jul 1, 2021

Resolves #3915

This PR still doesn't allow arbitrary parameter counts, because the Jones transform kernel takes the number of parameters as a template argument so that it can work in registers.
But raising the limit from 4 to 8 should be enough. In most cases the parameters don't exceed 3 or 4, and more complex models rarely give better results.
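The constraint described above can be illustrated with a CPU-side sketch (hypothetical names, not the actual cuML code) of a Jones-style transform whose parameter count is a compile-time template argument; on the GPU, the fixed-size local arrays this enables can be kept in registers, which is why a compile-time bound is needed at all:

```cpp
#include <array>
#include <cmath>

// Sketch of a Jones (1980)-style transform: unconstrained parameters are
// mapped through tanh to partial autocorrelations, then expanded into
// stationary AR coefficients by a Levinson-Durbin-like recursion.
// N, the number of parameters, is a template argument, so the working
// arrays have a compile-time size (register-friendly on a GPU).
template <int N>
void jones_transform(const double* unconstrained, double* ar) {
    std::array<double, N> a{}, prev{};
    for (int k = 0; k < N; ++k) {
        double r = std::tanh(unconstrained[k]);  // partial autocorrelation in (-1, 1)
        a[k] = r;
        for (int j = 0; j < k; ++j)
            a[j] = prev[j] - r * prev[k - 1 - j];
        prev = a;
    }
    for (int i = 0; i < N; ++i) ar[i] = a[i];
}
```

Because N is a template argument, each supported parameter count is a separate instantiation, which is why some fixed upper limit has to be picked.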

I also added tests with `p=5`.

@Nyrio Nyrio requested review from a team as code owners July 1, 2021 11:36
@github-actions github-actions bot added CUDA/C++ Cython / Python Cython or Python issue labels Jul 1, 2021
@Nyrio Nyrio added the 3 - Ready for Review (Ready for review by team), CUDA / C++ (CUDA issue), improvement (Improvement / enhancement to an existing function), and non-breaking (Non-breaking change) labels and removed the CUDA/C++ label Jul 1, 2021
@tfeher tfeher self-assigned this Jul 9, 2021
Contributor

@tfeher tfeher left a comment


Thanks Louis for this PR, it looks good to me.

Do I understand correctly that one could, in theory, use the jones_transform_kernel with a higher VALUE template parameter? So the value of 8 is not limited by the kernel's resource usage, but was chosen because higher values are not useful in practice?
Could you update the PR description to clarify this?

@Nyrio
Contributor Author

Nyrio commented Jul 9, 2021

Do I understand correctly, that one could (in theory) use the jones_transform_kernel with higher VALUE template parameter? So setting the value to 8 is not limited by the kernel resource usage, instead it is chosen so because higher values are not useful in practice?

It's a bit of both. With the current implementation using a template parameter, we have to choose an arbitrary limit, and 8 should be more than enough (in practice one rarely needs more than 3 or 4).
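The trade-off described in this exchange comes from the runtime-to-compile-time dispatch that a templated kernel forces. A hypothetical sketch (illustrative names, not the actual cuML code) of why some cap must exist, with 8 as the chosen limit:

```cpp
#include <stdexcept>

// The runtime parameter count p must be mapped to one of a finite set of
// template instantiations, so an arbitrary upper limit (8 here) has to be
// chosen somewhere. A real implementation would launch the kernel
// instantiated for N; we return N so the dispatch is observable.
template <int N>
int launch_jones_transform() {
    return N;
}

int dispatch_jones_transform(int p) {
    switch (p) {
        case 1: return launch_jones_transform<1>();
        case 2: return launch_jones_transform<2>();
        case 3: return launch_jones_transform<3>();
        case 4: return launch_jones_transform<4>();
        case 5: return launch_jones_transform<5>();
        case 6: return launch_jones_transform<6>();
        case 7: return launch_jones_transform<7>();
        case 8: return launch_jones_transform<8>();
        default: throw std::invalid_argument("parameter count must be between 1 and 8");
    }
}
```

Raising the limit only means adding more cases (and instantiations), which grows compile time and binary size but does not change kernel resource usage for the small counts actually used.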

@dantegd
Member

dantegd commented Jul 11, 2021

I think the HDBSCAN error found in CI is fixed now, so I'm rerunning the tests.

@dantegd
Member

dantegd commented Jul 11, 2021

rerun tests

@dantegd
Member

dantegd commented Jul 11, 2021

@gpucibot merge

@rapids-bot rapids-bot bot merged commit a09aa4c into rapidsai:branch-21.08 Jul 11, 2021
@codecov-commenter

Codecov Report

❗ No coverage uploaded for pull request base (branch-21.08@bcc4cad).
The diff coverage is n/a.


@@               Coverage Diff               @@
##             branch-21.08    #4022   +/-   ##
===============================================
  Coverage                ?   85.59%           
===============================================
  Files                   ?      230           
  Lines                   ?    18349           
  Branches                ?        0           
===============================================
  Hits                    ?    15705           
  Misses                  ?     2644           
  Partials                ?        0           
Flag Coverage Δ
dask 48.14% <0.00%> (?)
non-dask 77.97% <0.00%> (?)

Flags with carried forward coverage won't be shown.


Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update bcc4cad...c89e677.

vimarsh6739 pushed a commit to vimarsh6739/cuml that referenced this pull request Oct 9, 2023
Resolves rapidsai#3915 

Authors:
  - Louis Sugy (https://github.com/Nyrio)

Approvers:
  - Tamas Bela Feher (https://github.com/tfeher)
  - Dante Gama Dessavre (https://github.com/dantegd)

URL: rapidsai#4022
Successfully merging this pull request may close these issues.

ARIMA parameter tuning
4 participants