Thank you very much for your excellent work! I have a question regarding the article titled "Frequency-domain MLPs are More Effective Learners in Time Series Forecasting." Could you please explain the difference between the EinFFT and the FreMLP operations mentioned in the paper?
The only difference I could find is that SiMBA groups the sequence into chunks and then runs the FFT on those chunks. For example, a sequence of length 6 would be chunked into two sequences of length 3, forming a 2x3 grid, and then two orthogonal FFTs would be run over it. That is a 2D FFT, which would make sense for images and other multidimensional data, but SiMBA first destroys the spatial structure by flattening the signal, which is even more confusing. Reading the paper, I couldn't find a mathematical reason for chunking a 1D sequence into a 2D one. Some insight into that would be nice.
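To make the comparison concrete, here is a minimal numpy sketch of the two transforms as I understand them: a single 1D FFT over the full sequence (FreMLP-style) versus reshaping the same sequence into a 2x3 grid and applying a 2D FFT (the chunking I'm describing for EinFFT). This is only an illustration of the chunking question, not the authors' actual implementation.

```python
import numpy as np

# FreMLP-style: one 1D FFT over the whole length-6 sequence.
x = np.arange(6, dtype=np.float64)
freq_1d = np.fft.fft(x)           # shape (6,)

# EinFFT-style chunking (as I understand it): reshape the
# length-6 sequence into two chunks of length 3, i.e. a 2x3
# grid, then run two orthogonal FFTs (a 2D FFT over the grid).
grid = x.reshape(2, 3)
freq_2d = np.fft.fft2(grid)       # shape (2, 3)

# The two are not the same transform: the 2D FFT mixes
# within-chunk and across-chunk frequencies differently from
# the single 1D FFT over the flattened signal.
print(np.allclose(freq_1d, freq_2d.reshape(-1)))
```

Running this prints `False`, which is the crux of my question: the chunked 2D FFT is not just a reshaped 1D FFT, so the choice to chunk presumably changes what frequencies the model sees.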