
How to test the model on several time series (for example 6000) concurrently? #11

Open
Lanxin1011 opened this issue Nov 13, 2024 · 2 comments


@Lanxin1011

Dear Authors,
Thanks for the great work! I'm currently working through the code and the paper, trying to fine-tune a model based on TEMPO with custom datasets, but I'm stuck on two problems. First, I can't predict on several time series at a time. Is there a way to predict several series concurrently, the way NeuralForecast does, where each series is distinguished by a 'unique_id'? (At the moment I can only set target_data to the series of a single 'cate_id' at a time, which is somewhat inefficient.) Second, I'd like to predict the 'y' of the next H steps, but the 'pred' flag does not seem to be able to predict without ground truth. Could you help me out with these two issues?

Really looking forward to your reply! Thanks a lllottt!

Best regards
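For context, the NeuralForecast-style workflow mentioned above keeps all series in one long-format frame keyed by 'unique_id', which is what makes concurrent prediction natural. A minimal sketch of that layout and of stacking it into the [n, l] shape discussed below (column names follow NeuralForecast's convention; the data here is synthetic):

```python
import numpy as np
import pandas as pd

# Long-format frame: one row per (series, timestamp), series keyed by 'unique_id'.
n_series, length = 3, 48
frames = []
for i in range(n_series):
    frames.append(pd.DataFrame({
        "unique_id": f"series_{i}",
        "ds": pd.date_range("2024-01-01", periods=length, freq="h"),
        "y": np.random.randn(length).cumsum(),
    }))
df = pd.concat(frames, ignore_index=True)

# Stacking all series at once gives shape [n, l], instead of looping over
# one 'cate_id' at a time.
stacked = df.pivot(index="ds", columns="unique_id", values="y").to_numpy().T
print(stacked.shape)  # (3, 48)
```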

@idevede
Contributor

idevede commented Nov 19, 2024

Hi Lanxin1011,

Thanks again for your interest! On the first problem: we are also working on integrating TEMPO into NeuralForecast. For now, we suggest working on the test data loader so that each sample has shape [b, l, n] (b is the batch size, l is the time series length, and n is the number of features, 6000 in your case) instead of [b, l, 1]. To do this, modify the __getitem__ function in data_provider.py:

if self.set_type == 2:
    index = self.use_index[orgindex]
    seq_x = self.main_data[index:index + self.seq_len - self.pred_len]
    seq_x = torch.tensor(seq_x, dtype=torch.float32)
    ...
else:
    index = orgindex // self.enc_in
    feat_id = orgindex % self.enc_in
    index = self.use_index[index]
    seq_x = self.main_data[index:index + self.seq_len - self.pred_len, feat_id:feat_id + 1]
    seq_x = torch.tensor(seq_x, dtype=torch.float32)
    ...

After that, you can modify the __len__ function:

def __len__(self):
    if self.set_type == 2:
        return len(self.use_index)
    return len(self.use_index) * self.enc_in
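The two modifications above can be sketched together as a minimal standalone Dataset (names like main_data, use_index, and enc_in follow the snippets above; the data here is synthetic, and this is only a sketch, not the actual TEMPO data provider):

```python
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class MultiSeriesDataset(Dataset):
    """Sketch: the test split (set_type == 2) returns all channels at once
    ([l, n]); other splits return one channel at a time ([l, 1])."""

    def __init__(self, main_data, seq_len, pred_len, set_type):
        self.main_data = main_data        # array of shape [T, n]
        self.seq_len = seq_len
        self.pred_len = pred_len
        self.set_type = set_type
        self.enc_in = main_data.shape[1]  # n, e.g. 6000 series
        # Valid window start positions.
        self.use_index = np.arange(len(main_data) - seq_len + 1)

    def __len__(self):
        if self.set_type == 2:
            return len(self.use_index)
        return len(self.use_index) * self.enc_in

    def __getitem__(self, orgindex):
        if self.set_type == 2:
            index = self.use_index[orgindex]
            seq_x = self.main_data[index:index + self.seq_len - self.pred_len]
        else:
            index = self.use_index[orgindex // self.enc_in]
            feat_id = orgindex % self.enc_in
            seq_x = self.main_data[index:index + self.seq_len - self.pred_len,
                                   feat_id:feat_id + 1]
        return torch.tensor(seq_x, dtype=torch.float32)

data = np.random.randn(100, 6).astype(np.float32)  # 6 series of length 100
test_set = MultiSeriesDataset(data, seq_len=24, pred_len=8, set_type=2)
batch = next(iter(DataLoader(test_set, batch_size=4)))
print(batch.shape)  # torch.Size([4, 16, 6]) -> [b, l, n]
```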

Then change the test function; you can find an example at https://github.com/DC-research/TEMPO/blob/main/utils/tools.py#L538, and concatenate the per-channel outputs together as in https://github.com/DC-research/TEMPO/blob/main/utils/tools.py#L563-L565
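Since the per-channel layout above interleaves channels fastest (feat_id = orgindex % self.enc_in), the flat per-channel predictions can be reassembled into a per-series panel with a single reshape. A sketch with synthetic data (the variable names are hypothetical, not from tools.py):

```python
import numpy as np

enc_in, pred_len, n_windows = 6, 8, 10

# Flat predictions in dataset order: orgindex runs over windows x channels,
# with feat_id = orgindex % enc_in, so channels vary fastest.
flat_preds = np.random.randn(n_windows * enc_in, pred_len)

# Reassemble into [windows, channels, horizon] so each row of a window
# holds one series' forecast.
panel = flat_preds.reshape(n_windows, enc_in, pred_len)
print(panel.shape)  # (10, 6, 8)
```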

Hope this helps with your work.

Best

@idevede
Contributor

idevede commented Nov 19, 2024

For the second problem, can you give me an example? I think TEMPO can make predictions without the ground truth. LOL

Best.
