I noticed a strange way of invoking the TransformerLayers in the SAKT model, shown in the snippet below. Only the last layer's output is kept, and every block is applied to the raw seq_data input; you should either pool the outputs of all layers or feed the running output y back in on each call.
```python
y = seq_data
for block in self.attn_blocks:
    y = block(mask=1, query=q_data, key=seq_data, values=seq_data)
```
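For reference, a minimal sketch of the chaining this suggests, assuming the same block signature as the snippet above; whether y should replace only the key/values or the query as well is an implementation choice that this issue does not pin down:

```python
# Hedged sketch: stack the attention blocks so each layer consumes the
# previous layer's output instead of re-reading the raw seq_data.
y = seq_data
for block in self.attn_blocks:
    # Feed the running output y back in as key/values so the layers
    # actually compose; with a single block this reduces to the original code.
    y = block(mask=1, query=q_data, key=y, values=y)
```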
Thanks for pointing out this issue! It is indeed a mistake and has been fixed in the latest commit. That said, since the original paper uses only a single transformer layer, the previous implementation behaves identically in that setting, so the experimental results remain reliable.