I just finished reading your paper, and I noticed that it is an on-policy method.
I'm wondering whether anyone has tested it with an RL method that uses a replay buffer.
As far as I know, for off-policy methods with a recurrent structure (LSTM, GRU, attention/Transformer, ...), if the hidden state is stored together with a sample (s, a, r, s'), that hidden state becomes stale after many training updates, because it was produced by old network weights. Is this issue addressed by the adaptive transformer?
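To illustrate what I mean, here is a minimal PyTorch-style sketch (all names are illustrative, not from your code): the stored hidden state drifts away from what the current network would produce, and one common mitigation is an R2D2-style burn-in that recomputes it from a short observation prefix.

```python
import torch
import torch.nn as nn

class GRUPolicy(nn.Module):
    """Toy recurrent policy used only to illustrate the staleness issue."""
    def __init__(self, obs_dim, hidden_dim, num_actions):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_actions)

    def forward(self, obs_seq, h0):
        out, hn = self.gru(obs_seq, h0)
        return self.head(out), hn

def recompute_hidden(policy, burn_in_obs, stored_h0):
    # The hidden state saved at collection time came from old weights.
    # "Burn-in" (as in R2D2) replays a short prefix of observations through
    # the *current* network to refresh the hidden state before computing
    # the training loss on the rest of the sequence.
    with torch.no_grad():
        _, h = policy.gru(burn_in_obs, stored_h0)
    return h
```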
The algorithm we use is IMPALA, which uses V-trace targets; this is an instance of off-policy learning.
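For reference, a rough sketch of the V-trace target computation, following the formula in the IMPALA paper (Espeholt et al., 2018). Variable names are illustrative and per-step discount/termination masking is omitted:

```python
import numpy as np

def vtrace_targets(rewards, values, bootstrap_value, log_rhos,
                   gamma=0.99, rho_bar=1.0, c_bar=1.0):
    """Compute V-trace value targets for a trajectory of length T.

    rewards, log_rhos : shape [T]
    values            : V(x_t) under the current value function, shape [T]
    bootstrap_value   : V(x_T) for the state after the last transition
    log_rhos          : log(pi(a_t|x_t) / mu(a_t|x_t)) importance ratios
    """
    rhos = np.minimum(rho_bar, np.exp(log_rhos))   # clipped IS weights
    cs = np.minimum(c_bar, np.exp(log_rhos))       # clipped trace weights
    values_tp1 = np.append(values[1:], bootstrap_value)
    deltas = rhos * (rewards + gamma * values_tp1 - values)

    # Backward recursion:
    #   vs_t - V(x_t) = delta_t + gamma * c_t * (vs_{t+1} - V(x_{t+1}))
    vs_minus_v = np.zeros_like(values)
    acc = 0.0
    for t in reversed(range(len(rewards))):
        acc = deltas[t] + gamma * cs[t] * acc
        vs_minus_v[t] = acc
    return values + vs_minus_v
```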
As far as the adaptive transformer is concerned, it just makes sure that the attention context length is not fixed but rather learned over the course of training. The Transformer-XL used in our experiments caches the hidden states for previous (state, action) pairs, which plays a role similar to the replay buffer you're pointing out.
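If it helps, the core idea behind the learned context length is the soft masking function from "Adaptive Attention Span in Transformers" (Sukhbaatar et al., 2019). This is a generic sketch of that mechanism with illustrative names, not the exact code from our experiments:

```python
import torch
import torch.nn as nn

class AdaptiveSpanMask(torch.nn.Module):
    """Each head learns a span parameter z in [0, 1]; attention weights to
    positions beyond the learned span are smoothly pushed to zero, so the
    effective context length is learned rather than fixed."""

    def __init__(self, max_span, ramp=32, init_ratio=0.5):
        super().__init__()
        self.max_span = max_span
        self.ramp = ramp                              # width of the soft ramp R
        self.z = nn.Parameter(torch.tensor(init_ratio))

    def forward(self, attn_weights):
        # attn_weights: [..., span], keys ordered oldest -> newest.
        span = attn_weights.size(-1)
        # distance of each key position from the current query (newest = 0)
        dist = torch.arange(span - 1, -1, -1,
                            device=attn_weights.device,
                            dtype=attn_weights.dtype)
        # m(x) = clamp((z * max_span - x) / R + 1, 0, 1)
        mask = torch.clamp((self.z * self.max_span - dist) / self.ramp + 1.0,
                           0.0, 1.0)
        masked = attn_weights * mask
        return masked / (masked.sum(dim=-1, keepdim=True) + 1e-8)
```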