Adding problem #5
base: master
Conversation
…itialization for self.D in RNN_Cell.py
…code for adding problem
Please clean up unused prints and commented lines.
# print("self.UppT: ", self.UppT)
# print("self.M: ", self.M)
# print("self.alphas[0]: ", self.alphas[0].size())
# print("self.thetas: ", self.thetas)
Please remove the commented-out prints.
@@ -84,6 +86,45 @@ def adding_problem_generator(N, seq_len=6, high=1, number_of_ones=2):
    X = np.append(X_num, X_mask, axis=2)
    return torch.FloatTensor(X), torch.FloatTensor(Y)


def generate_copying_sequence(T, labels, c_length):
What is this used for in the adding problem?
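For context, the hunk above appends the mask channel to the value channel, the standard construction of the adding-problem task (each sequence carries random values plus a 0/1 mask marking two positions; the target is the sum of the two marked values). A minimal numpy sketch of that construction, assuming the conventional task definition (the function name `adding_problem_batch` is mine, not the PR's; the PR's generator additionally wraps the arrays in `torch.FloatTensor`):

```python
import numpy as np

def adding_problem_batch(N, seq_len=6, high=1, number_of_ones=2, seed=0):
    """Sketch of an adding-problem batch: values in channel 0,
    a 0/1 mask in channel 1; target = sum of the masked values."""
    rng = np.random.RandomState(seed)
    X_num = rng.uniform(low=0, high=high, size=(N, seq_len, 1))
    X_mask = np.zeros((N, seq_len, 1))
    Y = np.zeros((N, 1))
    for i in range(N):
        # pick `number_of_ones` distinct positions to mark
        positions = rng.choice(seq_len, size=number_of_ones, replace=False)
        X_mask[i, positions] = 1
        Y[i, 0] = X_num[i, positions].sum()
    X = np.append(X_num, X_mask, axis=2)  # shape (N, seq_len, 2)
    return X, Y
```

A model solves the task when it learns to output, for each sequence, the dot product of the two channels.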
self.log_P.grad = (frechet_deriv - frechet_deriv.t()).triu(diagonal=1)
optimizer.step()
self.P.data = self._B(False)
self.P.grad.data.zero_()

# def get_theta_list(self):
#     return self.theta_list
Remove commented functions and test prints
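The snippet above keeps only the strict upper triangle of the gradient (`.triu(diagonal=1)`) before stepping, which matches the expRNN-style parametrization: the strict upper triangle defines a skew-symmetric matrix, and the matrix exponential of a skew-symmetric matrix is orthogonal. A small numpy/scipy sketch of that mapping (the name `orthogonal_from_triu` is mine for illustration; the PR reconstructs P via its own `self._B(False)`):

```python
import numpy as np
from scipy.linalg import expm

def orthogonal_from_triu(A):
    """Strict upper triangle -> skew-symmetric -> matrix exponential.
    exp(S) with S.T == -S is always an orthogonal matrix."""
    U = np.triu(A, k=1)   # strict upper triangle, cf. .triu(diagonal=1)
    S = U - U.T           # skew-symmetric: S.T == -S
    return expm(S)        # orthogonal: P.T @ P == I

rng = np.random.RandomState(0)
P = orthogonal_from_triu(rng.randn(5, 5))
```

Because only the strict upper triangle is free, the parametrization has exactly n(n-1)/2 degrees of freedom, the dimension of the orthogonal group's identity component.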
# print("y: ", y.size(), y[0], x[0])
# print("onehot", args.onehot)
# print("x: ", x.size(), "y: ", y.size())
# print("x batch size: ", x.shape[1])
Remove unnecessary prints
print(name, param.data)
# for name, param in net.named_parameters():
#     if param.requires_grad:
#         print(name, param.data)
Remove the commented-out lines.
@@ -159,7 +161,7 @@ def forward(self, input,hidden):
 # Load data
 ###############################################################################

-corpus = Corpus('./data/pennchar/')
+corpus = Corpus('//home/madhusudhan/ptb-data/')
Please remove references to your own file system.
@@ -305,9 +307,9 @@ def train(optimizer, orthog_optimizer):
    try:
        exp_time = "{0:%Y-%m-%d}_{0:%H-%M-%S}".format(datetime.now())
        SAVEDIR = os.path.join('./saves',
-                              'sMNIST',
+                              'language-task',
I think this is fixed in the newest master version. Merge master again.
scipy==1.3.1
torch==1.1.0
numpy==1.16.4
tensorboardX==1.9
Awesome!
Implemented the adding problem and compared the results of nnRNN, RNN, LSTM, and expRNN.