```
Traceback (most recent call last):
  File "test_cade.py", line 66, in <module>
    cae.train(train_dataX_np, y_train, cae_lambda_1, cae_batch_size, cae_epochs, cae_similar_ratio, cae_margin,
  File "/home/lhd/uncertainity_malware/OOD/cade/autoencoder.py", line 315, in train
    train_op = self.optimizer.minimize(loss=lambda: loss, var_list=tf.compat.v1.trainable_variables())
  File "/home/lhd/.local/lib/python3.8/site-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 539, in minimize
    return self.apply_gradients(grads_and_vars, name=name)
  File "/home/lhd/.local/lib/python3.8/site-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 640, in apply_gradients
    grads_and_vars = optimizer_utils.filter_empty_gradients(grads_and_vars)
  File "/home/lhd/.local/lib/python3.8/site-packages/keras/optimizers/optimizer_v2/utils.py", line 73, in filter_empty_gradients
    raise ValueError(f"No gradients provided for any variable: {variable}. "
ValueError: No gradients provided for any variable: (['encoder_0/kernel:0', 'encoder_0/bias:0', 'encoder_1/kernel:0', 'encoder_1/bias:0', 'encoder_2/kernel:0', 'encoder_2/bias:0', 'encoder_3/kernel:0', 'encoder_3/bias:0', 'decoder_3/kernel:0', 'decoder_3/bias:0', 'decoder_2/kernel:0', 'decoder_2/bias:0', 'decoder_1/kernel:0', 'decoder_1/bias:0', 'decoder_0/kernel:0', 'decoder_0/bias:0'],). Provided `grads_and_vars` is ((None, <tf.Variable 'encoder_0/kernel:0' shape=(10000, 200) dtype=float32>), (None, <tf.Variable 'encoder_0/bias:0' shape=(200,) dtype=float32>), (None, <tf.Variable 'encoder_1/kernel:0' shape=(200, 128) dtype=float32>), (None, <tf.Variable 'encoder_1/bias:0' shape=(128,) dtype=float32>), (None, <tf.Variable 'encoder_2/kernel:0' shape=(128, 64) dtype=float32>), (None, <tf.Variable 'encoder_2/bias:0' shape=(64,) dtype=float32>), (None, <tf.Variable 'encoder_3/kernel:0' shape=(64, 2) dtype=float32>), (None, <tf.Variable 'encoder_3/bias:0' shape=(2,) dtype=float32>), (None, <tf.Variable 'decoder_3/kernel:0' shape=(2, 64) dtype=float32>), (None, <tf.Variable 'decoder_3/bias:0' shape=(64,) dtype=float32>), (None, <tf.Variable 'decoder_2/kernel:0' shape=(64, 128) dtype=float32>), (None, <tf.Variable 'decoder_2/bias:0' shape=(128,) dtype=float32>), (None, <tf.Variable 'decoder_1/kernel:0' shape=(128, 200) dtype=float32>), (None, <tf.Variable 'decoder_1/bias:0' shape=(200,) dtype=float32>), (None, <tf.Variable 'decoder_0/kernel:0' shape=(200, 10000) dtype=float32>), (None, <tf.Variable 'decoder_0/bias:0' shape=(10000,) dtype=float32>)).
```
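The root cause appears to be that `lambda: loss` just returns a tensor that was computed *before* the call to `minimize`, so the Keras v2 optimizer cannot trace any gradients from it back to the variables, and every entry in `grads_and_vars` comes out as `None`. Below is a minimal sketch of the failing pattern and a working alternative using `tf.GradientTape` plus `apply_gradients`; the layer, shapes, and loss here are illustrative placeholders, not CADE's actual model or contrastive loss:

```python
import tensorflow as tf

# Toy stand-in for one autoencoder layer (not CADE's real architecture).
dense = tf.keras.layers.Dense(4)
x = tf.random.normal((8, 16))
opt = tf.keras.optimizers.Adam()

# Broken pattern (what the traceback shows): `loss` is a tensor computed
# outside any tape, so the lambda gives the optimizer nothing to differentiate.
#
#   y = dense(x)
#   loss = tf.reduce_mean(tf.square(y))
#   opt.minimize(loss=lambda: loss, var_list=dense.trainable_variables)
#   # -> ValueError: No gradients provided for any variable
#
# Working pattern: compute the loss inside a GradientTape so gradients
# can be traced, then apply them explicitly.
with tf.GradientTape() as tape:
    y = dense(x)                              # builds the layer's variables
    loss = tf.reduce_mean(tf.square(y))       # placeholder reconstruction loss
grads = tape.gradient(loss, dense.trainable_variables)
opt.apply_gradients(zip(grads, dense.trainable_variables))
```

Equivalently, `opt.minimize` can be kept if the callable *recomputes* the loss each time it is invoked (e.g. `opt.minimize(lambda: tf.reduce_mean(tf.square(dense(x))), dense.trainable_variables)`). Note also that `tf.compat.v1.trainable_variables()` relies on v1 graph collections and is typically empty in TF2 eager mode; the layer's or model's own `trainable_variables` is the v2 way to get the variable list.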