❓ Questions / Help / Support #194
kuruhuru started this conversation in Show and tell
Replies: 1 comment · 2 replies
-
Hi, our models are not designed for learning. As for testing how our models affect the training of other models, that is out of scope for this repository.
-
❓ Questions and Help
Hello!
I got a strange error; please advise a fix.
I have a model that works and trains successfully, with no errors.
But after I run your code:
```python
import torch

USE_ONNX = False  # change this to True if you want to test onnx model
if USE_ONNX:
    !pip install -q onnxruntime

vad_model, ut = torch.hub.load(repo_or_dir='snakers4/silero-vad',
                               model='silero_vad',
                               force_reload=True,
                               onnx=USE_ONNX)
```
my model starts raising errors during training:
```
    110
    111 model.zero_grad()
--> 112 loss.backward()
    113
    114 optimizer.step()

/usr/local/lib/python3.7/dist-packages/torch/_tensor.py in backward(self, gradient, retain_graph, create_graph, inputs)
    361         create_graph=create_graph,
    362         inputs=inputs)
--> 363     torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
    364
    365     def register_hook(self, hook):

/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
    173     Variable.execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
    174         tensors, grad_tensors, retain_graph, create_graph, inputs,
--> 175         allow_unreachable=True, accumulate_grad=True)  # Calls into the C++ engine to run the backward pass
    176
    177 def grad(

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
```
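This RuntimeError means the loss tensor was produced while autograd was not recording a graph. One common cause (an assumption here, not confirmed in this thread) is that some code run beforehand disables autograd globally via `torch.set_grad_enabled(False)`, so every later forward pass yields tensors without a `grad_fn`. A minimal sketch reproducing the symptom and the workaround:

```python
import torch

# Assumption: something (e.g. third-party model-loading code) disabled
# autograd globally before training resumed.
torch.set_grad_enabled(False)

model = torch.nn.Linear(4, 1)
x = torch.randn(2, 4)

loss = model(x).sum()  # built with autograd off: loss has no grad_fn
try:
    loss.backward()
except RuntimeError as e:
    print(e)  # same "does not require grad" error as in the traceback

# Workaround: re-enable autograd before calling backward() again.
torch.set_grad_enabled(True)
loss = model(x).sum()
loss.backward()  # succeeds; parameter gradients are populated
assert model.weight.grad is not None
```

If this is the cause, calling `torch.set_grad_enabled(True)` (or checking `torch.is_grad_enabled()`) right after loading the external model should restore training.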