The example of autograd in Python:

x = mx.nd.array([1,2,3,4])
x.attach_grad()
with mx.autograd.record():
    y = x * x + 1
y.backward()
print(x.grad)
I tried to implement a C++ version of the above example, and the code is:
void test()
{
    Context device(DeviceType::kCPU, 0);
    NDArray x = NDArray(Shape(3, 4), device, true);
    NDArray grad_x = NDArray(Shape(3, 4), device, true);
    mx_uint array[1] = {1};
    void* p_x[1] = { static_cast<void*>(&x) };
    void* p_grad_x[1] = { static_cast<void*>(&grad_x) };
    MXAutogradMarkVariables(1, p_x, array, p_grad_x);  // the corresponding Python code is: x.attach_grad()
    MXNotifyShutdown();
}
But I get a segmentation fault (core dumped). Can MXNet provide a C++ example of autograd?
The gdb information is:
(gdb) bt
#0 0x0000000000406126 in __gnu_cxx::__exchange_and_add(int volatile*, int) ()
#1 0x00000000004061bb in __gnu_cxx::__exchange_and_add_dispatch(int*, int) ()
#2 0x00000000004280db in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() ()
#3 0x00007ffff2325dc2 in mxnet::autograd::AutogradRuntime::MarkVariables(std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, std::vector<unsigned int, std::allocator<unsigned int> > const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&) () from ./libmxnet.so
#4 0x00007ffff26445fc in MXAutogradMarkVariables () from ./libmxnet.so
#5 0x000000000040ede8 in test() ()
#6 0x000000000040ee90 in main ()
(gdb) f 3
#3 0x00007ffff2325dc2 in mxnet::autograd::AutogradRuntime::MarkVariables(std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&, std::vector<unsigned int, std::allocator<unsigned int> > const&, std::vector<mxnet::NDArray*, std::allocator<mxnet::NDArray*> > const&) () from ./libmxnet.so
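For reference, here is a rough, untested sketch of what the same flow might look like when driven through the C API from the cpp-package. It assumes the cpp-package's NDArray::GetHandle(), SyncCopyFromCPU(), and GetData() accessors and the MXAutogradSetIsRecording / MXAutogradBackward signatures from mxnet/c_api.h; unlike the code above, it passes the library-side NDArrayHandle values (from GetHandle()) to MXAutogradMarkVariables rather than the addresses of the C++ wrapper objects:

#include <iostream>
#include <vector>
#include "mxnet-cpp/MxNetCpp.h"
#include "mxnet/c_api.h"

using namespace mxnet::cpp;

void test()
{
    Context device(DeviceType::kCPU, 0);

    // Allocate storage eagerly (delay_alloc = false) and fill x with [1, 2, 3, 4].
    NDArray x(Shape(4), device, false);
    std::vector<mx_float> data = {1, 2, 3, 4};
    x.SyncCopyFromCPU(data.data(), data.size());
    NDArray grad_x(Shape(4), device, false);

    // x.attach_grad(): pass NDArrayHandle values, not &x / &grad_x.
    mx_uint grad_req[1] = {1};  // 1 == kWriteTo
    NDArrayHandle vars[1]  = { x.GetHandle() };
    NDArrayHandle grads[1] = { grad_x.GetHandle() };
    MXAutogradMarkVariables(1, vars, grad_req, grads);

    // with mx.autograd.record():
    int prev = 0;
    MXAutogradSetIsRecording(1, &prev);
    NDArray y = x * x + 1.0f;
    MXAutogradSetIsRecording(prev, &prev);

    // y.backward()
    NDArrayHandle outputs[1] = { y.GetHandle() };
    MXAutogradBackward(1, outputs, nullptr, 0);

    // print(x.grad): grad_x should now hold dy/dx = 2 * x.
    NDArray::WaitAll();
    const mx_float* g = grad_x.GetData();
    for (size_t i = 0; i < 4; ++i) {
        std::cout << g[i] << " ";
    }
    std::cout << std::endl;

    MXNotifyShutdown();
}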
This issue is closed due to lack of activity in the last 90 days. Feel free to ping me to reopen if this is still an active issue. Thanks!
Also, do please check out our forum (and Chinese version) for general "how-to" questions.