
Compile error (not mxnet itself, but the cpp interface) #12543

Closed
s0302102 opened this issue Sep 13, 2018 · 5 comments
Labels: Build, C++


@s0302102

Description

Compiling MXNet with the cpp-package enabled succeeds, but when I use the resulting libmxnet.so from my own C++ program, some of the header files produce compile errors.

Environment info (Required)

----------Python Info----------
('Version :', '2.7.12')
('Compiler :', 'GCC 5.4.0 20160609')
('Build :', ('default', 'Dec 4 2017 14:50:18'))
('Arch :', ('64bit', 'ELF'))
------------Pip Info-----------
('Version :', '10.0.1')
('Directory :', '/usr/local/lib/python2.7/dist-packages/pip')
----------MXNet Info-----------
('Version :', '1.2.0')
('Directory :', '/usr/local/lib/python2.7/dist-packages/mxnet')
('Commit Hash :', '73d879cf6439eb83b337fcbf6c743dbf385b9766')
----------System Info----------
('Platform :', 'Linux-4.15.0-34-generic-x86_64-with-Ubuntu-16.04-xenial')
('system :', 'Linux')
('node :', 'xyliu-B250M-D3H')
('release :', '4.15.0-34-generic')
('version :', '#37~16.04.1-Ubuntu SMP Tue Aug 28 10:44:06 UTC 2018')
----------Hardware Info----------
('machine :', 'x86_64')
('processor :', 'x86_64')
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 158
Model name: Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz
Stepping: 9
CPU MHz: 4099.257
CPU max MHz: 4200.0000
CPU min MHz: 800.0000
BogoMIPS: 7200.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 8192K
NUMA node0 CPU(s): 0-7
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp flush_l1d
----------Network Test----------
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0008 sec, LOAD: 1.2187 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0022 sec, LOAD: 3.9907 sec.
Error open FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, <urlopen error ('_ssl.c:574: The handshake operation timed out',)>, DNS finished in 0.261016130447 sec.
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0031 sec, LOAD: 0.9128 sec.
Error open Gluon Tutorial(en): http://gluon.mxnet.io, <urlopen error ('_ssl.c:574: The handshake operation timed out',)>, DNS finished in 0.618535995483 sec.
Error open Gluon Tutorial(cn): https://zh.gluon.ai, <urlopen error ('_ssl.c:574: The handshake operation timed out',)>, DNS finished in 1.5422270298 sec.

Package used (Python/R/Scala/Julia):
(I'm using cpp-package)

For Scala user, please provide:

  1. Java version: (java -version)
  2. Maven version: (mvn -version)
  3. Scala runtime if applicable: (scala -version)

For R user, please provide R sessionInfo():

Build info (Required if built from source)

Compiler (gcc/clang/mingw/visual studio):
g++
MXNet commit hash:
(Paste the output of git rev-parse HEAD here.)
597a637
Build config:
(Paste the content of config.mk, or the build command.)
mxnet itself compiles OK.

Error Message:

Typical errors:

3rParty/dmlc-core/include/dmlc/base.h:245:1: error: template with C linkage

3rParty/dmlc-core/include/dmlc/base.h:282:14: error: no match for 'operator[]' (operand types are 'const string {aka const std::__cxx11::basic_string<char>}' and 'int')
    return &str[0]

/usr/include/x86_64-linux-gnu/bits/waitstatus.h:79:27: error: redeclaration of 'unsigned int wait::<anonymous struct>::__w_retcode'
    unsigned int __w_retcode:8;

and other similar errors.
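
For context, the "template with C linkage" diagnostic is what g++ emits when a template declaration is reached inside an extern "C" block, which usually means a C++ header was included from within one. A minimal sketch that reproduces the diagnostic (illustrative only, not taken from this project):

// Templates cannot be given C linkage, so any template declaration
// inside an extern "C" block, including one reached through an
// #include of a C++ header, triggers the error.
extern "C" {
template <typename T>
T identity(T x) { return x; }   // g++: error: template with C linkage
}

int main() { return 0; }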

Minimum reproducible example

cpp code example:

#include <chrono>

#include "mxnet-cpp/MxNetCpp.h"

using namespace std;
using namespace mxnet::cpp;

Symbol mlp(const vector<int> &layers)
{
auto x = Symbol::Variable("X");
auto label = Symbol::Variable("label");

vector<Symbol> weights(layers.size());
vector<Symbol> biases(layers.size());
vector<Symbol> outputs(layers.size());

for (size_t i = 0; i < layers.size(); ++i)
{
    weights[i] = Symbol::Variable("w" + to_string(i));
    biases[i] = Symbol::Variable("b" + to_string(i));
    Symbol fc = FullyConnected(
        i == 0 ? x : outputs[i - 1],  // data
        weights[i],
        biases[i],
        layers[i]);
    outputs[i] = i == layers.size() - 1 ? fc : Activation(fc, ActivationActType::kRelu);
}

return SoftmaxOutput(outputs.back(), label);

}

int main(int argc, char** argv)
{
const int image_size = 28;
const vector<int> layers{ 128, 64, 10 };
const int batch_size = 100;
const int max_epoch = 10;
const float learning_rate = 0.1;
const float weight_decay = 1e-2;

auto train_iter = MXDataIter("MNISTIter")
    .SetParam("image", "../data/train-images.idx3-ubyte")
    .SetParam("label", "../data/train-labels.idx1-ubyte")
    .SetParam("batch_size", batch_size)
    .SetParam("flat", 1)
    .CreateDataIter();
auto val_iter = MXDataIter("MNISTIter")
    .SetParam("image", "../data/t10k-images.idx3-ubyte")
    .SetParam("label", "../data/t10k-labels.idx1-ubyte")
    .SetParam("batch_size", batch_size)
    .SetParam("flat", 1)
    .CreateDataIter();

auto net = mlp(layers);

Context ctx = Context::cpu();  // Use CPU for training
//Context ctx = Context::gpu();

std::map<string, NDArray> args;
args["X"] = NDArray(Shape(batch_size, image_size*image_size), ctx);
args["label"] = NDArray(Shape(batch_size), ctx);
// Let MXNet infer shapes of other parameters such as weights
net.InferArgsMap(ctx, &args, args);

// Initialize all parameters with uniform distribution U(-0.01, 0.01)
auto initializer = Uniform(0.01);
for (auto& arg : args)
{
    // arg.first is parameter name, and arg.second is the value
    initializer(arg.first, &arg.second);
}

// Create sgd optimizer
Optimizer* opt = OptimizerRegistry::Find("sgd");
opt->SetParam("rescale_grad", 1.0 / batch_size)
    ->SetParam("lr", learning_rate)
    ->SetParam("wd", weight_decay);

// Create executor by binding parameters to the model
auto *exec = net.SimpleBind(ctx, args);
auto arg_names = net.ListArguments();

// Start training
for (int iter = 0; iter < max_epoch; ++iter)
{
    int samples = 0;
    train_iter.Reset();

    auto tic = chrono::system_clock::now();
    while (train_iter.Next())
    {
        samples += batch_size;
        auto data_batch = train_iter.GetDataBatch();
        // Set data and label
        data_batch.data.CopyTo(&args["X"]);
        data_batch.label.CopyTo(&args["label"]);

        // Compute gradients
        exec->Forward(true);
        exec->Backward();
        // Update parameters
        for (size_t i = 0; i < arg_names.size(); ++i)
        {
            if (arg_names[i] == "X" || arg_names[i] == "label") continue;
            opt->Update(i, exec->arg_arrays[i], exec->grad_arrays[i]);
        }
    }
    auto toc = chrono::system_clock::now();

    Accuracy acc;
    val_iter.Reset();
    while (val_iter.Next())
    {
        auto data_batch = val_iter.GetDataBatch();
        data_batch.data.CopyTo(&args["X"]);
        data_batch.label.CopyTo(&args["label"]);
        // Forward pass is enough as no gradient is needed when evaluating
        exec->Forward(false);
        acc.Update(data_batch.label, exec->outputs[0]);
    }
    float duration = chrono::duration_cast<chrono::milliseconds>(toc - tic).count() / 1000.0;
    LG << "Epoch: " << iter << " " << samples / duration << " samples/sec Accuracy: " << acc.Get();
}

delete exec;
MXNotifyShutdown();

return 0;

}
CMakeLists.txt:
project(mxnet_cpp_test)
cmake_minimum_required(VERSION 2.8)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -g -std=c++11 -W")

include_directories(
${CMAKE_CURRENT_SOURCE_DIR}/../3rParty/mxnet/inc/
${CMAKE_CURRENT_SOURCE_DIR}/../3rParty/dmlc-core/include/dmlc
${CMAKE_CURRENT_SOURCE_DIR}/../3rParty/mxnet/inc/cpp-package/include
)

link_directories(${CMAKE_CURRENT_SOURCE_DIR}/../3rdParty/mxnet/lib/)

add_executable(mxnet_cpp_test mxnet_cpp_test.cpp)
target_link_libraries(
mxnet_cpp_test
${CMAKE_CURRENT_SOURCE_DIR}/../3rdParty/mxnet/lib/libmxnet.so)

Whether I use the CMakeLists.txt or write a makefile directly, the results are the same. I don't know how to fix this issue.

@kalyc (Contributor) commented Sep 13, 2018

Thanks for submitting the issue @s0302102
@mxnet-label-bot[C++, Build]

@marcoabreu added the Build and C++ labels Sep 13, 2018
@leleamol (Contributor)

@s0302102
The code that is posted with this issue has some syntax errors that would have caused compilation issues. For example:

  1. The #include directive on the first line does not name a header file.
  2. In the function definition "Symbol mlp(const vector &layers)", the element type of the vector is not specified. Changing it to "Symbol mlp(const std::vector<int> &layers)" would help.
  3. In the implementation of the "mlp()" function, explicitly qualifying "vector" as "std::vector" would also help (see the sketch after this list).
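
Concretely, the fixes in items 1-3 might look like the following sketch (the int element type is an assumption, inferred from the layer sizes { 128, 64, 10 } used later in the program):

#include <chrono>                  // item 1: name the header explicitly
#include "mxnet-cpp/MxNetCpp.h"

// Items 2 and 3: give vector its element type and qualify it with std::
// (the std:: prefix is only needed where "using namespace std;" is absent).
mxnet::cpp::Symbol mlp(const std::vector<int> &layers);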

I tried the above fixes and the code worked with the latest code in the mxnet repository. I also had to correct paths such as ../data/train-images.idx3-ubyte for my setup.

You can also refer to the "mlp_cpu.cpp" example in the https://github.com/apache/incubator-mxnet/tree/master/cpp-package/example folder.

@s0302102 (Author) commented Sep 14, 2018

@leleamol Thanks very much for your comment.
First, let me apologize for the code errors. I compiled the code on Linux but submitted the issue from Windows, and something went wrong when I pasted it from Linux to Windows via the VNC viewer. The first two items you mentioned above should read, respectively:

  1. #include <chrono>
  2. vector<int>
    As for the third question: "using namespace std;" is used at the beginning of the code, so it should be OK.
    I suspect the problem is not the example code itself, but the include order or something else (see the sketch below). The code itself didn't give any errors when I compiled it.
    By the way, I am also using the newest version of mxnet, and I can compile mxnet itself correctly (the .a file is about 400 MB and the .so file about 300 MB).
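
One way to test the include-order hypothesis is to compile a file that contains nothing but the headers, a minimal sketch assuming the same include paths as in the CMakeLists.txt above; if it fails with the same errors, the problem lies in the headers or include paths rather than in the example code:

// Header-only isolation test: no MXNet API calls, just the include.
// Compile with the same flags and -I paths as the failing example.
#include "mxnet-cpp/MxNetCpp.h"

int main() { return 0; }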

///////
After I submitted this comment, I found that "chrono" and "int" (originally wrapped in "<" and ">") were missing again. I fixed that by adding "\" before "<" and ">". Sorry for that, as I seldom use Markdown.

@leleamol (Contributor)

@s0302102 I was able to build the latest version of mxnet and the examples correctly.
Can you try building the examples in the cpp-package/example directory as described here: https://github.com/apache/incubator-mxnet/blob/master/cpp-package/README.md ?
The error in the first post seems to come from the compiler. Could you post the complete error message you got when you built your example?

For reference, here is a sample command for compiling an example (test_example.cpp) in the cpp-package/example directory.

g++ -std=c++0x -DMSHADOW_FORCE_STREAM -Wall -Wsign-compare -O3 -DNDEBUG=1 -I/mxnet/3rdparty/mshadow/ -I/mxnet/3rdparty/dmlc-core/include -fPIC -I/mxnet/3rdparty/tvm/nnvm/include -I/mxnet/3rdparty/dlpack/include -I/mxnet/3rdparty/tvm/include -Iinclude -funroll-loops -Wno-unused-parameter -Wno-unknown-pragmas -Wno-unused-local-typedefs -msse3 -mf16c -DMSHADOW_USE_CUDA=0 -DMSHADOW_USE_CBLAS=1 -DMSHADOW_USE_MKL=0 -DMSHADOW_RABIT_PS=0 -DMSHADOW_DIST_PS=0 -DMSHADOW_USE_PASCAL=0 -DMXNET_USE_OPENCV=1 -I/usr/include/opencv -fopenmp -DMXNET_USE_OPERATOR_TUNING=1 -DMXNET_USE_LAPACK -DMXNET_USE_NCCL=0 -DMXNET_USE_LIBJPEG_TURBO=0 -Icpp-package/include -Ibuild/cpp-package/include -MM -MT cpp-package/example/mlp cpp-package/example/mlp.cpp

The header-file dependency list for this example (as generated by the -MM -MT flags in the command above) is:

cpp-package/example/mlp: cpp-package/example/mlp.cpp
cpp-package/include/mxnet-cpp/MxNetCpp.h
cpp-package/include/mxnet-cpp/executor.hpp
cpp-package/include/mxnet-cpp/executor.h
cpp-package/include/mxnet-cpp/base.h include/mxnet/c_api.h
/mxnet/3rdparty/tvm/nnvm/include/nnvm/c_api.h
cpp-package/include/mxnet-cpp/symbol.h
cpp-package/include/mxnet-cpp/ndarray.h
cpp-package/include/mxnet-cpp/shape.h
cpp-package/include/mxnet-cpp/op_map.h
/mxnet/3rdparty/dmlc-core/include/dmlc/logging.h
/mxnet/3rdparty/dmlc-core/include/dmlc/./base.h
cpp-package/include/mxnet-cpp/optimizer.h
cpp-package/include/mxnet-cpp/lr_scheduler.h
cpp-package/include/mxnet-cpp/symbol.hpp
cpp-package/include/mxnet-cpp/op_suppl.h
cpp-package/include/mxnet-cpp/operator.h
cpp-package/include/mxnet-cpp/ndarray.hpp
cpp-package/include/mxnet-cpp/monitor.hpp
cpp-package/include/mxnet-cpp/monitor.h
cpp-package/include/mxnet-cpp/operator.hpp
cpp-package/include/mxnet-cpp/optimizer.hpp
cpp-package/include/mxnet-cpp/op.h
cpp-package/include/mxnet-cpp/op_util.h
/mxnet/3rdparty/dmlc-core/include/dmlc/optional.h
/mxnet/3rdparty/dmlc-core/include/dmlc/./common.h
/mxnet/3rdparty/dmlc-core/include/dmlc/././logging.h
/mxnet/3rdparty/dmlc-core/include/dmlc/./logging.h
/mxnet/3rdparty/dmlc-core/include/dmlc/./type_traits.h
/mxnet/3rdparty/dmlc-core/include/dmlc/././base.h
cpp-package/include/mxnet-cpp/kvstore.hpp
cpp-package/include/mxnet-cpp/kvstore.h
cpp-package/include/mxnet-cpp/io.hpp cpp-package/include/mxnet-cpp/io.h
cpp-package/include/mxnet-cpp/metric.h
cpp-package/include/mxnet-cpp/initializer.h

@nswamy (Member) commented Oct 11, 2018

@s0302102 could you please verify as @leleamol suggested, and close the issue if it is no longer a problem.
