
Support MKL2017 DNN API With New Branch #23

Merged
merged 12 commits on Sep 26, 2016

Conversation

@i8run (Contributor) commented Sep 26, 2016

No description provided.

@i8run i8run merged commit 425fced into intel-analytics:MKL2017 Sep 26, 2016
@jason-dai (Contributor)

  • We should define several possible engines (e.g., Scala and MKL), and then create modules using factory methods based on the current engine. In this way, the user can specify different engines at runtime without changing the code.
  • Each Module needs to have two methods: inputLayout and outputLayout, which return the input/gradOutput layout accepted by the module, and the layout of gradInput/output generated by the module.
    • For all MKL modules, these two methods will return MKL
    • For all Scala modules, these two methods will return Scala (except for Sequential, as shown below)
      • For Sequential, inputLayout is the same as its first module, and outputLayout is the same as its last module
  • Each MKL module needs to have two variables: prevEngine and nextEngine, which specify the engines for the modules immediately before and after it. At forward time, the MKL module needs to convert its input if prevEngine is Scala, and convert output if nextEngine is Scala; and similarly it will perform conversion at backward time if either prevEngine or nextEngine is Scala.
    • The MKL Concat module will also need to perform conversion for each Scala module it contains (at both forward and backward time)
    • As an optimization, the Scala Concat module will perform input/gradInput conversion for all the MKL modules it contains (so that input only needs to be converted once).
  • Each module also needs to have an initEngine(inputLayout, outputLayout) method; after a model is constructed, one needs to first call model.initEngine(Scala, Scala) before training.
    • For all MKL modules, initEngine will set the prevEngine and nextEngine accordingly; MKL Concat module will also call module(i).initEngine(MKL, MKL) for each of its modules
    • initEngine is a no-op for all Scala modules except containers (as shown below).
      • For Sequential, it will call module(i).initEngine(module(i-1).outputLayout, module(i+1).inputLayout) for each of its modules
      • For Concat, it will call module(i).initEngine(module(i).inputLayout, Scala) (as it will perform input/gradInput conversion itself)
  • We can define an MKLTensor which, in addition to the normal dimension/size information, also contains the MKL layout and a convert method to convert it back to a DenseTensor; we should only allow a limited subset of the operations in MKLTensor (e.g., array access or elementwise access); we can then use MKLTensor for buffers with MKL layout.
    • In particular, it is possible for all MKL modules to use MKLTensor for their parameters and gradParameters.
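To illustrate the first point, here is a minimal sketch of engine-based factory construction. All names here (Engine, Linear, ScalaLinear, MklLinear) are hypothetical and not taken from this PR; the bodies are placeholders.

```scala
// Hypothetical sketch: selecting a module implementation from the
// current engine, so user code stays unchanged across engines.
object Engine extends Enumeration {
  val Scala, MKL = Value
}

trait Module {
  def forward(input: Array[Float]): Array[Float]
}

// Placeholder Scala and MKL variants of the same module.
class ScalaLinear extends Module {
  def forward(input: Array[Float]): Array[Float] = input
}
class MklLinear extends Module {
  def forward(input: Array[Float]): Array[Float] = input
}

// Factory method that dispatches on the current engine at runtime.
object Linear {
  var currentEngine: Engine.Value = Engine.Scala
  def apply(): Module = currentEngine match {
    case Engine.MKL => new MklLinear
    case _          => new ScalaLinear
  }
}
```

With this shape, switching `Linear.currentEngine` (or a global engine setting) changes which implementation `Linear()` returns, without touching model-definition code.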
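The inputLayout/outputLayout/initEngine protocol above can be sketched as follows. This is an illustrative reading of the proposal, not the PR's actual API: the names (Layout, MklConv, ScalaReLU) are assumptions, and only Sequential is shown (Concat would call module(i).initEngine(module(i).inputLayout, Scala) per the description above).

```scala
// Hypothetical sketch of the layout/initEngine wiring described above.
object Layout extends Enumeration { val Scala, MKL = Value }

trait Module {
  def inputLayout: Layout.Value   // layout accepted for input/gradOutput
  def outputLayout: Layout.Value  // layout of output/gradInput it produces
  // No-op by default (plain Scala modules); containers and MKL modules override.
  def initEngine(prev: Layout.Value, next: Layout.Value): Unit = ()
}

class ScalaReLU extends Module {
  def inputLayout  = Layout.Scala
  def outputLayout = Layout.Scala
}

class MklConv extends Module {
  var prevEngine: Layout.Value = Layout.Scala
  var nextEngine: Layout.Value = Layout.Scala
  def inputLayout  = Layout.MKL
  def outputLayout = Layout.MKL
  override def initEngine(prev: Layout.Value, next: Layout.Value): Unit = {
    prevEngine = prev  // if Scala, convert input at forward time
    nextEngine = next  // if Scala, convert output at forward time
  }
}

class Sequential(val modules: Module*) extends Module {
  def inputLayout  = modules.head.inputLayout
  def outputLayout = modules.last.outputLayout
  override def initEngine(prev: Layout.Value, next: Layout.Value): Unit = {
    for (i <- modules.indices) {
      // Neighbors' layouts; the container's own arguments at the boundaries.
      val before = if (i == 0) prev else modules(i - 1).outputLayout
      val after  = if (i == modules.size - 1) next else modules(i + 1).inputLayout
      modules(i).initEngine(before, after)
    }
  }
}
```

After construction, `model.initEngine(Layout.Scala, Layout.Scala)` walks the model once and leaves each MKL module knowing whether it must convert on either side, matching the "convert only at Scala/MKL boundaries" intent.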
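The MKLTensor idea might look like the following. This is a rough sketch under stated assumptions: the field names and the Long layout handle are invented, and the conversion here just copies a buffer where a real version would call MKL's layout-conversion primitives.

```scala
// Hypothetical MKLTensor: normal size info plus an opaque MKL layout handle,
// exposing only a limited subset of tensor operations.
class MKLTensor(val size: Array[Int], val mklLayout: Long, buffer: Array[Float]) {
  // Elementwise access is one of the few permitted operations.
  def apply(i: Int): Float = buffer(i)

  // Convert back to a plain dense buffer (stands in for DenseTensor here;
  // a real implementation would invoke MKL's conversion routine).
  def toDense: Array[Float] = buffer.clone()
}
```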
