# Design Of Refactor Topology #1665

## Conversation
The background of this work: we want to use a code generator, or runtime generation, to automatically produce the model configuration functions, and to automatically check the correctness of a configuration at runtime.

How does one write a Layer today? See [this article](http://www.paddlepaddle.org/doc/dev/new_layer/index.html). The work breaks down into the following steps:
Typo: 一下 => 以下.
Done.
```proto
message LayerConfig {
  required string name = 1;
  required string type = 2;
```
Should there be a `description` field giving a textual description of the layer?
The LayerConfig here holds the actual parameters of one layer instance, e.g. how large the size of a particular fc_layer is and what its activation is.
The description goes into LayerDef. LayerDef states which parameters an FC Layer can have; adding a Description there serves as the layer's documentation.
Got it!
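To restate the LayerDef/LayerConfig distinction in executable form, here is a hedged Python sketch (all names here, `FC_LAYER_DEF`, `make_config`, are illustrative, not Paddle APIs): the definition acts as a schema describing which attributes an FC layer may carry, while a config is one concrete, validated instance.

```python
# Hypothetical sketch: LayerDef as a schema, LayerConfig as a validated instance.

# "LayerDef": which attributes an fc layer may have; documentation lives here.
FC_LAYER_DEF = {
    "type": "fc",
    "description": "A fully connected layer.",
    "attributes": {
        "size": {"type": int, "doc": "output dimension"},
        "activation": {"type": str, "doc": "activation function name"},
    },
}

def make_config(layer_def, name, **attrs):
    """Build a "LayerConfig": concrete attribute values for one layer instance."""
    for key, value in attrs.items():
        spec = layer_def["attributes"].get(key)
        if spec is None:
            raise KeyError("unknown attribute: %s" % key)
        if not isinstance(value, spec["type"]):
            raise TypeError("attribute %s must be %s" % (key, spec["type"].__name__))
    return {"name": name, "type": layer_def["type"], "attributes": attrs}

cfg = make_config(FC_LAYER_DEF, "fc1", size=128, activation="relu")
```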
Implementing this currently has the following prerequisites:

* This work changes the protobuf message definitions between `Python <==> Paddle core`. The Python-side layer parsing functions need complete unit-test coverage to guarantee that system behavior is unchanged after this step; otherwise, modifying the protobuf directly is high-risk.
* `oneof` and `map` are `proto2` syntax, but they were only added to the codebase after `Protobuf 3.0`; if Paddle relies on these features, Paddle must depend on a Protobuf version of at least 3.0.
I saw somewhere that oneof is only supported by fairly late protobuf2 versions. Is there a link explaining this? I'd like to understand the details.
https://developers.google.com/protocol-buffers/docs/proto#oneof
I remember the official site used to explain this, but I can't find it now; you could Google it again.
map and oneof are not supported by any protobuf2 library; only the protobuf3 library supports them. But map and oneof are proto2 syntax.
OK, understood. I also searched just now and couldn't find it, but as long as it's clear.
* Stage goal: decouple Protobuf from the C++ implementation of Layer
* Approach: use `map` and `oneof` to turn attributes into a dictionary of multiple value types
* Problems:
  * The unit tests of config_parser must be completed first, to increase test coverage
Is it possible for the old and new mechanisms to coexist? E.g. keep the old one, add the new one elsewhere, concatenate them at parse time, and replace gradually. Or is it more reasonable to modify the old config_parser directly?
In the actual implementation we will surely migrate attribute by attribute (probably not layer by layer), so it will be a gradual process, with no abrupt change.
Each kind of Layer has a different `type`. Since `attributes` is a `map`, its keys can be defined by each layer itself. Common configuration parameters such as `activation` can share one key; layer-specific attributes can be namespaced with `.`. For example, CTCLayer can define a `blank` attribute whose key is `ctc.blank`.

This way, implementing a new Layer no longer requires modifying the protobuf messages. Moreover, when writing a new Layer, the author can declare which attributes it needs and what value ranges those attributes take. Then, when generating the Python configuration functions, we can also generate runtime checks, preventing users from misconfiguring the neural network.
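The key-naming convention and the runtime checks above can be sketched in Python (a hypothetical illustration; `ATTRIBUTE_DEFS` and `check_attributes` are not real Paddle names): shared attributes use plain keys, layer-specific ones are dotted, and declared value ranges enable checking at configuration time.

```python
# Hypothetical sketch of the attribute map with dotted, per-layer namespaces.
ATTRIBUTE_DEFS = {
    "activation": {"type": str},           # shared across all layers
    "ctc.blank": {"type": int, "min": 0},  # CTCLayer-specific, namespaced key
}

def check_attributes(attrs):
    """Validate one layer's attribute dict against the declared specs."""
    for key, value in attrs.items():
        spec = ATTRIBUTE_DEFS[key]  # raises KeyError for undeclared attributes
        if not isinstance(value, spec["type"]):
            raise TypeError(key)
        if "min" in spec and value < spec["min"]:
            raise ValueError(key)

check_attributes({"activation": "softmax", "ctc.blank": 0})  # passes
```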
Is it in the C++ code that the user declares the layer's attributes?
Then, when configuring the layer in Python, these attributes are fetched, so C++ needs to expose an API for retrieving them?
See the first document, 00.how_to_write_a_layer.md.
* Final goal: users only write the C++ implementation of a Layer; the rest of the Python code is generated automatically
* Stage goal: decouple Protobuf from the C++ implementation of Layer
* Approach: use `map` and `oneof` to turn attributes into a dictionary of multiple value types
Could this dictionary be a C++ map directly? I.e. give every Layer a map member variable describing its attributes, and simply remove the LayerConfig and Attribute messages from the proto.
In theory, of course. In practice it would be troublesome.
Protobuf here is only used as the cross-language communication protocol. Without that protocol we would have to call C++ functions directly, and for fairly complex message types that is quite cumbersome.
We need to support dynamic graphs, so we must keep the overhead of constructing the model small. Python protobuf is slow, so constructing the graph directly through a C++ API is also worth considering.
Basic idea:
* For each type of Layer, Paddle derives the names of two global functions from the layer's name. E.g., for FC Layer the global functions are `__get_fc_layer_definition__` and `__get_fc_layer_output_definition__`. These two global functions are generated automatically by `REGISTER_LAYER`.
"These two global functions are generated automatically by `REGISTER_LAYER`" — I don't quite follow.
```cpp
#define REGISTER_LAYER(name, cls)                                        \
  extern "C" void __get_##name##_layer_definition__(LayerDef& def) {     \
    cls::getLayerDefinition(def);                                        \
  }                                                                      \
                                                                         \
  std::vector<LayerOutputType> __get_##name##_layer_output__(            \
      const std::vector<LayerOutputType>& inputs, LayerConfig& self) {   \
    return cls::getLayerOutputType(inputs, self);                        \
  }                                                                      \
                                                                         \
  static InitFunction initFunction([] {                                  \
    Layer::registerLayer(#name, cls);                                    \
  })
```
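A Python analogue of what the macro achieves may make the registration idea clearer (purely illustrative; the real mechanism is the C++ macro above, and these names are invented): one registration call installs the class plus its two metadata hooks under the layer-type name.

```python
# Hypothetical Python analogue of REGISTER_LAYER: one registry entry holds
# the creator plus its two metadata functions, looked up by layer-type name.
LAYER_REGISTRY = {}

def register_layer(name, cls):
    LAYER_REGISTRY[name] = {
        "create": cls,
        "get_definition": cls.get_layer_definition,
        "get_output": cls.get_layer_output_type,
    }

class FCLayer:
    @staticmethod
    def get_layer_definition():
        # Stand-in for filling a LayerDef protobuf message.
        return {"type": "fc", "attributes": ["size", "activation"]}

    @staticmethod
    def get_layer_output_type(inputs):
        return inputs  # placeholder: echo the input types unchanged

register_layer("fc", FCLayer)
```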
```cpp
void forward() { ... }
void backward() { ... }

static void getLayerDefinition(LayerDef& def) {
```
So the flow seems to be: the C++ side also builds a proto describing the layer's properties (input/output); Python gets this proto and constructs the corresponding Python class. If everything is a proto attribute, maybe Python doesn't need a dedicated generated code snippet at all. Instead, like the v2 converter, convert the proto info into function definitions; and if a user wants to inspect a layer's definition, just parse these two protos directly.
But layer composition probably involves some special logic; does that information need to appear in the layer definition?
> So the flow seems to be: the C++ side builds a proto describing the layer's properties, Python gets this proto and constructs the corresponding class, converting proto info into function definitions like the v2 converter does.

For a dynamically typed language like Python, generating functions directly is simpler than a code generator.
> But layer composition probably involves some special logic; does that information need to appear in the layer definition?

I haven't figured out what the "special logic" would concretely be. If there are cases this structure can't cover, e.g. `recurrent_group`, perhaps it is fine to define them in the code generator, since that is a per-language best practice anyway.
For `recurrent_group`, wouldn't a code generator be too obscure? The v2 conversion code for that part is already quite hard to understand.
```cpp
}

// Each input of a Paddle Layer contains zero or one parameter,
// so parameter_attr.size() == inputs.size()
```
`parameter_attr.size() <= inputs.size()`?
`parameter_attr.size() == inputs.size()`.
If an input should have no parameter, pass an empty ParameterDef. The `inputs` and `parameter_attr` arrays correspond one-to-one by array index.
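The index alignment described here can be sketched as follows (hypothetical names; an empty dict stands in for an empty ParameterDef):

```python
# Hypothetical sketch: inputs and parameter_attr aligned by index; an input
# without a parameter gets an empty placeholder rather than being omitted.
inputs = ["image", "label"]
parameter_attr = [{"size": 784 * 10}, {}]  # "label" carries no parameter

assert len(parameter_attr) == len(inputs)  # the invariant under discussion

def parameter_for(input_name):
    """Look up the parameter attached to a given input, or None if empty."""
    i = inputs.index(input_name)
    return parameter_attr[i] or None  # empty ParameterDef -> None
```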
```proto
// Define the layer's output types given its input types.
message LayerOutputDef {
  // Output name. Each Paddle Layer could have multiple outputs.
```
Paddle currently has only one output per layer.
I understand a layer's inputs and outputs share the same type, ArgumentDef, but here I only see:

```proto
message LayerDef {
  ...
  repeated ArgumentDef inputs = 3;
  ...
}
...
message LayerOutputDef {}
```

I don't see what type a layer's output is. Is it also ArgumentDef?
A layer's outputs are not specified in the layer's meta information.
Each layer's output shape is computed while parsing the configuration; it cannot be fixed by meta information. That's why LayerDef has no outputs.
```proto
}

// Argument defines the supported input types.
message ArgumentDef {
```
Does ArgumentDef correspond in some way to function/BufferArg (which describes a function's inputs/outputs)?
Discussed offline with Daoyuan. Conclusion:

```proto
enum DataType {
  Dense = 1;
  Sparse = 2;
  SparseBinary = 3;
  Index = 4;
};
enum SequenceNestedLevel {
  NO_SEQUENCE = 0;
  PLAIN_SEQUENCE = 1;
  NESTED_SEQUENCE = 2;
};
InputType = DataType << 16 | SequenceNestedLevel
```
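The packing rule agreed above can be checked with a small Python sketch (enum values taken from the conclusion; the function names are illustrative):

```python
# Encode/decode InputType = DataType << 16 | SequenceNestedLevel,
# using the enum values from the conclusion above.
DENSE, SPARSE, SPARSE_BINARY, INDEX = 1, 2, 3, 4
NO_SEQUENCE, PLAIN_SEQUENCE, NESTED_SEQUENCE = 0, 1, 2

def make_input_type(data_type, seq_level):
    # High 16 bits: data type; low 16 bits: sequence nesting level.
    return data_type << 16 | seq_level

def split_input_type(input_type):
    return input_type >> 16, input_type & 0xFFFF
```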
```proto
  }
}

message LayerConfig {
```
This is called "simple example code", but LayerConfig isn't complete, is it? Will all the attributes in https://github.com/PaddlePaddle/Paddle/blob/develop/proto/ModelConfig.proto#L284 eventually be code-generated — in particular LayerInputConfig, OperatorConfig, etc.?
Good point. I hadn't fully considered LayerInputConfig and OperatorConfig; those are `repeated`, so this approach doesn't work for them directly.
An example has been added.
* How the parser is generated is a process each language defines for itself.
  * It can be offline: first write all layers' LayerDefs to a file, then other languages read that file to generate code.
  * It can also be online: for a dynamically typed language like Python, generating functions at runtime is simple, so there is no need to generate code first and then functions.
1. Use ConfigParser to parse the user's configuration file `trainer_config.conf`.
  * At this stage, the parser returns only a call graph, i.e. the call relationships between layers (`Graph Protobuf`), not the real `ModelConfig`.
  * This Graph Protobuf is very simple: it only records which layer was invoked and which attributes were set.
1. Pass this call graph to Paddle Core to generate the real `ModelConfig`.
Typo: 讲 => 将.
What exactly is the process of converting the call graph (GraphProtobuf) into the real ModelConfig, and at which step is it done? Is it just protobuf deserialization?
* For each layer, call `getLayerOutputDefinition` in order to obtain the layer's outputs, and pass them to the next layer.
* The C++ side actually generates each layer's LayerConfig; inside `getLayerOutputDefinition` the user can modify the generated LayerConfig, e.g. add auxiliary inputs or set parameter sizes.
* For each item in the `GraphProtobuf`, generate a LayerConfig.
* Then call `getLayerOutputType` in order to obtain the layer's outputs and complete the network's parameter-shape inference, passing the LayerConfig on to the next layer.
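The sequential inference step can be sketched as follows (a hypothetical illustration of the idea, not Paddle code): each layer type maps its input shapes to output shapes, and the parser threads shapes through the call graph in order.

```python
# Hypothetical sketch of parse-time shape propagation through a call graph.
def fc_output(input_shapes, config):
    batch = input_shapes[0][0]
    return [(batch, config["size"])]  # fc output: batch_size x size

SHAPE_RULES = {"fc": fc_output}  # layer type -> output-shape rule

def infer(graph, input_shape):
    """graph: ordered list of (layer_type, config); returns final output shapes."""
    shapes = [input_shape]
    for layer_type, config in graph:
        shapes = SHAPE_RULES[layer_type](shapes, config)
    return shapes

out = infer([("fc", {"size": 64}), ("fc", {"size": 10})], (32, 784))
```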
Is the inference done when initializing the GradientMachine?
```json
      "repeatable": true
    }
  ],
  "parameter_attr": [
```
Can the base layer's meta information be inherited? That would simplify descriptions like parameter_attr.
This protobuf fragment is generated by the C++ part of Paddle. So although there is repeated information here, the C++ side can use a single function to generate the protobuf data.
Understood. If I only modify or add some parameters of an already-defined layer, is it enough to modify/add the corresponding meta information?
```cpp
public:
  void init() { ... }
  void forward() { ... }
  void backward() { ... }
```
Can a loss layer be defined whose gradient is computed in Python and passed in?
Good point.
1. From the meta information of all layers, the LayerDefs, generate the parser ConfigParser.
  * How the parser is generated is a process each language defines for itself.
  * It can be offline: first write all layers' LayerDefs to a file, then other languages read that file to generate code.
  * It can also be online: for a dynamically typed language like Python, generating functions at runtime is simple, so there is no need to generate code first and then functions.
Here the config_parser is generated before reading the user's network definition, which means traversing the meta information of every layer.
Could we instead read the user's network configuration first, learn which layers' meta information is needed, and only then generate the config_parser?
That is indeed possible, and the work could even be lazy: when the user calls a function Python can't find, the code generator generates it on demand.
However, for generating Sphinx documentation it probably makes more sense to generate the config_parser before parsing any configuration, so Sphinx can discover all supported layers and document them.
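The lazy variant mentioned here can be sketched in Python (purely illustrative, not the proposed Paddle API): the first time a config function is requested, it is generated from the layer's meta information and cached.

```python
# Hypothetical sketch: generate a layer-config function lazily, on first use.
LAYER_DEFS = {"fc": {"attributes": ["size"]}}  # stand-in for LayerDef metadata
_GENERATED = {}

def get_config_function(name):
    if name not in _GENERATED:
        layer_def = LAYER_DEFS[name]  # the "code generator" input
        def config_fn(**attrs):
            unknown = set(attrs) - set(layer_def["attributes"])
            if unknown:
                raise KeyError(sorted(unknown))
            return {"type": name, "attributes": attrs}
        _GENERATED[name] = config_fn  # generated once, then cached
    return _GENERATED[name]

fc_layer = get_config_function("fc")
```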
#### LayerOutputType

* LayerOutputType describes what type a layer's inputs and outputs are (not their concrete values). It is computed at runtime.
need to consider the possibility of multiple outputs.
The current LstmStepLayer actually has two outputs; the second output is obtained through a separate GetOutputLayer.
Agreed that multiple outputs need supporting, so that the network configuration can connect individual outputs to different layers.
Yes; that's why `getLayerOutputType` returns a `std::vector<LayerOutputType>`.
```cpp
    .addDoc("FC Layer is fully connected. Blah blah blah...");
}

static std::vector<LayerOutputType> getLayerOutputType(const std::vector<LayerOutputDef>& inputs,
```
need to check whether inputs are consistent (e.g., whether dimensions and types match)
* Final goal: users only write the C++ implementation of a Layer; the rest of the Python code is generated automatically
* Stage goal: decouple Protobuf from the C++ implementation of Layer
* Approach: use `map` and `oneof` to turn attributes into a dictionary of multiple value types
> We need to support dynamic graphs, so we must keep the overhead of constructing the model small. Python protobuf is slow, so constructing the graph directly through a C++ API is also worth considering.

On the other hand, supporting dynamic graphs means driving Paddle Core purely from Python, a feature with no historical baggage. We could build directly on the Function work by @hedaoyuan and use a global variable (a tape) to record the result of every call. Then protobuf wouldn't be needed to construct the graph at all; we would simply maintain a graph structure on the Python side.
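The tape idea mentioned here can be sketched as follows (hypothetical, not an actual Paddle design): a global list records every op call, so the graph exists only as a trace on the Python side.

```python
# Hypothetical sketch: a global "tape" records each op call, so the graph is
# just the trace of what actually ran -- no protobuf construction needed.
TAPE = []

def run_op(op_name, *inputs):
    result = "%s_out%d" % (op_name, len(TAPE))  # placeholder for a tensor
    TAPE.append({"op": op_name, "inputs": list(inputs), "output": result})
    return result

h = run_op("fc", "x")
y = run_op("softmax", h)
```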
Can we add conversion between a Param and a layer output? E.g. convert a Param into a layer output, then operate on it or expose it in the Python API code.
* A developer who wants to write a new Layer has to modify several files.
  * First, add the parameters the new layer needs to Paddle's [protobuf file](https://github.com/PaddlePaddle/Paddle/blob/develop/proto/ModelConfig.proto).
  * Next, complete the C/C++ files for the layer, i.e. its forward and backward code.
  * Finally, finish the layer's configuration parsing in `config_parser.py`, `trainer_config_helpers`, and `paddle.v2`.
Should writing a new layer also include these two points:
- When writing GPU code, an extern and an inline function must be set up.
- Several kinds of unit tests: test_LayerGrad, Python API unit tests, unit tests of the CPU and GPU kernel functions (test_matrixCompare), etc.
This document still doesn't make clear what the problem is, nor how it should be solved.
@@ -0,0 +1,119 @@
# Topology Overview
Topology Overview ==> Design Doc: Add New Layers, to make the title of this document consistent with the title of this PR.
@@ -0,0 +1,119 @@
# Topology Overview
Topology is a concept in Paddle for representing neural networks. A neural network contains one topology, which describes how layers connect to each other, and many parameters. Other deep learning frameworks may call this concept a computation graph or a neural network configuration.
I think this paragraph can be shortened as:
In PaddlePaddle, we represent a neural network by its topology and parameters. The topology is a directed graph of layers.
# Topology Overview
Topology is a concept in Paddle for representing neural networks. A neural network contains one topology, which describes how layers connect to each other, and many parameters. Other deep learning frameworks may call this concept a computation graph or a neural network configuration.

The topology is not only an API-level concept but also how we organize the computation code for each `Layer` or `Function` in Paddle. Paddle has to maintain a dictionary from `Layer Type` to layer implementation, e.g. from the string `mul` to the function `void tensor_multiply(Tensor& ins, Tensor& outs)`. The mechanism of how users manipulate a topology, and how Paddle maps a user topology to implementations of `Layer` and `Function`, is a fundamental problem for refactoring Paddle.
This paragraph has quite a few problems; it's unclear what it wants to say.
- "Topology is not only ..." should be followed by "but also ...", but the follow-up doesn't connect.
- "API level concept" is a vague, grandiose phrase; unclear what it refers to.
- What is Layer Type?
- In the example, mul is obviously not a layer, so the example doesn't explain the preceding sentence and raises new questions.
- What does "manipulate topology" refer to?
- One closing backtick was written as `'`.

This paragraph seems to want to say something important, but it neither explains why this design simplifies adding layers, nor does it avoid introducing a new concept, Function.
- Topology is not merely an API that end users see; more importantly, the design of the topology also determines how Paddle's computation code (the layers) is organized.
- Layer Type is a string representing the layer's type; e.g. an fc_layer has type "fc".
- Paddle always has to maintain a `map<string, LayerCreator>` mapping, used to create Layer objects from the user's configuration. `manipulate topology` means creating and modifying a topology; I've changed it to `create and modify`.
Reading the whole document, I can't tell what problem it tries to solve. My best guess: each layer or function should gain some descriptions of its configuration parameters, e.g. a parameter's value range, to enable unit testing?
Why is this urgent and why must it be done now?
Moreover, this (a mechanism for dynamically checking a data member's value range) doesn't directly depend on Layer or Function; it could be written as a standalone package.
doc/design/topology.md (outdated)
```cpp
  // implementation here.
}

BEGIN_REGISTER_FUNCTION(cos, cos_kernel)
```
Why can't a function be defined with plain C++ syntax instead of introducing BEGIN/END_REGISTER_FUNCTION, which is hard to read? A reader has to grep for the definition of BEGIN/END_REGISTER_FUNCTION to understand it.
This is not the process of defining a Function but of registering a Function or Layer. It is indeed a bit obscure here; I'm thinking about a better way to express it. That said, other frameworks seem to register functions or layers with similar tricks.
### Kernel Developers

Alan is a professional CPU and GPU developer. He can write the kernel functions of a new `Layer` with the best performance. However, he is not familiar with Paddle's API language, Python. Alan should only need to write the kernel function and register it in Paddle; Paddle should then generate the user-side APIs for these kernel functions without Alan writing any further code.
What does this example have to do with "making it easier to add layers"?
Developers of new layers no longer need to write Python code; they only write the "how to compute" part.
doc/design/topology.md (outdated)
```cpp
BEGIN_REGISTER_FUNCTION(cos, cos_kernel)
  // The parameter of the cos function.
  func.addAttribute("scale", "The scale of cos layer").defaultValue(1.0).largerThan(0.0);
```
What is `func`? It is not defined above.
It is a variable generated inside the macro definition. That isn't very intuitive, though; I'll think about a better representation.
It feels like `func` isn't needed: make addAttribute()/addInput() member functions of a class. BEGIN_REGISTER_FUNCTION() defines a class to do the registration; the registration completes in that class's constructor, and END_REGISTER_FUNCTION() declares an instance of the class.
If a static instance is returned directly, `END_REGISTER_FUNCTION()` shouldn't be needed at all, right?
Bob is a QA developer of Paddle. He wants to test every `Function` and `Layer` Paddle supports. However, each layer has different configuration attributes, e.g. `scale` in the `cosine` function, and each attribute has its own value range and data type. Bob should be able to easily test all boundary conditions of a Layer or Function using the new topology mechanism.

```cpp
auto cos = function::Register("cos");
```
Why use a registerer here? Why not call the cos function directly?
Not really. It is the same as layer registration: the Register manages all the layers and functions Paddle supports, so that other languages can use them.
I also don't quite understand the `cos` here. If it's a `Function` object, shouldn't `function::Register("cos")` be a create?
This matter is very urgent. Most of Paddle's current usability problems require this refactoring to be solved. Problems it can solve include:
doc/design/topology.md (outdated)
```cpp
func.setShapeInferer([](std::vector<Dims>& ins, std::vector<Dims>& outs) {
  outs[0] = {ins[0][0], 1};  // output dimension = batch_size * 1
});
```
Additionally there needs to be a function that checks whether the input and output shapes match.
That will be handled by Paddle itself. The user only registers which input shapes the function needs and which output shapes it can produce; the validity check is done by Paddle.
The attributes may also need to cover variable numbers of inputs and outputs, e.g. https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/function/ContextProjectionOp.cpp#L111 — the ContextProjection computation accepts either one or two input arguments.
doc/design/topology.md (outdated)

```cpp
  // implementation here.
}

BEGIN_REGISTER_FUNCTION(cos, cos_kernel)
```
Another possible way of achieving this is to make all of these part of Function (i.e., virtual members of the Function class). Besides computation, a Function would also implement the interfaces for specifying inputs & outputs, shape inference, shape checking, estimating flops, etc.
Need to think about which is the better way.
I think registering functions into the meta information is better than virtual members, for the following reasons:
- With virtual members, we force developers to use inheritance to implement their code.
  - If we want to bring some ops from Caffe or Torch into Paddle, we would have to write a new class for each.
  - With function registration, we can register other frameworks' functions into Paddle directly. For example:
    `#include "torch/some_ops.h"` `REGISTER_TORCH_OPS("torch_something", some_ops);`
- With virtual members, BaseFunction would carry many interfaces.
  - Subclasses won't necessarily implement all of them. E.g. not implementing the flops-estimating function only prevents benchmarking, not computation itself; with registration, we can simply not register "estimating flops" for some ops.
  - These interfaces are hard to get right in one shot. BaseFunction would very likely grow new interfaces during development, and changing a base class affects many subclasses.

The benefit of registration is that a kernel developer really only implements a computation kernel and registers it into Paddle in one gesture, reducing the developer's mental burden.
If these parameter and type descriptions live in the Function type, one REGISTER_FUNCTION suffices. If they don't live in Function, I suggest two REGISTER_ macros, one describing the parameter types and one the kernel function.
A typical scenario is the convolution function. Convolution has many implementations — sgemm, direct, fft, etc. — plus implementations based on different libraries such as CUDNN, MKL, and NNPACK. For clarity, these should be wrapped as kernel functions with different names, yet as convolution their input/output parameter types are identical.
So with the FunctionBase approach, one can implement a ConvFunctionBase containing the parameter inference/checking, and have MKLConvFunction, CUDNNConvFunction, FFTConvFunction, etc. inherit from it directly.
With the approach in the document, `BEGIN_REGISTER_FUNCTION(cos, cos_kernel)` needs to separate REGISTER(cos) from REGISTER(cos_kernel); otherwise, for something like convolution, every implementation would rewrite the same parameter information.
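The separation argued for here can be sketched in Python (illustrative names only, not a proposed API): register the op's shared parameter definition once, then register any number of kernels against it.

```python
# Hypothetical sketch: one shared op definition, many kernel registrations.
OP_DEFS = {}  # op name -> shared parameter/type definition
KERNELS = {}  # (op name, kernel name) -> callable implementation

def register_op(name, attr_spec):
    OP_DEFS[name] = attr_spec

def register_kernel(op_name, kernel_name, fn):
    # Kernels reuse the shared definition; it must be registered first.
    assert op_name in OP_DEFS
    KERNELS[(op_name, kernel_name)] = fn

# The conv parameter info is written once ...
register_op("conv", {"filter_size": int, "stride": int})
# ... and each implementation only registers its kernel.
register_kernel("conv", "naive", lambda x: "naive(%s)" % x)
register_kernel("conv", "fft", lambda x: "fft(%s)" % x)
```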
# Topology Overview
Topology is a concept in Paddle for representing neural networks. A neural network contains one topology, which describes how layers connect to each other, and many parameters. Other deep learning frameworks may call this concept a computation graph or a neural network configuration.

The topology is not only an API-level concept but also how we organize the computation code for each `Layer` or `Function` in Paddle. Paddle has to maintain a dictionary from `Layer Type` to layer implementation, e.g. from the string `mul` to the function `void tensor_multiply(Tensor& ins, Tensor& outs)`. The mechanism of how users manipulate a topology, and how Paddle maps a user topology to implementations of `Layer` and `Function`, is a fundamental problem for refactoring Paddle.
I think we should make Function, Layer, Projection the same thing in the new design.
Making Layer and Projection the same thing is simple, but Function doesn't contain a `backward` method.
Thank you for contributing code to PaddlePaddle. Since Paddle V1/V2 is no longer maintained and the related code has been removed from the develop branch, we are closing this PR. You are welcome to contribute to the latest Paddle version, Fluid.
This design discusses how we will refactor Paddle's network-configuration parsing.
For better formatting, see here.