Convert some docstrings from char* to char[] (pytorch#13062)
Summary:
Pull Request resolved: pytorch#13062

Gold (the linker) isn't able to garbage-collect unreferenced string constants,
but converting these to arrays puts each one in its own data section, so the
linker can drop the unused ones and (Android) binary size shrinks as a result.

I'm told even in server builds, this reduces binary size by a few dozen bytes
and speeds up startup by a few hundred ns. :-P
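To make the pattern concrete, here is a rough before/after sketch of what the diff below does. The symbol name kExampleDoc is hypothetical, and the comments assume the usual mobile setup of compiling with -fdata-sections and linking with --gc-sections, which this commit's rationale presumes.

// Before: the named symbol is only a pointer. The string data it points to is
// an anonymous literal, typically emitted into a merged read-only string
// section, so --gc-sections cannot drop that one string even when nothing
// uses kExampleDoc; in PIC builds the pointer also needs a load-time relocation.
const char* kExampleDoc = R"DOC(
Some long operator documentation...
)DOC";

// After: the named symbol *is* the character array. With -fdata-sections the
// data gets its own section (e.g. .rodata.kExampleDoc), which the linker can
// discard entirely when the symbol is unreferenced, and no relocation is needed.
const char kExampleDoc[] = R"DOC(
Some long operator documentation...
)DOC";

Because the array decays to a const char* at every use site (e.g. when passed to an operator schema's SetDoc), callers don't change, which is why the hunks below touch only the declarations; the startup savings mentioned above would come from the relocations that are no longer needed.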

Reviewed By: Yangqing

Differential Revision: D10510808

fbshipit-source-id: 247ba9574e7a9b6a8204d33052994b08c401c197
dreiss authored and facebook-github-bot committed Oct 24, 2018
1 parent 97b6a25 commit 0f5cee2
Showing 3 changed files with 17 additions and 17 deletions.
2 changes: 1 addition & 1 deletion caffe2/operators/conv_op.cc
@@ -4,7 +4,7 @@

namespace caffe2 {

-const char* kConvDoc = R"DOC(
+const char kConvDoc[] = R"DOC(
The Conv2D operator computes a 2D convolution operation over an input blob $(X)$, with a filter blob $(filter)$ and a bias blob $(bias)$, and outputs a single output blob $(Y)$. Although there are several options for order, the convention is that the input $(X)$ is a blob of shape $(N,C_{in},H_{in},W_{in})$ and the output $(Y)$ is a blob of shape $(N,C_{out},H_{out},W_{out})$. Here, $N$ is the batch size, $C$ is the number of channels, $H$ is the spatial height, and $W$ is the spatial width. For example, if your input data was a batch of five, 100x120pixel RGB images, $X$ would have shape $(5,3,120,100)$.
The $filter$ input blob may contain multiple filters and has shape $(M, C_{in}, K_H, K_W)$. Here, $M$ is the number of individual filters contained in the blob, $C_{in}$ is the number of channels of each filter (by convention in 2D convolution it is the same as the number of channels in the input), $K_H$ is the spatial height of the kernel, and $K_W$ is the spatial width of the kernel. The $bias$ blob is a vector of length $M$, where there is one bias for each filter in the $filter$ blob.
28 changes: 14 additions & 14 deletions caffe2/operators/elementwise_ops_schema.cc
@@ -7,7 +7,7 @@ namespace caffe2 {

namespace {

-const char* kBroadcastDoc = R"DOC(
+const char kBroadcastDoc[] = R"DOC(
If necessary the right-hand-side argument will be broadcasted to match the
shape of left-hand-side argument. When broadcasting is specified, the second
tensor can either be of size 1 (a scalar value), or having its shape as a
@@ -31,7 +31,7 @@ Github Links:
)DOC";

-const char* kAddExample = R"DOC(
+const char kAddExample[] = R"DOC(
<details>
<summary> <b>Example</b> </summary>
@@ -77,7 +77,7 @@ print("C:", workspace.FetchBlob("C"))
)DOC";

-const char* kSubExample = R"DOC(
+const char kSubExample[] = R"DOC(
<details>
<summary> <b>Example</b> </summary>
@@ -123,7 +123,7 @@ print("C:", workspace.FetchBlob("C"))
)DOC";

-const char* kMulExample = R"DOC(
+const char kMulExample[] = R"DOC(
<details>
<summary> <b>Example</b> </summary>
@@ -169,7 +169,7 @@ print("C:", workspace.FetchBlob("C"))
)DOC";

-const char* kDivExample = R"DOC(
+const char kDivExample[] = R"DOC(
<details>
<summary> <b>Example</b> </summary>
@@ -357,7 +357,7 @@ For example, the following tensor shapes are supported:
"If broadcasting is disabled it should be of the same size.")
.Output(0, "C", "Result, has same dimensions and type as B");

-const char* kLTExample = R"DOC(
+const char kLTExample[] = R"DOC(
<details>
<summary> <b>Example</b> </summary>
@@ -396,7 +396,7 @@ C: [False False True False False True]
</details>
)DOC";

-const char* kLEExample = R"DOC(
+const char kLEExample[] = R"DOC(
<details>
<summary> <b>Example</b> </summary>
@@ -435,7 +435,7 @@ C: [ True False True True True True]
</details>
)DOC";

-const char* kGTExample = R"DOC(
+const char kGTExample[] = R"DOC(
<details>
<summary> <b>Example</b> </summary>
@@ -474,7 +474,7 @@ C: [False True False False False False]
</details>
)DOC";

-const char* kGEExample = R"DOC(
+const char kGEExample[] = R"DOC(
<details>
<summary> <b>Example</b> </summary>
@@ -513,7 +513,7 @@ C: [ True True False True True False]
</details>
)DOC";

-const char* kEQExample = R"DOC(
+const char kEQExample[] = R"DOC(
<details>
<summary> <b>Example</b> </summary>
@@ -550,7 +550,7 @@ C: [ True False False True True False]
</details>
)DOC";

-const char* kNEExample = R"DOC(
+const char kNEExample[] = R"DOC(
<details>
<summary> <b>Example</b> </summary>
@@ -650,7 +650,7 @@ CAFFE2_SCHEMA_FOR_BINARY_COMPARISON_OP(LE, "<=", "less or equal than", kLEExampl
CAFFE2_SCHEMA_FOR_BINARY_COMPARISON_OP(GT, ">", "greater than", kGTExample);
CAFFE2_SCHEMA_FOR_BINARY_COMPARISON_OP(GE, ">=", "greater or equal than", kGEExample);

-const char* kAndExample = R"DOC(
+const char kAndExample[] = R"DOC(
<details>
<summary> <b>Example</b> </summary>
@@ -698,7 +698,7 @@ print("C:", workspace.FetchBlob("C"))
</details>
)DOC";

-const char* kOrExample = R"DOC(
+const char kOrExample[] = R"DOC(
<details>
<summary> <b>Example</b> </summary>
@@ -746,7 +746,7 @@ print("C:", workspace.FetchBlob("C"))
</details>
)DOC";

-const char* kXorExample = R"DOC(
+const char kXorExample[] = R"DOC(
<details>
<summary> <b>Example</b> </summary>
4 changes: 2 additions & 2 deletions caffe2/operators/pool_op.cc
@@ -728,7 +728,7 @@ bool PoolOp<T, Context, PoolType>::RunOnDeviceWithOrderNHWC() {
}
return true;
}
-const char* kAveragePoolDoc = R"DOC(
+const char kAveragePoolDoc[] = R"DOC(
consumes an input blob and applies average pooling across the the blob according
to kernel sizes, stride sizes, pad lengths and dilation. Average pooling consists
of taking the average value of a subset of the input tensor according to the kernel
@@ -797,7 +797,7 @@ print("Y:\n", workspace.FetchBlob("Y"))
)DOC";

-const char* kMaxPoolDoc = R"DOC(
+const char kMaxPoolDoc[] = R"DOC(
consumes an input blob and applies max pooling across the the blob according to
kernel sizes, stride sizes, pad lengths and dilation. Max pooling consists of
taking the maximum value of a subset of the input tensor according to the kernel
