
In the test for each pass, the cost differs across nodes. #229

Closed
FrankRouter opened this issue Sep 5, 2017 · 1 comment

Comments

@FrankRouter

Logs

Tue Sep  5 15:16:57 2017[1,37]<stdout>:Test with Pass 2, Cost 4.149080, {}
Tue Sep  5 15:17:00 2017[1,34]<stdout>:Test with Pass 2, Cost 4.421460, {}
Tue Sep  5 15:17:02 2017[1,46]<stdout>:Test with Pass 2, Cost 4.200185, {}
Tue Sep  5 15:17:08 2017[1,39]<stdout>:Test with Pass 2, Cost 4.398165, {}
Tue Sep  5 15:17:09 2017[1,17]<stdout>:Test with Pass 2, Cost 4.678599, {}
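Each log line carries an mpirun-style rank prefix (e.g. [1,37]), which suggests every trainer node is printing the test cost it computed on its own portion of the test data. A minimal, self-contained sketch (assuming only the exact log format above) that parses these lines and quantifies the spread of per-node costs:

import re
import statistics

# Parse "Test with Pass N, Cost X" lines from the mpirun-style log above
# and report how much the per-node test costs differ.
LOG_LINES = """\
Tue Sep  5 15:16:57 2017[1,37]<stdout>:Test with Pass 2, Cost 4.149080, {}
Tue Sep  5 15:17:00 2017[1,34]<stdout>:Test with Pass 2, Cost 4.421460, {}
Tue Sep  5 15:17:02 2017[1,46]<stdout>:Test with Pass 2, Cost 4.200185, {}
Tue Sep  5 15:17:08 2017[1,39]<stdout>:Test with Pass 2, Cost 4.398165, {}
Tue Sep  5 15:17:09 2017[1,17]<stdout>:Test with Pass 2, Cost 4.678599, {}
"""

PATTERN = re.compile(r"\[1,(\d+)\]<stdout>:Test with Pass (\d+), Cost ([\d.]+)")

costs = {}
for line in LOG_LINES.splitlines():
    match = PATTERN.search(line)
    if match:
        node, _pass_id, cost = match.groups()
        costs[int(node)] = float(cost)

print("nodes:", sorted(costs))
print("mean cost: %.6f" % statistics.mean(costs.values()))
print("stdev:     %.6f" % statistics.stdev(costs.values()))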

Network configuration

def fc_net(dict_dim, class_dim=2):
    """
    DNN network definition.

    :param dict_dim: size of word dictionary
    :type dict_dim: int
    :param class_dim: number of instance classes
    :type class_dim: int
    """

    # input layers
    data = paddle.layer.data("word", paddle.data_type.sparse_binary_vector(dict_dim))
    label = paddle.layer.data("label", paddle.data_type.dense_vector(1))

    # hidden
    h_size = 128
    h = paddle.layer.fc(
        input=data,
        size=h_size,
        act=paddle.activation.Tanh())

    # output layer
    output = paddle.layer.fc(
        input=h,
        size=1,
        act=paddle.activation.Linear())

    cost = paddle.layer.smooth_l1_cost(input=output, label=label)

    return cost, output, label
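For reference, a minimal sketch of how this network might be instantiated under the PaddlePaddle v2 API; the dictionary size of 1000 is a placeholder, not a value from the issue:

import paddle.v2 as paddle

# Hypothetical usage: initialize Paddle and build the cost topology
# for a 1000-word dictionary.
paddle.init(use_gpu=False, trainer_count=1)
cost, output, label = fc_net(dict_dim=1000)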

Training parameters

    parameters = paddle.parameters.create(cost)
    # create optimizer
    adagrad_optimizer = paddle.optimizer.DecayedAdaGrad(
        learning_rate=0.01,
        regularization=paddle.optimizer.L2Regularization(rate=0.01),
        rho=0.95,
        epsilon=1e-6,
    )

    # create trainer
    trainer = paddle.trainer.SGD(
        cost=cost,
        parameters=parameters,
        update_equation=adagrad_optimizer
    )
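The "Test with Pass N, Cost ..." log lines above are the standard output of an end-of-pass test in PaddlePaddle v2. A sketch of that wiring, assuming train_reader and test_reader are defined elsewhere (both names and batch sizes here are placeholders):

    def event_handler(event):
        # At the end of each pass, evaluate on the test set and print
        # the cost. In a multi-node job each trainer runs this handler
        # itself, which is where the per-node log lines come from.
        if isinstance(event, paddle.event.EndPass):
            result = trainer.test(
                reader=paddle.batch(test_reader, batch_size=128))
            print("Test with Pass %d, Cost %f, %s" % (
                event.pass_id, result.cost, result.metrics))

    trainer.train(
        reader=paddle.batch(train_reader, batch_size=128),
        event_handler=event_handler,
        num_passes=5)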
@FrankRouter
Author

This issue does not belong in this repo.
