
[Hackathon No.9] Add Laplace API to Paddle #183

Closed
wants to merge 1 commit into from

Conversation

lwbmowgli

Add the Laplace distribution design proposal.

@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.


v_liwenbo06 does not appear to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.

@paddle-bot
Copy link

paddle-bot bot commented Jul 12, 2022

Your PR has been submitted. Thanks for your contribution!
Please check that its format and content are complete; for this, you can refer to the Template and Demo.

@paddle-bot

paddle-bot bot commented Jul 12, 2022

Sorry to inform you that, after our discussion, your PR does not yet meet the merging standard (see the Paddle community contribution guide and the Paddle API Design Standard Doc). We are closing this PR for now; you are welcome to submit a new one. Thank you for your contribution.


# II. Current State of Paddle

- Paddle currently has no `paddle.distribution.Laplace` API, but it does provide `paddle.distribution.Normal(loc, scale, name=None)`; the development of `paddle.distribution.Laplace` will mainly follow the code style of that API.
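
(For context, a minimal usage sketch of the existing `Normal` API referenced above; names and defaults follow the current public API:)

```python
import paddle

# Existing API that this RFC takes as its style reference.
normal = paddle.distribution.Normal(loc=0.0, scale=1.0)
samples = normal.sample([100])   # draw 100 samples
logp = normal.log_prob(samples)  # log-density evaluated at those samples
```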

For code style, you can refer to the distributions other than Normal, Categorical, and Uniform, such as Multinomial. Those three are legacy APIs whose overall code style and API signatures differ somewhat from the other APIs; I will add this to the task requirements later.

- `loc (int|float|list|numpy.ndarray|Tensor)` - Location parameter (mean) of the Laplace distribution. The data type can be `int`, `float`, `list`, `numpy.ndarray`, or `Tensor`.

- `scale (int|float|list|numpy.ndarray|Tensor)` - Scale parameter of the Laplace distribution. The data type can be `int`, `float`, `list`, `numpy.ndarray`, or `Tensor`.


Supporting scalar and Tensor data types is sufficient.


## Differences

- TensorFlow's API provides `KL_DIVERGENCE` (KL divergence) and `cross_entropy` (cross entropy); PyTorch does not.

PyTorch's KL divergence computations live in https://github.com/pytorch/pytorch/blob/master/torch/distributions/kl.py, and Paddle's design follows the same approach. Please at least register the KL divergence computation between two Laplace distributions in https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/distribution/kl.py. If you are interested, you may also implement KL divergence between Laplace and other distributions as a bonus.
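
(For illustration, a minimal sketch of such a registration, assuming the proposed `Laplace` class exposes `loc` and `scale` tensors and that the `register_kl` decorator in `paddle/distribution/kl.py` works as in PyTorch; the formula is the standard closed-form Laplace-Laplace KL, not code from this PR:)

```python
import paddle
from paddle.distribution.kl import register_kl
from paddle.distribution import Laplace  # the class proposed in this RFC

@register_kl(Laplace, Laplace)
def _kl_laplace_laplace(p, q):
    # Closed form:
    # KL(p || q) = log(s_q/s_p) + |mu_p - mu_q|/s_q
    #              + (s_p/s_q) * exp(-|mu_p - mu_q|/s_p) - 1
    ratio = p.scale / q.scale
    diff = paddle.abs(p.loc - q.loc)
    return -paddle.log(ratio) + diff / q.scale + ratio * paddle.exp(-diff / p.scale) - 1.0
```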

```python
paddle.distribution.Laplace(loc, scale, name=None)
```

Note: the parameter name `name` is used to stay consistent with the parameter names of other Paddle APIs.

`name` does not need to be a parameter.
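
(A minimal constructor sketch under this suggestion, dropping `name`; it assumes `paddle.distribution.Distribution.__init__` accepts a `batch_shape`, as in the non-legacy distributions, and omits validation and broadcasting:)

```python
import paddle

class Laplace(paddle.distribution.Distribution):
    """Sketch of the proposed Laplace distribution (constructor only)."""

    def __init__(self, loc, scale):
        # Accept Python scalars or tensors; paddle.to_tensor passes tensors through.
        self.loc = paddle.to_tensor(loc, dtype='float32')
        self.scale = paddle.to_tensor(scale, dtype='float32')
        super().__init__(batch_shape=self.loc.shape)
```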

- `log_prob`: log probability density;
- `entropy`: entropy;
- `cdf`: cumulative distribution function (CDF);
- `icdf`: inverse cumulative distribution function.

Please derive the computation of each function theoretically and present it as formulas or pseudocode. The Paddle/PyTorch/TF API systems are not fully consistent, so based on Paddle's API system, give the key implementation idea for each function together with the APIs it would use or pseudocode.
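
(For reference, the textbook closed forms for a Laplace distribution with location $\mu$ and scale $b$, which these four methods would implement:)

$$
\begin{aligned}
\log p(x) &= -\log(2b) - \frac{|x-\mu|}{b}\\
H &= 1 + \log(2b)\\
F(x) &= \frac{1}{2} + \frac{1}{2}\operatorname{sign}(x-\mu)\left(1 - e^{-|x-\mu|/b}\right)\\
F^{-1}(q) &= \mu - b\,\operatorname{sign}\!\left(q-\tfrac{1}{2}\right)\ln\!\left(1-2\left|q-\tfrac{1}{2}\right|\right)
\end{aligned}
$$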


The test cases to consider are as follows:

- Calling the API's various methods produces correct results.

Test cases should at least cover parameter validation, control flow, different boundary conditions, whether there are significant memory or performance problems, and the correctness of generated samples (think about how to verify that Laplace sampling results truly follow a Laplace distribution). Please add concrete test cases according to these requirements.
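
(One way to test sample correctness, as a hedged sketch: a Kolmogorov-Smirnov test of the drawn samples against scipy's reference Laplace CDF. `paddle.distribution.Laplace` here is the API proposed in this PR, not one that existed at the time:)

```python
import paddle
import scipy.stats

def test_laplace_sample_matches_distribution():
    loc, scale = 1.0, 2.0
    dist = paddle.distribution.Laplace(loc, scale)  # proposed API (assumption)
    samples = dist.sample([20000]).numpy().flatten()
    # Compare the empirical sample with the reference Laplace CDF.
    _, p_value = scipy.stats.kstest(
        samples, scipy.stats.laplace(loc=loc, scale=scale).cdf)
    # Genuine Laplace samples should almost never yield a tiny p-value.
    assert p_value > 0.01
```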

@Ligoml Ligoml changed the title from "add laplace rfcs" to "[Hackathon No.9] Add Laplace API to Paddle" Jul 19, 2022
@sunzhongkai588
Contributor

The proposal has been resubmitted as #190.
Closing this PR.

@Ligoml Ligoml closed this Jul 29, 2022