【Hackathon 5th No.20】Add Exponential and Gamma APIs to Paddle -part #57899

Merged
49 commits merged on Jan 4, 2024

Changes from all commits

Commits (49)
b91f93c
add exponential
MayYouBeProsperous Sep 30, 2023
32c4130
add gamma distribution
MayYouBeProsperous Oct 7, 2023
e813442
refine docs
MayYouBeProsperous Oct 9, 2023
98f2468
add kl_divergence and test
MayYouBeProsperous Oct 9, 2023
143609e
resolve conflicts
MayYouBeProsperous Oct 9, 2023
f66bd5b
resolve conflicts
MayYouBeProsperous Oct 9, 2023
c4648c1
fix bug
MayYouBeProsperous Oct 9, 2023
319a4fa
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
MayYouBeProsperous Oct 10, 2023
5dfc996
refine test
MayYouBeProsperous Oct 10, 2023
d815a11
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
MayYouBeProsperous Oct 12, 2023
2f4cb60
fix test timeout
MayYouBeProsperous Oct 12, 2023
aaac78d
refine code
MayYouBeProsperous Oct 17, 2023
e5629e9
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
MayYouBeProsperous Nov 5, 2023
9f2504b
add standard_gamma kernel
MayYouBeProsperous Nov 15, 2023
86d9e50
fix comments
MayYouBeProsperous Nov 15, 2023
82f22e1
fix tests
MayYouBeProsperous Nov 15, 2023
c983d80
fix tests
MayYouBeProsperous Nov 15, 2023
6b7762f
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
MayYouBeProsperous Nov 15, 2023
56f8e6f
fix comments
MayYouBeProsperous Nov 15, 2023
7efcb44
fix tests
MayYouBeProsperous Nov 16, 2023
79b1b44
fix gamma grad
MayYouBeProsperous Nov 17, 2023
7d37b9f
fix yaml
MayYouBeProsperous Nov 17, 2023
5f5c6b3
fix bugs
MayYouBeProsperous Nov 17, 2023
9ce0d26
fix tests
MayYouBeProsperous Nov 18, 2023
159db50
fix standard_gamma_grad
MayYouBeProsperous Dec 5, 2023
3ded65e
fix test
MayYouBeProsperous Dec 6, 2023
0cc68a4
fix test
MayYouBeProsperous Dec 6, 2023
26a6398
add cdf & icdf
MayYouBeProsperous Dec 6, 2023
7bb01c1
add cdf & icdf
MayYouBeProsperous Dec 6, 2023
15b0898
refine comments
MayYouBeProsperous Dec 6, 2023
a8c6a5c
fix
MayYouBeProsperous Dec 14, 2023
196067d
Merge branch 'develop' into dis
MayYouBeProsperous Dec 14, 2023
62680b8
fix
MayYouBeProsperous Dec 15, 2023
c42931f
fix head file
MayYouBeProsperous Dec 20, 2023
72266f9
fix
MayYouBeProsperous Dec 20, 2023
27f45ea
fix cuda op
MayYouBeProsperous Dec 20, 2023
c4cf2f9
fix
MayYouBeProsperous Dec 20, 2023
42198d7
fix
MayYouBeProsperous Dec 20, 2023
0f0e613
refine test
MayYouBeProsperous Dec 21, 2023
a5f6e37
fix test
MayYouBeProsperous Dec 21, 2023
4703cb4
refine comments
MayYouBeProsperous Dec 21, 2023
30c3fa2
fix comments
MayYouBeProsperous Dec 22, 2023
df74f48
fix
MayYouBeProsperous Dec 22, 2023
59d4968
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
MayYouBeProsperous Dec 23, 2023
7ac279d
fix
MayYouBeProsperous Dec 25, 2023
6c687c1
fix type check
MayYouBeProsperous Dec 26, 2023
e8c7dae
fix docs
MayYouBeProsperous Dec 27, 2023
a018a66
delete useless comments
MayYouBeProsperous Dec 28, 2023
075ef78
resolve conflict
MayYouBeProsperous Jan 2, 2024
1 change: 1 addition & 0 deletions paddle/phi/api/ext/tensor_compat.h
@@ -144,6 +144,7 @@ using experimental::split;
using experimental::sqrt;
using experimental::square;
using experimental::stack;
using experimental::standard_gamma;
using experimental::strided_slice;
using experimental::subtract;
using experimental::swish;
8 changes: 8 additions & 0 deletions paddle/phi/api/yaml/ops.yaml
@@ -2542,6 +2542,14 @@
backward : stack_grad
interfaces : paddle::dialect::InferSymbolicShapeInterface

- op : standard_gamma
Contributor
Why was the backward op removed again? Given this implementation, the input tensor x should have a gradient, shouldn't it?

Contributor Author
We are not implementing the backward op for now.

args : (Tensor x)
output : Tensor(out)
infer_meta :
func : UnchangedInferMeta
kernel :
func : standard_gamma

- op : stanh
args : (Tensor x, float scale_a=0.67f, float scale_b=1.7159f)
output : Tensor(out)
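Note: as merged, the standard_gamma entry carries no "backward :" field (unlike stack above, which points at stack_grad), so no gradient flows into x, consistent with the reviewer exchange. For orientation, a minimal sketch of calling the new op from custom-op C++ code; paddle::standard_gamma is the symbol re-exported through the tensor_compat.h change above, while the wrapper function here is hypothetical:

#include "paddle/extension.h"

// Each output element is an independent Gamma(x_i, 1) draw; the output
// shape and dtype follow x, matching UnchangedInferMeta in the yaml entry.
paddle::Tensor DrawStandardGamma(const paddle::Tensor& concentration) {
  return paddle::standard_gamma(concentration);
}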
82 changes: 0 additions & 82 deletions paddle/phi/kernels/cpu/dirichlet_kernel.cc
@@ -13,90 +13,8 @@
// limitations under the License.

#include "paddle/phi/backends/cpu/cpu_context.h"
#include "paddle/phi/core/dense_tensor.h"
#include "paddle/phi/core/kernel_registry.h"
#include "paddle/phi/kernels/cpu/elementwise.h"
#include "paddle/phi/kernels/funcs/elementwise_functor.h"
#include "paddle/phi/kernels/funcs/for_range.h"
#include "paddle/phi/kernels/funcs/reduce_function.h"
#include "paddle/phi/kernels/funcs/reduce_functor.h"
#include "paddle/phi/kernels/impl/dirichlet_kernel_impl.h"

namespace phi {

template <typename T, typename UniformSamplerT, typename NormalSamplerT>
struct GammaCPUFunctor {
GammaCPUFunctor(const T* alpha,
T* gamma,
BaseSampler<T, UniformSamplerT> uniform,
BaseSampler<T, NormalSamplerT> normal)
: alpha_(alpha), gamma_(gamma), uniform_(uniform), normal_(normal) {}

HOST void operator()(int64_t index) {
auto sample = sample_gamma<T, T, UniformSamplerT, NormalSamplerT>(
alpha_[index], uniform_, normal_);
gamma_[index] = std::max(std::numeric_limits<T>::min(), sample);
}

const T* alpha_;
T* gamma_;
BaseSampler<T, UniformSamplerT> uniform_;
BaseSampler<T, NormalSamplerT> normal_;
};

template <typename T>
struct DirichletSampler<CPUContext, T> {
void operator()(const CPUContext& dev_ctx,
const DenseTensor& alpha,
DenseTensor* out) {
auto generator = dev_ctx.GetGenerator()->GetCPUEngine();

auto uniform = [&generator]() -> T {
std::uniform_real_distribution<T> u(0.0, 1.0);
return u(*generator);
};
BaseSampler<T, decltype(uniform)> standard_uniform(uniform);

auto normal = [&generator]() {
std::normal_distribution<T> n(0.0, 1.0);
return n(*generator);
};
BaseSampler<T, decltype(normal)> standard_normal(normal);

// sample from K gamma distributions, where K=alpha.numel()
DenseTensor gamma_samples;
gamma_samples.Resize(alpha.dims());
dev_ctx.template Alloc<T>(&gamma_samples);

GammaCPUFunctor<T, decltype(uniform), decltype(normal)> gamma_functor(
alpha.data<T>(),
gamma_samples.data<T>(),
standard_uniform,
standard_normal);
funcs::ForRange<CPUContext> for_range(dev_ctx, alpha.numel());
for_range(gamma_functor);

// normalize them into a simplex, along the last axis
DenseTensor gamma_sum;
auto new_shape = gamma_samples.dims();
new_shape[new_shape.size() - 1] = 1;
gamma_sum.Resize(new_shape);
dev_ctx.template Alloc<T>(&gamma_sum);

funcs::ReduceKernelImpl<CPUContext, T, T, funcs::SumFunctor>(
dev_ctx,
gamma_samples,
&gamma_sum,
{new_shape.size() - 1},
true,
false);

funcs::ElementwiseCompute<funcs::DivideFunctor<T>, T>(
dev_ctx, gamma_samples, gamma_sum, funcs::DivideFunctor<T>(), out);
}
};

} // namespace phi

PD_REGISTER_KERNEL(
dirichlet, CPU, ALL_LAYOUT, phi::Dirichletkernel, float, double) {}
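Both the removed GammaCPUFunctor and the new StandardGammaKernel that supersedes it delegate the per-element draw to sample_gamma from dirichlet_kernel_impl.h. For readers unfamiliar with that helper, here is a self-contained sketch of the Marsaglia-Tsang squeeze method, the rejection sampler such a routine typically implements; this is an illustration under that assumption, not the literal Paddle code:

#include <cmath>
#include <random>

// Marsaglia-Tsang (2000) sampler for Gamma(alpha, 1).
template <typename T>
T SampleStandardGamma(T alpha, std::mt19937_64& rng) {
  std::uniform_real_distribution<T> uniform(T(0), T(1));
  std::normal_distribution<T> normal(T(0), T(1));

  // For alpha < 1, draw from Gamma(alpha + 1) and scale by U^(1/alpha),
  // so the rejection loop below only ever sees alpha >= 1.
  T boost = T(1);
  if (alpha < T(1)) {
    boost = std::pow(uniform(rng), T(1) / alpha);
    alpha += T(1);
  }

  const T d = alpha - T(1) / T(3);
  const T c = T(1) / std::sqrt(T(9) * d);
  for (;;) {
    T x, v;
    do {
      x = normal(rng);
      v = T(1) + c * x;
    } while (v <= T(0));
    v = v * v * v;
    const T u = uniform(rng);
    // Cheap "squeeze" acceptance test first, exact log test second.
    if (u < T(1) - T(0.0331) * x * x * x * x ||
        std::log(u) < T(0.5) * x * x + d * (T(1) - v + std::log(v))) {
      return boost * d * v;
    }
  }
}

The std::max(std::numeric_limits<T>::min(), sample) clamp in the removed functor then guards against a zero draw, which matters on the Dirichlet path because the samples are divided by their per-row sum.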
20 changes: 20 additions & 0 deletions paddle/phi/kernels/cpu/standard_gamma_kernel.cc
@@ -0,0 +1,20 @@
// Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "paddle/phi/backends/cpu/cpu_context.h"
#include "paddle/phi/core/kernel_registry.h"
#include "paddle/phi/kernels/impl/standard_gamma_kernel_impl.h"

PD_REGISTER_KERNEL(
standard_gamma, CPU, ALL_LAYOUT, phi::StandardGammaKernel, float, double) {}
97 changes: 0 additions & 97 deletions paddle/phi/kernels/gpu/dirichlet_kernel.cu
@@ -1,5 +1,3 @@


// Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
@@ -16,102 +14,7 @@

#include "paddle/phi/backends/gpu/gpu_context.h"
#include "paddle/phi/core/kernel_registry.h"
#include "paddle/phi/kernels/elementwise_divide_kernel.h"
#include "paddle/phi/kernels/funcs/broadcast_function.h"
#include "paddle/phi/kernels/funcs/elementwise_functor.h"
#include "paddle/phi/kernels/funcs/for_range.h"
#include "paddle/phi/kernels/funcs/reduce_function.h"
#include "paddle/phi/kernels/funcs/reduce_functor.h"
#include "paddle/phi/kernels/impl/dirichlet_kernel_impl.h"
#include "paddle/phi/kernels/reduce_sum_kernel.h"

#ifdef PADDLE_WITH_CUDA
#include <curand_kernel.h>
#endif
#ifdef PADDLE_WITH_HIP
#include <hiprand_kernel.h>
#endif

#if defined(PADDLE_WITH_CUDA)
using COMPAT_RANDSTATEPHILOX4_32_10_T = curandStatePhilox4_32_10_t;
#define COMPAT_RAND_INIT curand_init
#define COMPAT_RAND_UNIFORM curand_uniform
#define COMPAT_RAND_NORMAL curand_normal
#elif defined(PADDLE_WITH_HIP)
using COMPAT_RANDSTATEPHILOX4_32_10_T = hiprandStatePhilox4_32_10_t;
#define COMPAT_RAND_INIT hiprand_init
#define COMPAT_RAND_UNIFORM hiprand_uniform
#define COMPAT_RAND_NORMAL hiprand_normal
#endif

namespace phi {
template <typename T>
struct GammaCUDAFunctor {
GammaCUDAFunctor(const T* alpha, T* gamma, uint64_t seed, uint64_t offset)
: alpha_(alpha), gamma_(gamma), seed_(seed), offset_(offset) {}

DEVICE void operator()(int64_t index) {
// curand initialization
COMPAT_RANDSTATEPHILOX4_32_10_T state;
COMPAT_RAND_INIT(
/*seed=*/seed_, /*subsequence=*/index, /*offset=*/offset_, &state);

// sample
auto uniform_lambda = [&state]() { return COMPAT_RAND_UNIFORM(&state); };
BaseSampler<T, decltype(uniform_lambda)> standard_uniform(uniform_lambda);
auto normal_lambda = [&state]() { return COMPAT_RAND_NORMAL(&state); };
BaseSampler<T, decltype(normal_lambda)> standard_normal(normal_lambda);

auto sample =
sample_gamma<T, T, decltype(uniform_lambda), decltype(normal_lambda)>(
alpha_[index], standard_uniform, standard_normal);
gamma_[index] = std::max(std::numeric_limits<T>::min(), sample);
}

const T* alpha_;
T* gamma_;
const uint64_t seed_;
const uint64_t offset_;
};

template <typename T>
struct DirichletSampler<GPUContext, T> {
void operator()(const GPUContext& dev_ctx,
const DenseTensor& alpha,
DenseTensor* out) {
auto p_gen = dev_ctx.GetGenerator();
auto seed_and_offset = p_gen->IncrementOffset(10); // hard-coded offset
auto seed = seed_and_offset.first;
auto offset = seed_and_offset.second;

// sample from K gamma distributions, where K=alpha.numel()
DenseTensor gamma_samples;
gamma_samples.Resize(alpha.dims());
dev_ctx.template Alloc<T>(&gamma_samples);

GammaCUDAFunctor<T> gamma_functor(
alpha.data<T>(), gamma_samples.data<T>(), seed, offset);
funcs::ForRange<GPUContext> for_range(dev_ctx, out->numel());
for_range(gamma_functor);

// normalize them into a simplex, along the last axis
DenseTensor gamma_sum;
auto new_shape = gamma_samples.dims();
new_shape[new_shape.size() - 1] = 1;
gamma_sum.Resize(new_shape);
dev_ctx.template Alloc<T>(&gamma_sum);

phi::SumRawKernel<T, GPUContext>(dev_ctx,
gamma_samples,
{new_shape.size() - 1},
true,
false,
gamma_sum.dtype(),
&gamma_sum);
phi::DivideKernel<T, GPUContext>(dev_ctx, gamma_samples, gamma_sum, out);
}
};
} // namespace phi

PD_REGISTER_KERNEL(dirichlet,
GPU,
…
27 changes: 27 additions & 0 deletions paddle/phi/kernels/gpu/standard_gamma_kernel.cu
@@ -0,0 +1,27 @@
/* Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */

#include "paddle/phi/backends/gpu/gpu_context.h"
#include "paddle/phi/backends/gpu/gpu_launch_config.h"
#include "paddle/phi/core/kernel_registry.h"
#include "paddle/phi/kernels/impl/standard_gamma_kernel_impl.h"

PD_REGISTER_KERNEL(standard_gamma,
GPU,
ALL_LAYOUT,
phi::StandardGammaKernel,
float,
double,
phi::dtype::float16,
phi::dtype::bfloat16) {}
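One registration note: the CPU kernel above covers only float and double, while this GPU kernel additionally registers phi::dtype::float16 and phi::dtype::bfloat16, so half-precision draws are GPU-only as merged. A hedged sketch of guarding for that from custom-op C++ code (hypothetical helper, not part of this PR):

#include "paddle/extension.h"

paddle::Tensor StandardGammaChecked(const paddle::Tensor& x) {
  const bool half_like = x.dtype() == paddle::DataType::FLOAT16 ||
                         x.dtype() == paddle::DataType::BFLOAT16;
  // Per the two PD_REGISTER_KERNEL lists, float16/bfloat16 only have a
  // GPU kernel; a CPU call with those dtypes finds no kernel to run.
  PD_CHECK(!half_like || x.is_gpu(),
           "standard_gamma: float16/bfloat16 require a GPU tensor.");
  return paddle::standard_gamma(x);
}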