
Merging recent commits to wip eigen branch #48

Merged · 10 commits · Apr 24, 2022
18 changes: 2 additions & 16 deletions .github/ISSUE_TEMPLATE/documentation_request.md
@@ -7,29 +7,15 @@ assignees: ''

---

## Report incorrect documentation
## Report incorrect or missing documentation

**Location of incorrect documentation**
**Location of documentation**
Provide links and line numbers if applicable.

**Describe the problems or issues found in the documentation**
A clear and concise description of what you found to be incorrect.

**Steps taken to verify documentation is incorrect**
List any steps you have taken:

**Suggested fix for documentation**
Detail proposed changes to fix the documentation if you have any.

---

## Report needed documentation

**Report needed documentation**
A clear and concise description of what documentation you believe is needed and why.

**Describe the documentation you'd like**
A clear and concise description of what you want to happen.

**Steps taken to search for needed documentation**
List any steps you have taken:
17 changes: 0 additions & 17 deletions .github/ISSUE_TEMPLATE/enhancement-request.md

This file was deleted.

24 changes: 16 additions & 8 deletions .github/ISSUE_TEMPLATE/feature_request.md
@@ -7,14 +7,22 @@ assignees: ''

---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I wish I could use cuGraph to do [...]

**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe the solution you'd like and any additional context**

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context, code examples, or references to existing implementations about the feature request here.



---
_For Developers below this line_

- [ ] Code passes CI
- [ ] Code uses Graph Primitives
- [ ] Code in C++/CUDA layer
- [ ] Code in C layer
- [ ] Code in pylibcugraph Python layer
- [ ] Code in cugraph Python layer
- [ ] Documentation
- [ ] Test cases at each layer (for MG tests, a note in the PR description indicating the new/existing MG tests were run and passed)

2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/question.md
@@ -7,4 +7,4 @@ assignees: ''

---

Ask a question that could be converted into a feature or enhancement
Ask a question
10 changes: 8 additions & 2 deletions cpp/CMakeLists.txt
@@ -135,16 +135,19 @@ rapids_cpm_init()
# following public header-only raft dependencies:
# * RMM
# * Thrust
# * libcu++
# * GTest/GMock
# * cuCollections
#
# The CMakeLists.txt for each of these projects is properly configured
# to generate a build and install export-set, so reimplementing finding or
# fetching those targets in cuGraph is redundant (at best) and potentially
# error-prone if something about those targets changes and our implementation
# lags behind.
###

# Include these before raft to keep RAFT from pulling them in itself.
include(cmake/thirdparty/get_libcudacxx.cmake)
include(cmake/thirdparty/get_cuco.cmake)

include(cmake/thirdparty/get_raft.cmake)
include(cmake/thirdparty/get_libcugraphops.cmake)

@@ -290,6 +293,7 @@ target_link_libraries(cugraph
PUBLIC
cugraph-ops::cugraph-ops++
raft::raft
cuco::cuco
PRIVATE
cugraph::cuHornet
NCCL::NCCL
@@ -405,6 +409,8 @@ target_link_libraries(cugraph_c
CUDA::curand
CUDA::cusolver
CUDA::cusparse
raft::raft
cuco::cuco
PRIVATE
cugraph::cugraph
)
34 changes: 34 additions & 0 deletions cpp/cmake/thirdparty/get_cuco.cmake
@@ -0,0 +1,34 @@
#=============================================================================
# Copyright (c) 2021-2022, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#=============================================================================

function(find_and_configure_cuco VERSION)

rapids_cpm_find(cuco ${VERSION}
GLOBAL_TARGETS cuco::cuco
BUILD_EXPORT_SET cugraph-exports
CPM_ARGS
EXCLUDE_FROM_ALL TRUE
GIT_REPOSITORY https://github.com/NVIDIA/cuCollections.git
GIT_TAG 0ca860b824f5dc22cf8a41f09912e62e11f07d82
OPTIONS "BUILD_TESTS OFF"
"BUILD_BENCHMARKS OFF"
"BUILD_EXAMPLES OFF"
)

endfunction()

# cuCollections doesn't have a version yet
find_and_configure_cuco(0.0)
24 changes: 24 additions & 0 deletions cpp/cmake/thirdparty/get_libcudacxx.cmake
@@ -0,0 +1,24 @@
# =============================================================================
# Copyright (c) 2020-2022, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
# in compliance with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under
# the License.
# =============================================================================

# This function finds libcudacxx and sets any additional necessary environment variables.
function(find_and_configure_libcudacxx)
include(${rapids-cmake-dir}/cpm/libcudacxx.cmake)

rapids_cpm_libcudacxx(BUILD_EXPORT_SET cugraph-exports
INSTALL_EXPORT_SET cugraph-exports)

endfunction()

find_and_configure_libcudacxx()
2 changes: 2 additions & 0 deletions cpp/cmake/thirdparty/get_raft.cmake
@@ -32,13 +32,15 @@ function(find_and_configure_raft)
BUILD_EXPORT_SET cugraph-exports
INSTALL_EXPORT_SET cugraph-exports
CPM_ARGS
EXCLUDE_FROM_ALL TRUE
GIT_REPOSITORY https://github.com/${PKG_FORK}/raft.git
GIT_TAG ${PKG_PINNED_TAG}
SOURCE_SUBDIR cpp
OPTIONS
"RAFT_COMPILE_LIBRARIES OFF"
"BUILD_TESTS OFF"
"BUILD_BENCH OFF"
"RAFT_ENABLE_cuco_DEPENDENCY OFF"
)

if(raft_ADDED)
@@ -503,7 +503,7 @@ void copy_v_transform_reduce_nbr(raft::handle_t const& handle,

auto execution_policy = handle.get_thrust_policy();
if constexpr (GraphViewType::is_multi_gpu) {
minor_tmp_buffer.fill(minor_init, handle.get_stream());
minor_tmp_buffer.fill(handle, minor_init);
} else {
thrust::fill(execution_policy,
vertex_value_output_first,
71 changes: 29 additions & 42 deletions cpp/include/cugraph/prims/count_if_v.cuh
@@ -27,6 +27,24 @@

namespace cugraph {

namespace detail {

template <typename vertex_t, typename VertexValueInputIterator, typename VertexOp>
struct count_if_call_v_op_t {
vertex_t local_vertex_partition_range_first{};
VertexValueInputIterator vertex_value_input_first{};
VertexOp v_op{};

__device__ bool operator()(vertex_t i)
{
return v_op(local_vertex_partition_range_first + i, *(vertex_value_input_first + i))
? vertex_t{1}
: vertex_t{0};
}
};

} // namespace detail

/**
* @brief Count the number of vertices that satisfy the given predicate.
*
@@ -42,8 +60,8 @@ namespace cugraph {
* @param vertex_value_input_first Iterator pointing to the vertex properties for the first
* (inclusive) vertex (assigned to this process in multi-GPU). `vertex_value_input_last` (exclusive)
* is deduced as @p vertex_value_input_first + @p graph_view.local_vertex_partition_range_size().
* @param v_op Unary operator takes *(@p vertex_value_input_first + i) (where i is [0, @p
* graph_view.local_vertex_partition_range_size())) and returns true if this vertex should be
* @param v_op Binary operator takes vertex ID and *(@p vertex_value_input_first + i) (where i is
* [0, @p graph_view.local_vertex_partition_range_size())) and returns true if this vertex should be
* included in the returned count.
* @return GraphViewType::vertex_type Number of times @p v_op returned true.
*/
@@ -53,47 +71,16 @@ typename GraphViewType::vertex_type count_if_v(raft::handle_t const& handle,
VertexValueInputIterator vertex_value_input_first,
VertexOp v_op)
{
auto count =
thrust::count_if(handle.get_thrust_policy(),
vertex_value_input_first,
vertex_value_input_first + graph_view.local_vertex_partition_range_size(),
v_op);
if (GraphViewType::is_multi_gpu) {
count =
host_scalar_allreduce(handle.get_comms(), count, raft::comms::op_t::SUM, handle.get_stream());
}
return count;
}
using vertex_t = typename GraphViewType::vertex_type;

/**
* @brief Count the number of vertices that satisfy the given predicate.
*
* This version (conceptually) iterates over only a subset of the graph vertices. This function
* actually works as thrust::count_if() on [@p input_first, @p input_last) (followed by
* inter-process reduction in multi-GPU). @p input_last - @p input_first (or the sum of @p
* input_last - @p input_first values in multi-GPU) should not overflow GraphViewType::vertex_type.
*
* @tparam GraphViewType Type of the passed non-owning graph object.
* @tparam InputIterator Type of the iterator for input values.
* @tparam VertexOp Type of the unary predicate operator.
* @param handle RAFT handle object to encapsulate resources (e.g. CUDA stream, communicator, and
* handles to various CUDA libraries) to run graph algorithms.
* @param graph_view Non-owning graph object.
* @param input_first Iterator pointing to the beginning (inclusive) of the values to be passed to
* @p v_op.
* @param input_last Iterator pointing to the end (exclusive) of the values to be passed to @p v_op.
* @param v_op Unary operator takes *(@p input_first + i) (where i is [0, @p input_last - @p
* input_first)) and returns true if this vertex should be included in the returned count.
* @return GraphViewType::vertex_type Number of times @p v_op returned true.
*/
template <typename GraphViewType, typename InputIterator, typename VertexOp>
typename GraphViewType::vertex_type count_if_v(raft::handle_t const& handle,
GraphViewType const& graph_view,
InputIterator input_first,
InputIterator input_last,
VertexOp v_op)
{
auto count = thrust::count_if(handle.get_thrust_policy(), input_first, input_last, v_op);
auto it = thrust::make_transform_iterator(
thrust::make_counting_iterator(vertex_t{0}),
detail::count_if_call_v_op_t<vertex_t, VertexValueInputIterator, VertexOp>{
graph_view.local_vertex_partition_range_first(), vertex_value_input_first, v_op});
auto count = thrust::reduce(handle.get_thrust_policy(),
it,
it + graph_view.local_vertex_partition_range_size(),
vertex_t{0});
if (GraphViewType::is_multi_gpu) {
count =
host_scalar_allreduce(handle.get_comms(), count, raft::comms::op_t::SUM, handle.get_stream());
22 changes: 14 additions & 8 deletions cpp/include/cugraph/prims/edge_partition_src_dst_property.cuh
@@ -254,10 +254,12 @@ class edge_partition_major_property_t {
edge_partition_major_value_start_offsets_ = std::nullopt;
}

void fill(T value, rmm::cuda_stream_view stream)
void fill(raft::handle_t const& handle, T value)
{
thrust::fill(
rmm::exec_policy(stream), value_data(), value_data() + size_dataframe_buffer(buffer_), value);
thrust::fill(handle.get_thrust_policy(),
value_data(),
value_data() + size_dataframe_buffer(buffer_),
value);
}

auto key_first() { return key_first_; }
@@ -267,6 +269,7 @@
(*edge_partition_key_offsets_).back())
: std::nullopt;
}

auto value_data() { return get_dataframe_buffer_begin(buffer_); }

auto device_view() const
@@ -351,14 +354,17 @@ class edge_partition_minor_property_t {
shrink_to_fit_dataframe_buffer(buffer_, handle.get_stream());
}

void fill(T value, rmm::cuda_stream_view stream)
void fill(raft::handle_t const& handle, T value)
{
thrust::fill(
rmm::exec_policy(stream), value_data(), value_data() + size_dataframe_buffer(buffer_), value);
thrust::fill(handle.get_thrust_policy(),
value_data(),
value_data() + size_dataframe_buffer(buffer_),
value);
}

auto key_first() { return key_first_; }
auto key_last() { return key_last_; }

auto value_data() { return get_dataframe_buffer_begin(buffer_); }

auto device_view() const
@@ -480,7 +486,7 @@ class edge_partition_src_property_t {

void clear(raft::handle_t const& handle) { property_.clear(handle); }

void fill(T value, rmm::cuda_stream_view stream) { property_.fill(value, stream); }
void fill(raft::handle_t const& handle, T value) { property_.fill(handle, value); }

auto key_first() { return property_.key_first(); }
auto key_last() { return property_.key_last(); }
@@ -561,7 +567,7 @@ class edge_partition_dst_property_t {

void clear(raft::handle_t const& handle) { property_.clear(handle); }

void fill(T value, rmm::cuda_stream_view stream) { property_.fill(value, stream); }
void fill(raft::handle_t const& handle, T value) { property_.fill(handle, value); }

auto key_first() { return property_.key_first(); }
auto key_last() { return property_.key_last(); }
4 changes: 2 additions & 2 deletions cpp/include/cugraph/prims/property_op_utils.cuh
@@ -191,7 +191,7 @@ struct property_op<thrust::tuple<Args...>, Op>

private:
template <typename T, std::size_t... Is>
__host__ __device__ constexpr auto sum_impl(T& t1, T& t2, std::index_sequence<Is...>)
__host__ __device__ constexpr auto binary_op_impl(T& t1, T& t2, std::index_sequence<Is...>)
{
return thrust::make_tuple((Op<typename thrust::tuple_element<Is, Type>::type>()(
thrust::get<Is>(t1), thrust::get<Is>(t2)))...);
@@ -200,7 +200,7 @@
public:
__host__ __device__ constexpr auto operator()(const Type& t1, const Type& t2)
{
return sum_impl(t1, t2, std::make_index_sequence<thrust::tuple_size<Type>::value>());
return binary_op_impl(t1, t2, std::make_index_sequence<thrust::tuple_size<Type>::value>());
}
};
