
support heterogeneous fanout type #4608

Open · wants to merge 123 commits into base: branch-24.12

Conversation

@jnke2016 (Contributor) commented Aug 13, 2024

closes #4589
closes #4591

@ChuckHastings (Collaborator) left a comment

Some thoughts on changing the API a bit.

raft::random::RngState& rng_state,
bool return_hops,
bool with_replacement = true,
prior_sources_behavior_t prior_sources_behavior = prior_sources_behavior_t::DEFAULT,
bool dedupe_sources = false,
bool do_expensive_check = false);

#if 0
/* FIXME:
There are two options to support heterogeneous fanout
Collaborator

Here's another option to explore.

Create a new function called neighbor_sample. Create it off of the biased sampling API, but with the following changes:

  1. the biases become optional instead of required. Then it can do either uniform or biased in the same call just by whether the biases are included or not
  2. the fanout and heterogeneous fanout parameters as you have defined them. Or we might explore using std::variant, where the parameter would take either a host_span or a tuple of host_spans and the function would make the right choice internally
  3. Move the rng_state parameter to be right after the handle (before the graph_view). This feels like a better standard place for the parameter.

We can then mark the existing uniform_neighbor_sample and biased_neighbor_sample as deprecated. When we implement, the internal C++ implementation can just call the new neighbor_sample with the parameters properly configured. This makes it a non-breaking change (eventually we'll drop the old functions) while still increasing code reuse.
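The consolidation proposed above can be sketched as a single entry point that dispatches on which arguments are present. This is a hypothetical Python sketch of the dispatch logic only (the real API is C++, and `neighbor_sample`'s final signature was still under discussion); all names and the return value are illustrative:

```python
from collections.abc import Mapping
from typing import Optional, Sequence

def neighbor_sample(graph, start_vertices, rng_state, fan_out,
                    biases: Optional[Sequence[float]] = None):
    """Unified sampling entry point (hypothetical sketch).

    - uniform vs. biased is chosen by whether `biases` is supplied
    - homogeneous vs. heterogeneous is chosen by the shape of `fan_out`
      (a flat per-hop sequence vs. a mapping of edge type -> per-hop fanouts)
    """
    mode = "heterogeneous" if isinstance(fan_out, Mapping) else "homogeneous"
    kind = "biased" if biases is not None else "uniform"
    # a real implementation would perform the sampling here
    return mode, kind

# The deprecated wrappers would then simply forward to the new function:
def uniform_neighbor_sample(graph, start_vertices, rng_state, fan_out):
    return neighbor_sample(graph, start_vertices, rng_state, fan_out, biases=None)
```

The point of the sketch is that the old entry points become thin forwarding shims, so deprecating them later is a non-breaking change.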

Thoughts @seunghwak ?

@seunghwak (Contributor) Aug 14, 2024

  1. the biases become optional instead of required. Then it can do either uniform or biased in the same call just by whether the biases are included or not

=> In this case, we may update the existing non-heterogeneous fanout type sampling functions as well, i.e., combine the uniform & biased sampling functions. Not sure about the optimal balancing point between creating too many functions vs. creating a function with too many input parameters.

Contributor

Yeah... I guess we should avoid creating an overly busy function (one function handling all the different types of sampling based on the input arguments, with excessive use of std::variant & std::optional), but we should also avoid creating too many functions... Not sure what the optimal balancing point is...

Contributor

In theory, adding new parameters exponentially increases code complexity (to handle all possible combinations of optional parameters), so it may be better to create separate functions. If supporting an additional optional parameter requires only a minor change in the API and implementation, we may create one generic function (or we may create one complex function in the detail namespace that handles all the different options, plus multiple public functions calling it, if this helps reduce code duplication).

@@ -368,6 +410,7 @@ cugraph_error_code_t cugraph_uniform_neighbor_sample(
const cugraph_type_erased_device_array_view_t* label_to_comm_rank,
const cugraph_type_erased_device_array_view_t* label_offsets,
const cugraph_type_erased_host_array_view_t* fan_out,
const cugraph_sample_heterogeneous_fanout_t* heterogeneous_fanout,
Collaborator

Perhaps we take the same approach here. Create a new C API function called neighbor_sample, following the biased function definition. Add this parameter. Deprecate the other functions. In the implementation we can just check for nullptr (NULL).

@@ -150,7 +173,7 @@ neighbor_sample_impl(

std::vector<size_t> level_sizes{};
int32_t hop{0};
for (auto&& k_level : fan_out) {
for (auto&& k_level : (*fan_out)) {
Collaborator

This isn't actually sufficient yet... but I'm more worried about the API right now.

This loop will need, in the case of heterogeneous sampling, to have 2 levels of for loop. An outer loop iterating by hop and an inner loop iterating by type.

I'd be inclined to add a setup loop that iterates over the types and generates the masks - and perhaps identifies the maximum number of hops to drive the outer loop. You'll need to get k_level from the right type/hop combination... so this for construct won't work at all, it will need to look different.
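The restructured loop described above (an outer loop over hops, an inner loop over edge types, with `k_level` looked up per type/hop combination) can be sketched as follows. This is a hypothetical Python sketch of the control flow only; the real code is C++, and `sample_fn` stands in for the per-type masked sampling step:

```python
# Sketch of the two-level loop: outer loop over hops, inner loop over edge
# types. The dense fanout layout assumed here is
# fan_out[hop * num_edge_types + etype].
def sample_all_hops(fan_out, num_edge_types, sample_fn):
    num_hops = len(fan_out) // num_edge_types
    results = []
    for hop in range(num_hops):
        for etype in range(num_edge_types):
            k = fan_out[hop * num_edge_types + etype]
            # a real implementation would apply the precomputed edge mask
            # for `etype` and sample k neighbors per frontier vertex here
            results.append(sample_fn(hop, etype, k))
    return results
```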

@jnke2016 (Contributor, Author) Aug 21, 2024

Right, I only added it so it would compile. I will revisit this approach once we lock down the API's interface. It only supports the non-heterogeneous type for now.

@@ -192,7 +215,7 @@ neighbor_sample_impl(
if (labels) { (*level_result_label_vectors).push_back(std::move(*labels)); }

++hop;
if (hop < fan_out.size()) {
if (hop < (*fan_out).size()) {
Collaborator

fan_out size will (potentially) vary by type.

Contributor (Author)

Right, I only added it so it would compile. I will revisit this approach once we lock down the API's interface. It only supports the non-heterogeneous type for now.

# FIXME: Add expensive check to ensure all dict values are lists
# Convert to a tuple of sequences (edge type size and fanout values)
edge_type_size = [len(s) for s in fanout_vals.values()]
Collaborator

Does this iterate over the edge types in the dictionary in order? We need to make sure that this is constructed with edge type 0 first, followed by edge type 1, etc.

Contributor (Author)

Right. I converted the heterogeneous fanout type to a sorted ordered dictionary.
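The ordering fix described above amounts to sorting the dict by edge-type key before flattening, so edge type 0 always comes first. A minimal sketch (with made-up fanout values):

```python
# Flatten the fanout dict in ascending edge-type order so edge type 0 comes
# first, followed by edge type 1, etc., regardless of insertion order.
fanout_vals = {2: [4, 4], 0: [10, 5], 1: [3, 3]}   # arbitrary insertion order
ordered = dict(sorted(fanout_vals.items()))        # keys 0, 1, 2
edge_type_size = [len(v) for v in ordered.values()]
flat = [k for v in ordered.values() for k in v]
```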

edge_type_size = [len(s) for s in fanout_vals.values()]
edge_type_fanout_vals = list(chain.from_iterable(fanout_vals.values()))
fanout_vals = (
Collaborator

Per my earlier suggestions, I think we want this to be a CSR structure, so converting from a list of sizes to a list of offsets is perhaps best done here.
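The sizes-to-offsets conversion suggested above is a one-liner with itertools.accumulate: a CSR-style offsets array has one more entry than the sizes list and starts at 0.

```python
from itertools import accumulate

# Convert per-edge-type fanout counts into CSR-style offsets.
edge_type_size = [2, 3, 2]                        # fanout entries per edge type
offsets = [0] + list(accumulate(edge_type_size))  # one extra leading 0
```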

Collaborator

We changed this back to a dense structure... so I think this code isn't right.

Collaborator

This still seems wrong to me. If you want to support fanout_vals as a dictionary I think we need to convert it to a dense array to get the right values. Do you have a python test for this path that we can verify?
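The dict-to-dense conversion asked for above could look like the following hypothetical sketch. It assumes the hop-major layout discussed elsewhere in this thread (`dense[hop * num_edge_types + etype]`) and pads edge types that list fewer hops than the maximum with 0 (sample nothing at that hop); both assumptions would need to match the C++ side:

```python
# Convert {edge_type: [per-hop fanouts]} into a dense hop-major array.
def fanout_dict_to_dense(fanout_vals):
    num_edge_types = max(fanout_vals) + 1
    num_hops = max(len(v) for v in fanout_vals.values())
    dense = [0] * (num_hops * num_edge_types)
    for etype, per_hop in fanout_vals.items():
        for hop, k in enumerate(per_hop):
            dense[hop * num_edge_types + etype] = k
    return dense
```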

@@ -314,8 +316,21 @@ def uniform_neighbor_sample(
fanout_vals = fanout_vals.get().astype("int32")
elif isinstance(fanout_vals, cudf.Series):
fanout_vals = fanout_vals.values_host.astype("int32")
elif isinstance(fanout_vals, dict):
Collaborator

Same comments as above

@github-actions github-actions bot added the CMake label Aug 20, 2024
@ChuckHastings (Collaborator) left a comment

Small change...

@seunghwak (Contributor) left a comment

Review part 1

raft::handle_t const& handle,
rmm::device_uvector<vertex_t>&& vertices,
rmm::device_uvector<value0_t>&& values_0,
rmm::device_uvector<value1_t>&& values_1);
Contributor

I am not sure about this function.

We have shuffle_values that work for arbitrary value types and we have additional functions for commonly used type combinations defined in this header file (and explicitly instantiated for reuse in multiple places).

This function works just for value0_t = float or double and value1_t = int32_t.

I have few suggestions.

  1. Just use shuffle_values if you don't think you will call this function in multiple places for the same type combination.
  2. More explicit about what values_0 and values1_1 are. For example, we are using weights, edge_id, and edge_weights for shuffle functions for edges. Just seeing this declaration, callers might be misled that this function will work for arbitrary value0_t and value1_t.
  3. At the very minimum, we need to document what type combinations are supported.

Collaborator

I believe this function was added so that we didn't need to convert a big .cpp file to a .cu file just to shuffle values.

My suggestion would be to use option 2. If we later find other uses for this function we can revisit this.

Contributor (Author)

This function is no longer used anywhere, though, so I can remove it for now. The only reason I added it was to shuffle the triplet (vertices, labels, rank), but after discussing with @ChuckHastings we don't need to shuffle the latter. Should I just remove this function for now since there is no use case?

Collaborator

Yes. If we don't need the function I would delete it.

Contributor (Author)

I removed the function since it is unused

void transform_increment(rmm::cuda_stream_view const& stream_view,
raft::device_span<value_t> d_span,
value_t value);

Contributor

Similar here. I am not sure whether creating a thrust wrapper for arbitrary types is a good idea.

For commonly used types, we can clearly cut compile time and binary size by doing this.

In that case, I am inclined to name the functions to indicate the supported types, or at least to properly document the supported types.

For example, for the sort function here,

  1. We may rename the function to sort_vertices or at least sort_ints to indicate that this works only for integers and document the supported integer types (e.g. int32_t, int64_t). If we explicitly instantiated this function for floating point numbers as well, then we may create sort_floats as well.
  2. Or at the very minimum, we need to document the supported types.

Contributor

And our general convention is to pass stream as the last parameter.

Here, we are passing handle in some functions and stream in other functions. And passing stream as the last parameter when we are passing stream.

Better be consistent. I think we should pass stream as the last parameter consistently for the functions defined in this header file to allow calling these functions in multi-stream executions.

Contributor

@ChuckHastings Any thoughts on this?

Collaborator

Stream should be the last parameter, I think. We should do some review of the code and identify other places where we should be passing stream instead of handle. I think passing the handle into the algorithm is great, since it gives us access to everything. But I had to do some complex things in MTMG to get some of the lower level functions working in a multi-stream environment because we use the handle too much. I think we should look at many of the non-public functions and explore passing the comms object and stream instead of passing the handle.

Regarding these wrappers for thrust calls, I think we'll end up with higher quality code if we have function names that are more precise about what we're doing. I think sort_ints might be sufficiently precise... I imagine there are other integer data types that we would want to sort.

Contributor (Author)

I went for option 1

*
*/
template <typename value_t>
void sort(raft::handle_t const& handle, raft::device_span<value_t> d_span);
Contributor

raft::device_span<value_t> d_span: I guess span here really does not convey any additional information. The type already specifies that this is a raft::device_span, so naming this variable span does not provide any additional information. It's like saying int32_t integer. Better to rename this (e.g., values).

Collaborator

That might have been my old code that Joseph's modifying to use a span... so that naming issue is probably my fault.

std::optional<raft::device_span<int32_t const>> label_to_output_comm_rank,
raft::host_span<int32_t const> fan_out,
sampling_flags_t sampling_flags,
bool do_expensive_check = false);
Contributor

A tedious thing... but I think it is more natural to list homogeneous sampling functions first.

This header file declares the sampling functions in the order of (heterogeneous, uniform), (heterogeneous, biased), (homogeneous, uniform), and (homogeneous, biased). Better list more basic sampling functions first.

* offsets), identifying the randomly selected edges. src is the source vertex, dst is the
* destination vertex, weight (optional) is the edge weight, edge_id (optional) identifies the edge
* id, edge_type (optional) identifies the edge type, hop identifies which hop the edge was
* encountered in. The offsets array (optional) identifies the offset for each label.
Contributor

Better state that the size of the src, dst, weight, edge_id, edge_type, or hop vector is # sampled edges while the size of the offsets vector is # labels + 1.

* If @p starting_vertex_offsets is not specified then no organization is applied to the output, the
* offsets values in the return set will be std::nullopt.
*
* If @p starting_vertex_offsets is specified the offsets array will be populated. The offsets array
Contributor

We don't have starting_vertex_offsets anymore.

Contributor

Better update the documentation as well.

* @tparam edge_t Type of edge identifiers. Needs to be an integral type.
* @tparam weight_t Type of edge weights. Needs to be a floating point type.
* @tparam edge_type_t Type of edge type. Needs to be an integral type.
* @tparam bias_t Type of bias. Needs to be an integral type.
Contributor

No bias_t in this function

* @param starting_vertex_labels Optional device span of labels associated with each starting vertex
* for the sampling.
* @param label_to_output_comm_rank Optional device span identifying which rank should get each
* vertex label. This should be the same on each rank.
Contributor

should get each vertex label=>should get sampling outputs of each vertex label?

* level. The fanout value at hop x is given by the expression 'fanout[x*num_edge_types +
* edge_type_id]'
* @param num_edge_types Number of edge types where a value of 1 translates to homogeneous neighbor
* sample whereas a value greater than 1 translates to heterogeneous neighbor sample.
Contributor

What happens if the user passes 1 here? Throw an exception and ask the user to call the homogeneous version instead? Or internally call the homogeneous version? Or take the heterogeneous code path (which might be less efficient)?
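For reference, with the dense layout documented above (`fanout[x*num_edge_types + edge_type_id]`), num_edge_types == 1 degenerates to the homogeneous case, since the lookup index reduces to the hop number. A minimal sketch of the indexing (helper name hypothetical):

```python
# With fanout laid out as fanout[hop * num_edge_types + edge_type_id],
# passing num_edge_types == 1 reduces to plain indexing by hop, i.e. the
# homogeneous case.
def fanout_at(fan_out, num_edge_types, hop, edge_type_id):
    return fan_out[hop * num_edge_types + edge_type_id]
```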

@seunghwak (Contributor) left a comment

Review part 2/3

namespace cugraph {
namespace detail {

rmm::device_uvector<int32_t> convert_starting_vertex_offsets_to_labels(
Contributor

starting_vertex_offsets=>starting_vertex_label_offsets?

Contributor

And should we really define this function and create one additional layer of indirection? What this function does is just calling expand_sparse_offsets so why not just directly call expand_sparse_offsets?

@jnke2016 (Contributor, Author) Nov 8, 2024

Right, we can call expand_sparse_offsets directly, but I believe the additional layer of indirection was added to better describe the operation being performed, which is converting starting_vertex_label_offsets to labels. And this is done through the expand_sparse_offsets method. @ChuckHastings any comment on this?
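For readers following along, the conversion being discussed expands a sparse offsets array into a dense per-vertex label array. A hypothetical Python sketch of what the C++ expand_sparse_offsets produces (the real function runs on device):

```python
# starting_vertex_label_offsets [0, 2, 5] means label 0 owns vertices [0, 2)
# and label 1 owns vertices [2, 5), so the expanded per-vertex label array
# is [0, 0, 1, 1, 1].
def expand_offsets_to_labels(offsets):
    labels = []
    for label in range(len(offsets) - 1):
        labels.extend([label] * (offsets[label + 1] - offsets[label]))
    return labels
```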

label_t{0},
thrust::maximum<label_t>());

label_map.resize(max_label + 1, handle.get_stream());
Contributor

rmm::device_uvector<int32_t> label_map(0, handle.get_stream());
...
label_map.resize(max_label + 1, handle.get_stream());

=>

rmm::device_uvector<int32_t> label_map(max_label + 1, handle.get_stream());

Contributor

And shouldn't the caller already know # labels? Should we really compute this here?

Contributor (Author)

I believe we can infer the max/number of labels from the size of the first device array in label_to_output_comm_rank: max_label = std::get<0>(label_to_output_comm_rank).size() - 1, or

rmm::device_uvector<int32_t> label_map(std::get<0>(label_to_output_comm_rank).size(), handle.get_stream());. @ChuckHastings is it a safe assumption to make?

starting_vertex_labels ? std::make_optional(std::vector<rmm::device_uvector<label_t>>{})
: std::nullopt;

level_result_src_vectors.reserve((fan_out).size());
Contributor

(fan_out).size() => fan_out.size()

if (weights) { (*level_result_weight_vectors).push_back(std::move(*weights)); }
if (edge_ids) { (*level_result_edge_id_vectors).push_back(std::move(*edge_ids)); }
if (edge_types) { (*level_result_edge_type_vectors).push_back(std::move(*edge_types)); }
if (labels) { (*level_result_label_vectors).push_back(std::move(*labels)); }
Contributor

No need to detach edge mask here?

@seunghwak (Contributor) left a comment

Review part 3/3

int32_t num_edge_types{1};
bool flag_replacement{true};

bool check_correctness{true};
Contributor

We are not testing with edge masking, and is this because we currently don't allow attaching two masks?

Contributor

If that's the case, better add a FIXME statement. Once we add a primitive to support heterogeneous sampling, we won't need to attach two masks (or collapse two masks to one).

Contributor (Author)

and is this because we currently don't allow attaching two masks?

Right. If you recall, I briefly mentioned that in one of our 1-on-1s a few weeks ago. I am adding a FIXME.

Contributor (Author)

I added a FIXME.

mg_graph_view,
std::optional<raft::device_span<vertex_t const>>{std::nullopt},
rng_state,
// 20,
Contributor

Delete commented out code.
