SpMM message passing CUDA support for coalesced COO graphs (#617)
* Enhance CUDA support by updating adjacency_matrix and propagate functions for COO graphs
* Swap edge encoding order in coalesce to fix CUDA.jl issue
* Update comments to clarify coalesce behavior
* Add custom _adjacency_matrix for propagate CUDA COO graphs
- Leave the public `adjacency_matrix` interface uniform, always returning a sparse adjacency matrix
- Implement a custom `_adjacency_matrix` for the `propagate` `copy_xj` path on CUDA COO graphs, converting to dense when that is more efficient
* Fix imports
* Update GPU compatibility checks for COO CUDA
* Add @non_differentiable annotation to _adjacency_matrix function
* Add tests for coalesced COO graphs
* Remove debug statements
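The coalescing step the commits above revolve around can be illustrated with a small sketch. This is a hedged Python illustration of the idea only, not the package's Julia implementation: duplicate edges in a COO edge list are merged with an aggregation function, and the result is ordered by (target, source), which is the ordering this PR adopts as a CUDA.jl workaround. The function name `coalesce_coo` is hypothetical.

```python
# Hypothetical sketch: coalesce a COO edge list by merging duplicate edges
# (aggregating their weights) and sorting lexicographically by (target, source).
def coalesce_coo(src, dst, weights, aggr=sum):
    merged = {}
    for s, t, w in zip(src, dst, weights):
        # Key by (target, source) so that sorting the keys gives the
        # "target first, then source" lexicographic order.
        merged.setdefault((t, s), []).append(w)
    keys = sorted(merged)
    new_src = [s for (t, s) in keys]
    new_dst = [t for (t, s) in keys]
    new_w = [aggr(merged[k]) for k in keys]
    return new_src, new_dst, new_w
```

For example, two parallel edges 1→2 with weights 1.0 and 3.0 are merged into a single edge with weight 4.0, and edges come back sorted by target.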
GNNGraphs/src/transform.jl (3 additions, 2 deletions)
```diff
@@ -148,7 +148,7 @@ end
 """
     coalesce(g::GNNGraph; aggr=+)
 
-Return a new GNNGraph where all multiple edges between the same pair of nodes are merged (using aggr for edge weights and features), and the edge indices are sorted lexicographically (by source, then target).
+Return a new GNNGraph where all multiple edges between the same pair of nodes are merged (using aggr for edge weights and features), and the edge indices are sorted lexicographically (by target, then by source).
 
 This method is only applicable to graphs of type `:coo`.
 
 `aggr` can take value `+`, `min`, `max` or `mean`.
@@ -158,7 +158,8 @@ function Base.coalesce(g::GNNGraph{<:COO_T}; aggr = +)
     w = get_edge_weight(g)
     edata = g.edata
     num_edges = g.num_edges
-    idxs, idxmax = edge_encoding(s, t, g.num_nodes)
+    # order by target first and then source as a workaround of CUDA.jl issue: https://github.com/JuliaGPU/CUDA.jl/issues/2820
```
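The `edge_encoding` call changed in the diff packs each `(source, target)` pair into a single integer so that sorting the codes sorts the edges. The following is a hedged Python sketch of that encoding idea under the assumption that codes are target-major, which yields the (target, source) order the PR switches to; the names `edge_encoding` and `decode` here are illustrative and do not mirror the library's actual signatures.

```python
# Hypothetical sketch: pack edges (s, t) over num_nodes nodes into integer
# codes. A target-major code means sorting the codes orders edges by
# target first, then by source.
def edge_encoding(src, dst, num_nodes):
    idxs = [t * num_nodes + s for s, t in zip(src, dst)]
    idxmax = num_nodes * num_nodes - 1  # largest representable code
    return idxs, idxmax

def decode(idx, num_nodes):
    # Invert the packing: recover (source, target) from a code.
    t, s = divmod(idx, num_nodes)
    return s, t
```

With this encoding, a single integer sort on the codes replaces a lexicographic sort on `(target, source)` pairs, which sidesteps sorting tuples on the GPU.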