Mapper Lifting (Graph to Hypergraph) #48
Created necessary files for the ICML challenge.
Commented the MapperCover and MapperLifting classes. Fixed verify_params functions and transpose issues. Removed empty covers from MapperCover. Unsure why all my files show up as both unmodified and modified in my git status...
I'm just making a comment to link all collaborators to the PR.
Hello @hfr1tz3! Thank you for your submission. As we near the end of the challenge, I am collecting participant info for the purpose of selecting and announcing winners. Please email me (or have one member of your team email me) at [email protected] so I can share access to the voting form. In your email, please include:
Before July 12, make sure that your submission respects all Submission Requirements laid out on the challenge page. Any submission that fails to meet these criteria will be automatically disqualified.
In this PR, we implement a lifting from graphs to hypergraphs via the "Mapper on Graphs" construction as depicted in Figure 30 of [1], which can be enriched with the structure of a combinatorial complex. It operates in the following way: a filter function $g$ is applied to the graph $X$, and the image of $g$ is covered by overlapping intervals controlled by two parameters, `resolution` (the number of intervals) and `gain` (the proportion of overlap between neighboring intervals). The pullback of this cover through $g$ gives a cover of the graph, which is used to form the hyperedges of the lifted hypergraph. The 1-skeleton of the nerve of the resulting cover would give the results of the classic Mapper algorithm for graph simplification with filter function $g$.
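For intuition, here is a minimal sketch of how such an overlapping interval cover can be computed for a 1-dimensional filter. The function name `interval_cover` and the exact overlap convention are illustrative assumptions, not the PR's `MapperCover` implementation.

```python
import torch

def interval_cover(filtered: torch.Tensor, resolution: int = 10, gain: float = 0.3) -> torch.Tensor:
    """Boolean mask of shape (n_sample, n_intervals); entry (i, j) is True when
    node i's filter value lies in the j-th overlapping interval.
    Illustrative only -- conventions may differ from the PR's MapperCover."""
    f = filtered.view(-1, 1).float()          # (n_sample, 1) filter values
    lo, hi = f.min(), f.max()
    length = (hi - lo) / resolution           # base interval length
    pad = gain * length / 2                   # widen each side so neighbors overlap
    idx = torch.arange(resolution)
    lefts = lo + length * idx - pad
    rights = lo + length * (idx + 1) + pad
    mask = (f >= lefts) & (f <= rights)       # broadcast to (n_sample, resolution)
    return mask[:, mask.any(dim=0)]           # drop empty cover elements, as the PR does
```

Under this convention, each interval shares roughly a `gain` fraction of its base length with each neighbor.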
By default, the function $g$ is the first projection of the graph Laplacian embedding of the unweighted edge adjacency matrix of $X$, as this may be defined on all graphs and is known to capture topological information. However, we also provide alternative filters that can be selected by the user (defined below); some require that the graph $X$ be enriched with additional features. Users may also define their own filter function $g$, depending on what features their data contains, in the form of a lambda function.
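As an illustration, a minimal sketch of this default filter using off-the-shelf `torch_geometric` transforms follows; the helper name and the `attr_name` argument are assumptions for the example, not the PR's exact code.

```python
import torch
import torch_geometric.transforms as T
from torch_geometric.data import Data

def default_laplacian_filter(data: Data) -> torch.Tensor:
    """Sketch of the default "laplacian" filter: the first nontrivial eigenvector
    of the graph Laplacian of the unweighted, undirected version of the graph."""
    data = T.ToUndirected()(data)
    data = T.AddLaplacianEigenvectorPE(k=1, attr_name="laplacian_pe")(data)
    return data.laplacian_pe  # shape (num_nodes, 1)
```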
On the provided ZINC dataset, we observe the following when applying the default parameters.
Additionally, for the sake of time, we restricted ourselves to the case of constructing a hypergraph using 1-dimensional filters. However, this implementation can be extended to higher-dimensional filters to lift a graph to a larger hypergraph (or to a simplicial complex, if we construct the nerve of the pullback).
The example filter functions $g$ which are implemented are the following:

- `"laplacian"`: Converts the data to an undirected graph and then applies the `torch_geometric.transforms.AddLaplacianEigenvectorPE(k=1)` transform, projecting onto the smallest nonzero eigenvector.
- `"svd"`: Applies the `torch_geometric.transforms.SVDFeatureReduction(out_channels=1)` transform to the node feature matrix (i.e. `torch_geometric.Data.data.x`) to project the data onto a 1-dimensional subspace.
- `"feature_pca"`: Applies `torch.pca_lowrank(q=1)` to the node feature matrix (i.e. `torch_geometric.Data.data.x`) and then projects onto the first principal component.
- `"position_pca"`: Applies `torch.pca_lowrank(q=1)` to the node position matrix (i.e. `torch_geometric.Data.data.pos`) and then projects onto the first principal component.
- `"feature_sum"`: Applies `torch.sum(dim=1)` to the node feature matrix (i.e. `torch_geometric.Data.data.x`).
- `"position_sum"`: Applies `torch.sum(dim=1)` to the node position matrix (i.e. `torch_geometric.Data.data.pos`).

You may also construct your own filter by supplying a `filter_attr` and a `filter_func`: define `my_filter_func = lambda data: ...` so that `my_filter_func(data)` outputs an `(n_sample, 1)` tensor, and then, when calling the transform, set `filter_attr = "my_filter_attr"` and `filter_func = my_filter_func`.
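For example, a hypothetical end-to-end call with a custom filter might look like the sketch below. The import path and the `resolution`/`gain` constructor arguments are assumptions based on the challenge template; only `filter_attr` and `filter_func` are described above.

```python
# Import path is an assumption based on the challenge repository layout.
from modules.transforms.liftings.graph2hypergraph.mapper_lifting import MapperLifting

# Hypothetical custom filter: mean of the node features, shaped (n_sample, 1).
my_filter_func = lambda data: data.x.float().mean(dim=1, keepdim=True)

lifting = MapperLifting(
    filter_attr="my_filter_attr",
    filter_func=my_filter_func,
    resolution=10,  # number of cover intervals (assumed to be exposed here)
    gain=0.3,       # proportion of overlap between neighboring intervals
)
lifted = lifting(data)  # `data` is a torch_geometric.data.Data graph
```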
[1] Hajij, M., Zamzmi, G., Papamarkou, T., Miolane, N., Guzmán-Sáenz, A., Ramamurthy, K. N., et al. (2022). Topological deep learning: Going beyond graph data. arXiv preprint arXiv:2206.00606.
This is joint work between Halley Fritze and Marissa Masden.