added heatmap #206

Merged · 4 commits merged into networkx:main on Oct 11, 2023
Conversation

Opened by Schefflera-Arboricola (Member)

MridulS (Member) commented Oct 5, 2023

Hmm, interesting. It seems like there are no speedups on your machine when running the timing task. Maybe something you can investigate? @Schefflera-Arboricola
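
As an aside, a minimal standalone check — hypothetical, not part of the nx-parallel timing script — of whether joblib (which nx-parallel builds on) actually gets a multi-core speedup on a given machine might look like this:

import time
from joblib import Parallel, delayed

def busy(n):
    # pure-Python, CPU-bound work
    s = 0
    for i in range(n):
        s += i * i
    return s

# serial baseline
t0 = time.perf_counter()
for _ in range(8):
    busy(2_000_000)
serial = time.perf_counter() - t0

# joblib defaults to process-based workers, so this should spread across cores
t0 = time.perf_counter()
Parallel(n_jobs=-1)(delayed(busy)(2_000_000) for _ in range(8))
parallel = time.perf_counter() - t0

print(f"serial: {serial:.2f}s  parallel: {parallel:.2f}s  "
      f"speedup: {serial / parallel:.1f}x")

If this prints a speedup close to 1x, the problem lies with the machine or backend setup rather than with the heatmap script itself.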

Schefflera-Arboricola (Member Author) commented Oct 7, 2023

@MridulS sir,

  1. In the CPU core utilization window, the work was not distributed uniformly among all 8 cores; most of the work was on one core at a time while the script was running.

  2. The CPU% was around 100, and the number of threads for the process was 17 (rarely rising to 18 or 20). It was the same when I executed currFun for the standard and the parallel graph separately.

  3. I even tried graphs with 1000 nodes, but most of the values in the heatmap were still around 1.

  4. I recently pulled some new changes and started getting this error:

betweenness = dict.fromkeys(G, 0.0)  # b[v]=0 for v in G
              ^^^^^^^^^^^^^^^^^^^^^
TypeError: 'ParallelGraph' object is not iterable

So I added this code to interface.py:

class ParallelGraph:
    __networkx_plugin__ = "parallel"

    def __init__(self, graph_object):
        self.graph_object = graph_object

    def is_multigraph(self):
        return self.graph_object.is_multigraph()

    def is_directed(self):
        return self.graph_object.is_directed()

+    # delegate subscripting to the wrapped graph's adjacency
+    def __getitem__(self, node):
+        if node in self.graph_object:
+            return list(self.graph_object.neighbors(node))
+        else:
+            raise KeyError(f"Node {node} not found in the graph.")
+
+    # make the wrapper iterable so calls like dict.fromkeys(G, 0.0) work
+    def __iter__(self):
+        return iter(self.graph_object.nodes())
+
+    def __len__(self):
+        return len(self.graph_object)

but I am still getting a similar heatmap.
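
A minimal check (hypothetical snippet, assuming the wrapper above) that the added methods satisfy the iteration protocol dict.fromkeys(G, 0.0) relies on:

import networkx as nx

G = nx.fast_gnp_random_graph(10, 0.5, seed=42)
PG = ParallelGraph(G)

assert len(PG) == 10                  # __len__
assert sorted(PG) == sorted(G.nodes)  # __iter__ yields the wrapped graph's nodes
betweenness = dict.fromkeys(PG, 0.0)  # the call that raised the TypeError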

Running on: MacBook Air (macOS 13.3.1 (a))

Could you please help me figure out what might be the issue here, or redirect me to some resource where I could learn more about it (parallel algorithms and their benchmarking, etc.)?

Also, if a cell in the heatmap corresponds to a graph with a particular number of nodes and a particular edge probability, then why have we used 0.5 instead of p here?

Thank you very much for your patience :)

Schefflera-Arboricola (Member Author)

@MridulS sir, I think it was happening because I didn't run pip install -e ".[developer]" after setting up the environment; I am not getting that TypeError now, and the additional code in interface.py was not required. I have updated the heatmap.
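
A quick way to confirm an editable install took effect (assuming the package imports as nx_parallel) is to check which copy Python resolves:

import nx_parallel
print(nx_parallel.__file__)  # should point into the local clone, not site-packages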

But I am still doubtful about this:

Also, if a cell in the heatmap corresponds to a graph with a particular number of nodes and a particular edge probability, then why have we used 0.5 instead of p here?

I got this map for p:

[screenshot: heatmap generated with p, 2023-10-08]

and this for 0.5:

[screenshot: heatmap generated with 0.5, 2023-10-08]

Thank you :)

MridulS (Member) commented Oct 9, 2023

Also, if a cell in the heatmap corresponds to a graph with a particular number of nodes and a particular edge probability, then why have we used 0.5 instead of p here?

Yes, that looks like a bug; it should be using p to create the random graph. Good catch!
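
For illustration, a hypothetical reduction of the timing loop (variable names are illustrative, not the actual script): each heatmap cell (n, p) should time a graph built with that cell's own edge probability.

import time
import networkx as nx

pList = [0.1, 0.3, 0.5, 0.8]
number_of_nodes_list = [50, 100, 200]

times = {}
for p in pList:
    for n in number_of_nodes_list:
        # the bug: every cell was built with probability 0.5 instead of its own p
        G = nx.fast_gnp_random_graph(n, p, seed=42)
        t0 = time.perf_counter()
        nx.betweenness_centrality(G)
        times[(n, p)] = time.perf_counter() - t0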

Could you send a PR to the nx-parallel repo to fix it?

Schefflera-Arboricola (Member Author)

@MridulS done, see here.

MridulS (Member) commented Oct 11, 2023

thanks!

MridulS merged commit 4fbc5ea into networkx:main on Oct 11, 2023