In the Unicorn notebook, specifically in the `prepare_graph` function, `list(nodes.keys()).index(...)` is called twice for every edge. Each of those calls is O(V), which makes the `prepare_graph` call take over 2 hours even on a very powerful machine:
```python
edge_index = [[], []]
for src, dst in edges:
    src_index = list(nodes.keys()).index(src)
    dst_index = list(nodes.keys()).index(dst)
    edge_index[0].append(src_index)
    edge_index[1].append(dst_index)
```
An alternative would be to precompute all the indices once, store them in a hash map (a dict), and build the graph in a few seconds. For example:
```python
from tqdm import tqdm

node_index_map = {node: i for i, node in enumerate(nodes.keys())}
for src, dst in tqdm(edges):
    src_index = node_index_map[src]
    dst_index = node_index_map[dst]
    edge_index[0].append(src_index)
    edge_index[1].append(dst_index)
```
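As a self-contained illustration, here is the same fix runnable end to end. The `nodes` and `edges` values below are toy placeholders standing in for the notebook's actual data, which is not shown in this issue:

```python
# Toy stand-ins for the notebook's data: `nodes` is a dict keyed by node id,
# `edges` is a list of (src, dst) pairs.
nodes = {"a": None, "b": None, "c": None}
edges = [("a", "b"), ("b", "c"), ("a", "c")]

# Build the lookup table once: a single O(V) pass, giving O(1) lookups,
# instead of an O(V) list scan per endpoint per edge.
node_index_map = {node: i for i, node in enumerate(nodes)}

edge_index = [[], []]
for src, dst in edges:
    edge_index[0].append(node_index_map[src])
    edge_index[1].append(node_index_map[dst])

print(edge_index)  # [[0, 1, 0], [1, 2, 2]]
```

This turns the overall cost from O(E·V) into O(E + V), which is why the runtime drops from hours to seconds.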