Hi,
let's say we are using GraphSumEmbedding as a layer in TGN.
As I understand from your paper, in the propagation step we are supposed to concatenate the previous-layer embeddings of our neighbors with our own previous-layer embedding as the source node. So, for example, if we want to compute the layer-2 embedding of a node with id "a" whose neighbors are the nodes with ids "b", "c", and "d", we need the layer-1 embeddings of the neighbors "b", "c", and "d", and we also need the layer-1 embedding of node "a" itself.
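For concreteness, here is how I understand that recursion (a toy sketch with hypothetical names, not the actual repo code; plain lists stand in for tensors and the learned linear map is omitted):

```python
def embed(feats, neighbors, node, layer):
    """Layer-L embedding of `node`: concatenate the node's own
    layer-(L-1) embedding with the elementwise sum of its neighbors'
    layer-(L-1) embeddings (list concat stands in for torch.cat)."""
    if layer == 0:
        # base case: in TGN this would be memory + raw features
        return feats[node]
    own = embed(feats, neighbors, node, layer - 1)
    nbr_embs = [embed(feats, neighbors, v, layer - 1) for v in neighbors[node]]
    agg = [sum(vals) for vals in zip(*nbr_embs)]  # elementwise sum over neighbors
    return own + agg

# node "a" with neighbors "b", "c", "d" (2-dim toy features)
feats = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [1.0, 1.0], "d": [0.5, 0.5]}
neighbors = {"a": ["b", "c", "d"], "b": ["a"], "c": ["a"], "d": ["a"]}

# layer-2 embedding of "a" needs layer-1 of "a" AND layer-1 of "b", "c", "d"
h2_a = embed(feats, neighbors, "a", layer=2)
```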
But from the code it doesn't look that way to me (I could be wrong): you are always passing source_node_features to the aggregate function, meaning you are always using the layer-0 features of the source nodes (memory + raw_features). This is a problem, because when you compute embeddings at layer 2 you will be using neighbor_embeddings from layer 1 but source_node_features from layer 0.
Why is that so? Because you set source_node_features once at the beginning and never update the variable. I think the part that computes source_node_embeddings at the previous layer is completely missing.
You are correct. In practice, it does not make much of a difference since we're mostly using only 1 graph layer with TGN, but this is indeed incorrect and will lead to a different result if using 2 layers or more.
Would you be willing to open up a pull request for this?
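To illustrate the point about layer counts, here is a toy re-implementation (hypothetical names, not the repo's code) where reusing the layer-0 source features at every layer matches the correct recursion for 1 layer but diverges for 2 or more:

```python
def embed(feats, neighbors, node, layer, buggy=False):
    """Toy graph-sum embedding: concat own embedding with summed
    neighbor embeddings. With buggy=True, the node's own contribution
    is always its layer-0 features, mimicking source_node_features
    being passed to the aggregate function at every layer."""
    if layer == 0:
        return feats[node]
    own = feats[node] if buggy else embed(feats, neighbors, node, layer - 1, buggy)
    nbr_embs = [embed(feats, neighbors, v, layer - 1, buggy) for v in neighbors[node]]
    agg = [sum(vals) for vals in zip(*nbr_embs)]
    return own + agg  # list concat stands in for torch.cat

feats = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [1.0, 1.0], "d": [0.5, 0.5]}
neighbors = {"a": ["b", "c", "d"], "b": ["a"], "c": ["a"], "d": ["a"]}

one_ok = embed(feats, neighbors, "a", 1) == embed(feats, neighbors, "a", 1, buggy=True)
two_ok = embed(feats, neighbors, "a", 2) == embed(feats, neighbors, "a", 2, buggy=True)
# one_ok is True, two_ok is False: the bug only shows up with >= 2 layers
```

In this toy the buggy variant even changes the output dimensionality; if a learned linear map follows the concatenation (as in a typical sum-style GNN layer), the dimensionality stays fixed and the mismatch shows up in the values instead.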