Provides a dynamic version of `tf.nn.safe_embedding_lookup_sparse`.
```python
tfra.dynamic_embedding.safe_embedding_lookup_sparse(
    embedding_weights,
    sparse_ids,
    sparse_weights=None,
    combiner='mean',
    default_id=None,
    name='safe_embedding_lookup_sparse',
    partition_strategy=None,
    max_norm=None,
    return_trainable=False
)
```
Lookup embedding results, accounting for empty features and invalid weights.

Any IDs will be treated as valid, including non-positive IDs. Invalid weights (<= 0) are pruned from the input weights, as are any IDs with a non-positive weight. For an entry with no features, the embedding vector for `default_id` is returned, or the 0-vector if `default_id` is not supplied.

The ids and weights may be multi-dimensional. Embeddings are always aggregated along the last dimension.
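A minimal usage sketch follows (it assumes TensorFlow 2.x and the `tensorflow_recommenders_addons` package are installed; the variable name, ids and values are illustrative, not part of the API):

```python
import tensorflow as tf
import tensorflow_recommenders_addons as tfra

# Dynamic embedding table holding 4-dimensional embeddings.
embedding_weights = tfra.dynamic_embedding.get_variable(
    name="dynamic_embeddings", dim=4, initializer=0.1)

# Batch of 3 entries; ids may be any int64 values (including non-positive),
# and the last row intentionally has no features at all.
sparse_ids = tf.SparseTensor(
    indices=[[0, 0], [0, 1], [1, 0]],
    values=tf.constant([3, -1, 7], dtype=tf.int64),
    dense_shape=[3, 2])

embeddings = tfra.dynamic_embedding.safe_embedding_lookup_sparse(
    embedding_weights,
    sparse_ids,
    combiner="mean",
    name="basic_lookup")
# embeddings has shape [3, 4]; the last row is the 0-vector because the
# entry has no features and no default_id was supplied.
```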
Args:

* `embedding_weights`: A single `dynamic_embedding.Variable` instance representing the complete embedding tensor.
* `sparse_ids`: `SparseTensor` of shape `[d_0, d_1, ..., d_n]` containing the ids. `d_0` is typically batch size.
* `sparse_weights`: `SparseTensor` of the same shape as `sparse_ids`, containing float weights corresponding to `sparse_ids`, or `None` if all weights are assumed to be 1.0.
* `combiner`: A string specifying how to combine embedding results for each entry. Currently "mean", "sqrtn" and "sum" are supported, with "mean" the default.
* `default_id`: The id to use for an entry with no features.
* `name`: A name for this operation. Name is optional in graph mode and required in eager mode.
* `partition_strategy`: A string specifying the partitioning strategy. Currently "div" and "mod" are supported. Default is "div".
* `max_norm`: If not `None`, all embeddings are l2-normalized to `max_norm` before combining.
* `return_trainable`: Optional. If True, also return the TrainableWrapper created by the lookup. Defaults to False.
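Continuing the sketch above, the snippet below illustrates how `sparse_weights`, `combiner`, `default_id` and `max_norm` interact (it reuses the assumed `embedding_weights` and `sparse_ids` from the earlier sketch; the values are illustrative):

```python
# Per-id float weights; the entry with weight -1.0 is pruned, together with
# its id, because non-positive weights are treated as invalid.
sparse_weights = tf.SparseTensor(
    indices=[[0, 0], [0, 1], [1, 0]],
    values=tf.constant([0.5, -1.0, 2.0], dtype=tf.float32),
    dense_shape=[3, 2])

weighted_embeddings = tfra.dynamic_embedding.safe_embedding_lookup_sparse(
    embedding_weights,
    sparse_ids,
    sparse_weights=sparse_weights,
    combiner="sqrtn",       # "mean" (default), "sqrtn" and "sum" are supported
    default_id=0,           # rows left with no features fall back to id 0
    max_norm=1.0,           # each embedding is l2-normalized to this value first
    name="weighted_lookup")
```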
Returns:

* `combined_embeddings`: A dense `Tensor` of shape `[d_0, d_1, ..., d_{n-1}, e_1, ..., e_m]`.
* `trainable_wrap`: A TrainableWrapper object used to fill the optimizer's `var_list`. Only provided if `return_trainable` is True.
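A hedged sketch of the `return_trainable=True` path is shown below. It reuses the variables from the earlier sketches and assumes the common pattern of wrapping a Keras optimizer with `tfra.dynamic_embedding.DynamicEmbeddingOptimizer` so that the TrainableWrapper can be placed in the optimizer's `var_list`; the toy loss is purely for illustration:

```python
optimizer = tfra.dynamic_embedding.DynamicEmbeddingOptimizer(
    tf.keras.optimizers.Adam(1e-3))

with tf.GradientTape() as tape:
    embeddings, trainable_wrap = tfra.dynamic_embedding.safe_embedding_lookup_sparse(
        embedding_weights,
        sparse_ids,
        return_trainable=True,
        name="trainable_lookup")
    loss = tf.reduce_sum(embeddings)  # toy loss for illustration

# The TrainableWrapper stands in for a regular variable in var_list.
grads = tape.gradient(loss, [trainable_wrap])
optimizer.apply_gradients(zip(grads, [trainable_wrap]))
```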
Raises:

* `ValueError`: If `embedding_weights` is empty.