Provides a dynamic version of embedding_lookup
tfra.dynamic_embedding.embedding_lookup(
    params,
    ids,
    partition_strategy=None,
    name=None,
    validate_indices=None,
    max_norm=None,
    return_trainable=False
)
Similar to tf.nn.embedding_lookup. Ids are flattened to a 1-D tensor before being passed to embedding_lookup; the results are then unflattened to match the original ids shape, plus an extra trailing dimension of the size of the embeddings.
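The flatten/unflatten shape behavior can be illustrated with a plain NumPy sketch. This is not the TFRA implementation: the dict-backed `table` and the `lookup` helper are our own stand-ins for the dynamic params variable, used only to show that the output shape is [shape of ids] + [dim].

```python
import numpy as np

dim = 4
table = {}  # hypothetical stand-in for the dynamic key -> embedding table

def lookup(table, ids, dim):
    """Flatten ids, look each key up (inserting a default row for misses,
    as a dynamic table would), then reshape to ids.shape + [dim]."""
    ids = np.asarray(ids)
    flat = ids.reshape(-1)
    rows = [table.setdefault(int(k), np.zeros(dim)) for k in flat]
    return np.stack(rows).reshape(ids.shape + (dim,))

ids = np.array([[3, 1, 4], [1, 5, 9]])
out = lookup(table, ids, dim)
print(out.shape)  # (2, 3, 4): [shape of ids] + [dim]
```

Note that the duplicate key 1 maps to the same table row in both positions, mirroring lookup-by-key semantics.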
params
: A dynamic_embedding.Variable instance.

ids
: A tensor with any shape, with the same dtype as params.key_dtype.

partition_strategy
: Not used; kept for API compatibility with nn.embedding_lookup.

name
: A name for the operation. Name is optional in graph mode and required in eager mode.

validate_indices
: Not used; kept for API compatibility with nn.embedding_lookup.

max_norm
: If not None, each embedding is clipped if its l2-norm is larger than this value.

return_trainable
: Optional. If True, also return a TrainableWrapper. In eager mode, a ShadowVariable, which is the eager derivative of TrainableWrapper, is returned instead. Inside a tf.function scope, return_trainable is disabled; use dynamic_embedding.Variable.get_trainable_by_name or dynamic_embedding.Variable.trainable_store to get the created trainable shadow.
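The max_norm argument applies per-embedding l2-norm clipping. The sketch below shows that behavior in NumPy; the function name `clip_by_max_norm` is ours, not part of the TFRA API.

```python
import numpy as np

def clip_by_max_norm(embeddings, max_norm):
    """Scale each embedding row down only when its l2-norm exceeds max_norm."""
    norms = np.linalg.norm(embeddings, axis=-1, keepdims=True)
    scale = np.where(norms > max_norm, max_norm / norms, 1.0)
    return embeddings * scale

emb = np.array([[3.0, 4.0],   # l2-norm 5.0 -> clipped
                [0.3, 0.4]])  # l2-norm 0.5 -> unchanged
clipped = clip_by_max_norm(emb, max_norm=1.0)
print(clipped)  # first row rescaled to norm 1.0, second row left as-is
```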
A tensor with shape [shape of ids] + [dim], where dim equals the value dim of params, containing the values from the params tensor(s) for the keys in ids.
trainable_wrap
: A TrainableWrapper object used to fill the optimizer's var_list. Only provided if return_trainable is True. In eager mode, it will be a ShadowVariable, which is the eager derivative of TrainableWrapper.