
tfra.dynamic_embedding.embedding_lookup

Provides a dynamic version of tf.nn.embedding_lookup.

tfra.dynamic_embedding.embedding_lookup(
    params,
    ids,
    partition_strategy=None,
    name=None,
    validate_indices=None,
    max_norm=None,
    return_trainable=False
)

Similar to tf.nn.embedding_lookup.

ids are flattened to a 1-D tensor before being passed to embedding_lookup; the result is then unflattened to match the original ids shape, plus an extra trailing dimension of the embedding size.
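As a shape illustration, here is a minimal sketch (assuming tensorflow_recommenders_addons is installed and imported as tfra; the variable name, dtypes, and dim below are illustrative choices, not required values):

```python
import tensorflow as tf
import tensorflow_recommenders_addons as tfra

# A dynamic embedding table holding 8-dimensional float values.
# Name and dim are illustrative.
params = tfra.dynamic_embedding.get_variable(
    name="example_embeddings",
    key_dtype=tf.int64,
    value_dtype=tf.float32,
    dim=8,
)

# ids may have any shape; their dtype must match params.key_dtype.
ids = tf.constant([[1, 2, 3], [4, 5, 6]], dtype=tf.int64)

# Internally the ids are flattened to shape (6,), looked up, and the
# result is reshaped back: ids.shape + [dim] -> (2, 3, 8).
embeddings = tfra.dynamic_embedding.embedding_lookup(params, ids, name="lookup")
print(embeddings.shape)  # (2, 3, 8)
```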

Args:

  • params: A dynamic_embedding.Variable instance.
  • ids: A tensor of any shape, with the same dtype as params.key_dtype.
  • partition_strategy: Not used; kept for API compatibility with tf.nn.embedding_lookup.
  • name: A name for the operation. The name is optional in graph mode and required in eager mode.
  • validate_indices: Not used; kept for API compatibility with tf.nn.embedding_lookup.
  • max_norm: If not None, each embedding is clipped if its L2 norm is larger than this value.
  • return_trainable: Optional. If True, also return the TrainableWrapper. In eager mode, a ShadowVariable, the eager counterpart of TrainableWrapper, is returned instead. Setting return_trainable is disabled inside a tf.function scope; use dynamic_embedding.Variable.get_trainable_by_name or dynamic_embedding.Variable.trainable_store to get the created trainable shadow there (see the sketch after the Returns section).

Returns:

A tensor with shape [shape of ids] + [dim], where dim equals the value dimension of params, containing the values looked up from the params tensor(s) for the keys in ids.

  • trainable_wrap: A TrainableWrapper object used to fill the optimizer's var_list; only provided if return_trainable is True. In eager mode, it is a ShadowVariable, the eager counterpart of TrainableWrapper.
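To show the two-value form, here is a minimal training sketch under stated assumptions: eager mode, the params and ids from the example above, and an illustrative toy loss. tfra.dynamic_embedding.DynamicEmbeddingOptimizer wraps a standard optimizer so it can update dynamic embedding variables.

```python
# Wrap a standard Keras optimizer so it can update dynamic embeddings.
optimizer = tfra.dynamic_embedding.DynamicEmbeddingOptimizer(
    tf.keras.optimizers.Adam(learning_rate=0.01))

with tf.GradientTape() as tape:
    # return_trainable=True also yields the trainable shadow; in eager
    # mode this is a ShadowVariable.
    embeddings, trainable = tfra.dynamic_embedding.embedding_lookup(
        params, ids, return_trainable=True)
    loss = tf.reduce_sum(embeddings)  # toy loss for illustration

# The trainable shadow, not `params` itself, goes into the gradient
# computation and the optimizer's var_list.
grads = tape.gradient(loss, [trainable])
optimizer.apply_gradients(zip(grads, [trainable]))
```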