
Fix #35447 Tokenizer does not split text according to newly added input tokens #35455

Open · wants to merge 1 commit into main from fix-tokenizer-text-split

Conversation

jiongjiongli commented:

Fix Bug

#35447: Tokenizer does not split text according to newly added input tokens

Resolution

In the Trie.split method, add steps to ignore partial matches that should be removed.
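
For illustration, a minimal sketch of the failing behaviour at the Trie level (Trie is the internal helper in transformers.tokenization_utils; the outputs shown are what the bug produces versus what the fix should produce):

# Code:
from transformers.tokenization_utils import Trie

trie = Trie()
trie.add("red")
trie.add("e")

# Matching "read": a partial match for "red" starts at "r" and fails at "a".
# Before the fix, that failed partial match also discarded the completed
# match on "e", so nothing was split out.
print(trie.split("read"))  # buggy: ['read']; after the fix: ['r', 'e', 'ad']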

cc @ArthurZucker and @itazap

@jiongjiongli force-pushed the fix-tokenizer-text-split branch 2 times, most recently from cf128ad to fbd9036, on January 1, 2025 at 00:19
@ArthurZucker (Collaborator) left a comment:

Hey! Sorry, but I cannot seem to reproduce!

@jiongjiongli (Author) replied:

> Hey! Sorry, but I cannot seem to reproduce!

Hello @ArthurZucker, this issue reproduces with tokenizer.add_tokens(["red", "e"]) rather than tokenizer.add_tokens(["e"]).

Below is the repro code and the actual result:

# Code:
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased", use_fast=False)
tokenizer.add_tokens(["red", "e"])
output_tokens = tokenizer.tokenize("read")
print(f"Output tokens: {output_tokens}")

# Actual Output:
Output tokens: ['read']
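
For contrast, a sketch of the single-token case from the earlier repro attempt: with only ["e"] added there is no competing partial match, so the split already works, which is presumably why the first attempt did not reproduce the bug.

# Code:
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased", use_fast=False)
tokenizer.add_tokens(["e"])
# The added token "e" is split out of the surrounding text:
print(tokenizer.tokenize("read"))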

@jiongjiongli force-pushed the fix-tokenizer-text-split branch from 12b6aa8 to 70d51bf on January 12, 2025 at 22:19
@ArthurZucker removed the request for review from Rocketknight1 on January 13, 2025 at 10:28
…newly added input tokens

The root cause is that the Trie.split method did not ignore partial matches that should have been removed.

Adds a test case for token splitting.
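
A sketch of the kind of regression test the commit describes; this is not the PR's actual test code, and the test name and placement are assumptions:

# Code:
import unittest

from transformers.tokenization_utils import Trie

class TrieSplitPartialMatchTest(unittest.TestCase):
    def test_failed_partial_match_is_ignored(self):
        trie = Trie()
        trie.add("red")
        trie.add("e")
        # The failed partial match on "re..." must not swallow the
        # completed match on "e".
        self.assertEqual(trie.split("read"), ["r", "e", "ad"])

if __name__ == "__main__":
    unittest.main()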
@jiongjiongli force-pushed the fix-tokenizer-text-split branch from 70d51bf to 56c52db on January 20, 2025 at 04:37
@ArthurZucker (Collaborator) left a comment:

Right, sorry!
Could you add another test with another model, for example? 🤗
It seems not to work for "?", for example, or for emojis.
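
A sketch of the extra checks being asked for here; the model choice and test strings are illustrative assumptions, not cases confirmed by this thread:

# Code:
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2", use_fast=False)
tokenizer.add_tokens(["?", "😊"])
# Both added tokens should be split out of the surrounding text:
print(tokenizer.tokenize("what?now😊later"))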
