
Add ability to tokenize a string and return the decoded tokens using the correct BPE model #17

Merged
3 commits merged into zurawiki:main on Apr 16, 2023

Conversation

@jackbackes (Contributor) commented Apr 15, 2023

This pull request adds the ability to tokenize a string and return the decoded tokens using the correct Byte Pair Encoding (BPE) model in the Rust tokenizer "tiktoken". The current functionality of tiktoken only allows encoding a string into a vector of token IDs (indices into the token table) or decoding such a vector back into the original string.

Changes include:

  1. A new function _decode_native_and_split that decodes encoded BPE tokens into their corresponding byte arrays and returns a vector of vectors of bytes.
  2. A new function split_by_token_with_special_tokens that takes in a string, encodes it using the BPE model, and then decodes the encoded tokens into a vector of strings. This allows for tokenizing a string and returning the decoded tokens using the correct BPE model.
  3. A new test cl100k_split_test has been added to tests/tiktoken.rs to ensure the correct behavior of the new functions.

Example usage:

let bpe = cl100k_base().unwrap();
let tokenized = bpe.split_by_token_with_special_tokens("This is a test         with a lot of spaces").unwrap();
assert_eq!(tokenized, vec!["This", " is", " a", " test", "        ", " with", " a", " lot", " of", " spaces"]);
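
As a sanity check (not part of the PR): the split pieces concatenate back to the original input, since each token decodes to a contiguous byte slice of the source string.

// Hypothetical follow-up assertion, assuming tokenized is the Vec<String> from above:
assert_eq!(tokenized.concat(), "This is a test         with a lot of spaces");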

With these changes, users can now utilize the tiktoken library to tokenize and decode text using the correct BPE model, enhancing its functionality and usability.

Closes #16

Commit: Add ability to tokenize a string and return the decoded tokens using the correct BPE model

The _decode_native_and_split function decodes encoded BPE tokens into their corresponding byte arrays and returns a vector of vectors of bytes. The split_by_token_with_special_tokens function takes in a string, encodes it using the BPE model, and then decodes the encoded tokens into a vector of strings. This allows for tokenizing a string and returning the decoded tokens using the correct BPE model. Added a corresponding test (cl100k_split_test) to tests/tiktoken.rs.
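
For context, a plausible sketch of _decode_native_and_split, assuming the decoder and special_tokens_decoder maps that appear in the review suggestion below (an illustration, not necessarily the exact code from the PR):

fn _decode_native_and_split(&self, tokens: &[usize]) -> Vec<Vec<u8>> {
    tokens
        .iter()
        .map(|token| {
            // Look up the byte sequence for each token ID, falling back
            // to the special-token table when the ID is a special token.
            self.decoder
                .get(token)
                .unwrap_or_else(|| &self.special_tokens_decoder[token])
                .clone()
        })
        .collect()
}

Splitting at the byte level matters because a single decoded token need not be valid UTF-8 on its own; the conversion to String happens later and can fail, which is why split_by_token_with_special_tokens returns Result.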
Comment on lines 559 to 569
pub fn split_by_token_with_special_tokens(&self, text: &str) -> Result<Vec<String>> {
    // first, encode the text using the BPE model
    let encoded = self.encode_with_special_tokens(text);

    let tokenized = self._decode_native_and_split(&encoded);

    tokenized
        .iter()
        .map(|token| String::from_utf8(token.clone()).map_err(|e| anyhow!(e.to_string())))
        .collect()
}
@zurawiki (Owner) commented:
Minor nit: with the subfunctions you defined, I see some unnecessary clones, Vec collections, and memory allocations. Consider inlining _decode_native_and_split or changing the function signatures to avoid the clones.

Suggested change, replacing:

pub fn split_by_token_with_special_tokens(&self, text: &str) -> Result<Vec<String>> {
    // first, encode the text using the BPE model
    let encoded = self.encode_with_special_tokens(text);
    let tokenized = self._decode_native_and_split(&encoded);
    tokenized
        .iter()
        .map(|token| String::from_utf8(token.clone()).map_err(|e| anyhow!(e.to_string())))
        .collect()
}

with:

pub fn split_by_token_with_special_tokens(&self, text: &str) -> Result<Vec<String>> {
    // first, encode the text using the BPE model
    let encoded = self.encode_with_special_tokens(text);
    encoded
        .iter()
        .map(|token| {
            let token = self
                .decoder
                .get(token)
                .unwrap_or_else(|| &self.special_tokens_decoder[token]);
            String::from_utf8(token.clone()).map_err(|e| anyhow!(e.to_string()))
        })
        .collect()
}

@zurawiki (Owner) commented:

Disclaimer: I haven't thoroughly benchmarked the code so I'm not sure how big of an impact this would have on perf

@jackbackes (Contributor, Author) commented:

I may have gone too far in the other direction, but now the function returns an iterator, and no work is done until the user consumes it. I also eliminated the unnecessary clones. It makes the API a little less straightforward, but I don't think it's too bad.
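
A rough sketch of what such a lazy version could look like. This is an illustration, not the merged code: it assumes the decoder and special_tokens_decoder maps from the suggestion above, assumes encode_with_special_tokens returns Vec<usize>, and still clones the decoded bytes, which the actual commit may avoid:

pub fn split_by_token_with_special_tokens<'a>(
    &'a self,
    text: &'a str,
) -> impl Iterator<Item = Result<String>> + 'a {
    // Encoding happens up front; each token is only decoded into a
    // String when the caller advances the iterator.
    self.encode_with_special_tokens(text)
        .into_iter()
        .map(move |token| {
            let bytes = self
                .decoder
                .get(&token)
                .unwrap_or_else(|| &self.special_tokens_decoder[&token]);
            String::from_utf8(bytes.clone()).map_err(|e| anyhow!(e.to_string()))
        })
}

// Callers who want the old behavior can still collect:
// let tokens: Result<Vec<String>> = bpe.split_by_token_with_special_tokens(text).collect();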

@zurawiki (Owner) commented Apr 15, 2023

Thanks for the contribution! I left some comments on the code that need to be addressed before I can merge this.

Also, make sure to run just fix and commit those changes locally to pass the Rust linter in CI.

Add the "Eq" trait to the "ChatCompletionRequestMessage" struct to allow for easy comparison with other structs.
Add a new method to CoreBPE, split_by_token_with_special_tokens(), which takes a string slice containing the text to be tokenized, encodes it using the BPE model, and decodes the encoded tokens into a vector of strings. The resulting iterator yields each token as a Result<String> to handle decoding errors. The method includes a test to ensure its correctness.
@jackbackes (Contributor, Author) commented:

Ran just fix! Thanks for the feedback!

@zurawiki zurawiki merged commit 2ada6ca into zurawiki:main Apr 16, 2023
8 checks passed
Linked issue closed by this PR: #16, "Use case: splitting text into tokens."