Add ability to tokenize a string and return the decoded tokens using the correct BPE model #17
Conversation
…the correct BPE model The _decode_native_and_split function decodes encoded BPE tokens into their corresponding byte arrays and returns a vector of vectors of bytes. The split_by_token_with_special_tokens function takes in a string, encodes it using the BPE model, and then decodes the encoded tokens into a vector of strings. This allows for tokenizing a string and returning the decoded tokens using the correct BPE model. Added a corresponding test (cl100k_split_test) to tests/tiktoken.rs.
tiktoken-rs/src/vendor_tiktoken.rs
Outdated
```rust
pub fn split_by_token_with_special_tokens(&self, text: &str) -> Result<Vec<String>> {
    // first, encode the text using the BPE model
    let encoded = self.encode_with_special_tokens(text);

    let tokenized = self._decode_native_and_split(&encoded);

    tokenized
        .iter()
        .map(|token| String::from_utf8(token.clone()).map_err(|e| anyhow!(e.to_string())))
        .collect()
}
```
Minor nit: with the subfunctions you defined, I see some unnecessary clones, `Vec` collections, and memory allocations. Consider inlining `_decode_native_and_split`, or changing the function signatures to avoid the clones.
Suggested change — before:

```rust
pub fn split_by_token_with_special_tokens(&self, text: &str) -> Result<Vec<String>> {
    // first, encode the text using the BPE model
    let encoded = self.encode_with_special_tokens(text);
    let tokenized = self._decode_native_and_split(&encoded);
    tokenized
        .iter()
        .map(|token| String::from_utf8(token.clone()).map_err(|e| anyhow!(e.to_string())))
        .collect()
}
```

After:

```rust
pub fn split_by_token_with_special_tokens(&self, text: &str) -> Result<Vec<String>> {
    // first, encode the text using the BPE model
    let encoded = self.encode_with_special_tokens(text);
    encoded
        .iter()
        .map(|token| {
            let token = self
                .decoder
                .get(token)
                .unwrap_or_else(|| &self.special_tokens_decoder[token]);
            String::from_utf8(token.clone()).map_err(|e| anyhow!(e.to_string()))
        })
        .collect()
}
```
Disclaimer: I haven't thoroughly benchmarked the code so I'm not sure how big of an impact this would have on perf
I may have gone too far in the other direction, but now the function returns an iterator, and no work is done until the user consumes it. Also eliminated the unnecessary clones. It makes the API a little less straightforward, but I don't think it's too bad.
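The lazy variant described above can be sketched in isolation. This is a self-contained toy (the `split_tokens` free function and its plain `HashMap` decoder are stand-ins, not the real `CoreBPE` internals): the function returns an `impl Iterator`, so no token is decoded until the caller pulls from it, and only the matched byte vector for each token is cloned.

```rust
use std::collections::HashMap;

// Hypothetical sketch of the iterator-returning design: decoding is
// deferred until the caller consumes the iterator.
fn split_tokens<'a>(
    decoder: &'a HashMap<usize, Vec<u8>>,
    tokens: &'a [usize],
) -> impl Iterator<Item = Result<String, std::string::FromUtf8Error>> + 'a {
    // Each step looks up one token's bytes and attempts UTF-8 conversion;
    // nothing runs here at call time.
    tokens
        .iter()
        .map(move |t| String::from_utf8(decoder[t].clone()))
}

fn main() {
    let decoder = HashMap::from([(7usize, b"foo".to_vec()), (9, b"bar".to_vec())]);
    // Collecting into Result<Vec<_>, _> surfaces the first decoding error, if any.
    let pieces: Result<Vec<String>, _> = split_tokens(&decoder, &[7, 9]).collect();
    assert_eq!(pieces.unwrap(), vec!["foo", "bar"]);
}
```

Collecting an iterator of `Result`s into `Result<Vec<String>, _>` recovers the original eager API in one line, so callers who want the old behavior lose nothing.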
Thanks for the contribution! I left some comments in the code to address before I can merge this. Also make sure to run …
Add the "Eq" trait to the "ChatCompletionRequestMessage" struct to allow for easy comparison with other structs.
Add a new method to CoreBPE, split_by_token_with_special_tokens(), which takes a string slice containing the text to be tokenized, encodes it using the BPE model, and lazily decodes the encoded tokens. The resulting iterator yields each token as a Result<String> to handle decoding errors. A test is included to ensure the method's correctness.
Ran …
This pull request adds the ability to tokenize a string and return the decoded tokens using the correct Byte Pair Encoding (BPE) model in the Rust tokenizer "tiktoken". The current functionality of tiktoken only allows encoding a string into a vector of references to the token table or decoding it back into the original string.
Changes include:
- A new function `_decode_native_and_split` that decodes encoded BPE tokens into their corresponding byte arrays and returns a vector of vectors of bytes.
- A new function `split_by_token_with_special_tokens` that takes in a string, encodes it using the BPE model, and then decodes the encoded tokens into a vector of strings. This allows for tokenizing a string and returning the decoded tokens using the correct BPE model.
- A corresponding test, `cl100k_split_test`, has been added to tests/tiktoken.rs to ensure the correct behavior of the new functions.

Example usage:
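The usage snippet did not survive extraction, so here is a rough, self-contained sketch of the call shape. `ToyBpe`, its decoder table, and the hard-coded token ids are stand-ins invented for illustration, not the real `CoreBPE` or cl100k vocabulary from tiktoken-rs:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for CoreBPE: only the decoder table and the
// split-by-token behavior are mirrored here.
struct ToyBpe {
    decoder: HashMap<usize, Vec<u8>>,
}

impl ToyBpe {
    // Decode each token id into its own String, preserving token boundaries
    // (including any leading spaces stored in the byte table).
    fn split_tokens(&self, tokens: &[usize]) -> Vec<String> {
        tokens
            .iter()
            .map(|t| String::from_utf8(self.decoder[t].clone()).unwrap())
            .collect()
    }
}

fn main() {
    let bpe = ToyBpe {
        decoder: HashMap::from([
            (0usize, b"This".to_vec()),
            (1, b" is".to_vec()),
            (2, b" a".to_vec()),
            (3, b" test".to_vec()),
        ]),
    };
    // Assume encoding "This is a test" yields [0, 1, 2, 3] in this toy model.
    let split = bpe.split_tokens(&[0, 1, 2, 3]);
    assert_eq!(split, vec!["This", " is", " a", " test"]);
}
```

The point of the new API is visible in the assertion: instead of one decoded string, the caller gets the text partitioned exactly along the BPE model's token boundaries.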
With these changes, users can now utilize the tiktoken library to tokenize and decode text using the correct BPE model, enhancing its functionality and usability.
Closes #16