isWithinTokenLimit fails due to lack of model being provided #26
Comments
I am having the same issue: I get an error when trying to use isWithinTokenLimit without passing a model_name. Worse, even when I try to use just encodeChat, passing the messages and model, I get the following error:
Joining the issue
Just FYI, as an alternative to this package I have been using https://www.npmjs.com/package/gpt-tokens (https://github.com/Cainier/gpt-tokens). It works fine and offers similar functionality.
Works well, thanks! |
I had this issue; I fixed it by adding the model to the import path: const { isWithinTokenLimit } = require('gpt-tokenizer/model/gpt-4-0314'); Hope this helps.
I'm unsure whether this is just a documentation issue; however, after checking the source code, there appears to be no default model and no clear instruction on how to provide the model type for tokenization in the isWithinTokenLimit function.
I'll happily open a PR on this but want to know if there's any kind of contribution guidelines.
To reproduce, call:
isWithinTokenLimit(messages, MAX_CHAT_LENGTH)
where messages is a ChatMessage iterable and MAX_CHAT_LENGTH is the maximum token length the chat should not exceed.
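To illustrate the intended call shape, here is a minimal, self-contained sketch of what isWithinTokenLimit is expected to do. It deliberately does not depend on gpt-tokenizer: the stand-in helpers below use a naive whitespace token count instead of a real model-specific BPE encoding (which is exactly the piece that fails when no model is provided), so the names countTokensNaive and isWithinTokenLimitNaive are hypothetical, not the library's API.

```javascript
// Hypothetical stand-in for a tokenizer's token count, using a naive
// whitespace split instead of a real BPE encoding. Illustrative only.
function countTokensNaive(text) {
  return text.trim().split(/\s+/).filter(Boolean).length;
}

// messages is an iterable of ChatMessage-like objects ({ role, content }).
// Returns false when the running total exceeds tokenLimit; otherwise
// returns the total token count (mirroring gpt-tokenizer's convention of
// returning a truthy count when the input is within the limit).
function isWithinTokenLimitNaive(messages, tokenLimit) {
  let total = 0;
  for (const message of messages) {
    total += countTokensNaive(message.content);
    if (total > tokenLimit) return false; // over the limit
  }
  return total;
}

const messages = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Hello there!' },
];
console.log(isWithinTokenLimitNaive(messages, 100)); // prints 7
console.log(isWithinTokenLimitNaive(messages, 3));   // prints false
```

The real library needs a model to pick the right BPE encoding before it can count anything, which is why the model-scoped import (e.g. gpt-tokenizer/model/gpt-4-0314) works around the error.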