kv_override issue with string values #1487
Comments
@Erhan1706 thanks for reporting — pushed a fix; it should be in the next release.
Hey @abetlen,
I'm also getting this warning when running Llama-3-based models. I don't pass any […]. Thanks for the great work.
I had the same error when running […]. From what I understood, the issue is caused by a bug in the actual […]. @Spider-netizen I think that passing a […]
Thanks @dgengler6. I'll give it a try. Appreciate it.
Prerequisites
Please answer the following questions for yourself before submitting an issue.
Expected Behavior
Expected llama-cpp-python to correctly override the model parameters when passing `{"tokenizer.ggml.pre": "llama3"}` as a kv_override.
Current Behavior
The string value of the override always appears to be empty when the model runs, as the log line

validate_override: Using metadata override ( str) 'tokenizer.ggml.pre' =

indicates, so the model ends up using the default pre-tokenizer instead of the llama3 one. Example output:
Environment and Context
$ lscpu
$ uname -a
Linux LAPTOP 5.15.146.1-microsoft-standard-WSL2 #1 SMP Thu Jan 11 04:09:03 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Failure Information (for bugs)
Steps to Reproduce
I'm running the following code:
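The reproduction snippet itself did not survive the page capture. A minimal sketch of what such a call would look like follows — the model path is a placeholder, and `kv_overrides` is the keyword this parameter goes by in llama-cpp-python's `Llama` constructor:

```python
# Hypothetical reproduction sketch; the model path below is a placeholder.
overrides = {"tokenizer.ggml.pre": "llama3"}

# from llama_cpp import Llama
# llm = Llama(
#     model_path="./Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",
#     kv_overrides=overrides,  # string override that arrives empty (this bug)
# )

print(overrides["tokenizer.ggml.pre"])
```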
Findings
I stepped through the code with a debugger, and the problem seems to be on the following line:

For some reason `memmove` is not properly copying the string.