In the "Convert the model to ggml" section of the README.md, what is the "" parameter referring to? Following the instructions in MiniGPT4, I ended up with a folder containing the model weights. What specific file should I point to with this parameter?
I made a mistake: convert.py is under llama.cpp, not minigpt4. However, I now get a new error: 'Exception: Vocab size mismatch (model has 32001, but MiniGPT-4/model/weight_7b/tokenizer.model has 32000). Most likely you are missing added_tokens.json (should be in MiniGPT-4/model/weight_7b).' The same weights worked fine in MiniGPT-4 itself. How should I handle this problem? What is missing?
Add an added_tokens.json with an id of 32001 (I think) in the folder that contains the vocab and model.
I believe this comes down to a difference between how PyTorch loads the file and how MiniGPT-4 loads it. I didn't explore it in depth, but adding the JSON file should fix the mismatch, and convert.py should then produce the correct weights.
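The fix above amounts to writing one small JSON file next to tokenizer.model. A minimal sketch, with some assumptions: the token name `<extra_token>` is a placeholder for whatever special token your MiniGPT-4 checkpoint actually added, and since token ids are zero-based, a model vocab of 32001 over a base vocab of 32000 would put the single extra token at id 32000 (not 32001, despite the wording above).

```python
import json

# added_tokens.json maps added token strings to their ids.
# Assumption: one extra token at id 32000 closes the gap between the
# model's 32001 entries and the tokenizer's 32000 base entries.
# "<extra_token>" is a hypothetical name -- replace it with the token
# your checkpoint actually added.
added_tokens = {"<extra_token>": 32000}

# Write the file into the folder that holds tokenizer.model,
# e.g. MiniGPT-4/model/weight_7b/added_tokens.json.
with open("added_tokens.json", "w") as f:
    json.dump(added_tokens, f, indent=2)
```

After writing the file, rerun the conversion step and the vocab-size check should pass.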