
llamafile as LLM server for the Mantella mod and Skyrim works nicely, but there is a small problem. #415

Open
amonpaike opened this issue May 12, 2024 · 6 comments


@amonpaike

The Mantella mod introduces the possibility of talking to Skyrim NPCs, revolutionizing the way this RPG is played and making it a unique experience.
Officially, the mod's author relies on the koboldcpp LLM server.
Unfortunately, koboldcpp with CUDA crashes on my PC because my processor doesn't support AVX2, while the other "BLAS" backends are too slow. So as an alternative I use llamafile, which works nicely and is very light and very performant on my 3060 with 12 GB. The only problem is that every time I start a conversation, in order for the LLM to generate the response, I have to briefly "alt+tab" (exit and re-enter the game) so that llamafile generates the response and triggers the voice-speech loop. This also works for multiple follow-up comments, but after I ask the NPC a new question, I have to "alt+tab" again to trigger the LLM server. I was wondering what could cause this and whether there is a way to overcome the problem.

@mofosyne
Collaborator

Rewritten for clarity; please confirm if correct.

Bug Report

Issue:
When using llamafile with the Mantella mod in Skyrim, I have to briefly "alt+tab" (exit and re-enter the game) to trigger the LLM server response. This step is necessary each time I start a conversation or ask a new question to an NPC, which disrupts the gameplay experience.

Steps to Reproduce:

  1. Start Skyrim with the Mantella mod and llamafile as the LLM server.
  2. Initiate a conversation with an NPC.
  3. To trigger the LLM response and voice speech loop, "alt+tab" out of the game and then back in.
  4. The response is generated, and the conversation continues.
  5. Ask a new question to the NPC.
  6. Repeat "alt+tab" to trigger the next LLM response.

Expected Behavior:
The LLM server should generate responses seamlessly during gameplay without requiring "alt+tab" actions.


Background

The Mantella mod introduces the ability to talk to NPCs in Skyrim, revolutionizing the RPG experience. The mod author relies on koboldcpp as the LLM server, but it crashes on my PC due to lack of AVX2 support, and other backends such as BLAS are too slow.

As an alternative, I am using llamafile, which is efficient and performs well on my NVIDIA 3060 GPU with 12GB VRAM. However, the need to "alt+tab" to trigger responses is the primary issue I need to resolve.

@amonpaike
Author

@mofosyne thank you very much. I'm not very good at writing bug reports; next time I'll try to do my best.

@jart
Collaborator

jart commented May 21, 2024

This is Windows, correct? llamafile is a CLI application; how would the state of the window manager impact its operation?

@mofosyne
Collaborator

Is it possible that it's a bug in the mod? Maybe give the mod writer a poke and link this issue to them and see if they reply.

@amonpaike
Author

amonpaike commented May 21, 2024

> This is Windows correct? llamafile is a CLI application. How would the state of the Window manager impact its operation?

Yes, it's Windows. I run llamafile as a CLI application:

```
.\llamafile.exe -m C:\Users\noki\gguf\Mantella-Skyrim-Llama-3-8B-Q4_K_M.gguf -ngl 9999
```

The Mantella LLM is from here, but that is irrelevant; it happens with any LLM model.

> Is it possible that it's a bug in the mod? Maybe give the mod writer a poke and link this issue to them and see if they reply.

To be honest, I don't even know whether it's a llamafile issue. I also reported it to the mod's author when I wrote the issue here.

@amonpaike
Author

amonpaike commented May 21, 2024

In case someone wants to try it, here is a quick tutorial (it is not exhaustive; refer to the official tutorials for the correct step-by-step instructions).

You need:

  1. Skyrim installed, with the various mods attached that make Mantella work well (read the Mantella tutorial)

  2. the Mantella spell (the mod for Skyrim) and the Mantella software (which interconnects everything), downloaded from here

  3. xVASynth (the voices for the NPCs), downloaded from here

  4. the config.ini in the Mantella software modified to point at the llamafile LLM server:
    llm_api = http://127.0.0.1:8080/v1

Then:

  • run Skyrim with the necessary mods enabled
  • run the xVASynth server for the voices (you need some voices downloaded from the Nexus Mods site)
  • run the llamafile server for the LLM interaction
  • run the Mantella software to interconnect everything

Play the game (go to the in-game mods configuration for Mantella spell customizations and shortcuts).
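The startup sequence above can be sketched as a small launcher script. Everything here is an illustrative assumption: the paths are placeholders to adjust for your installation, and `build_llamafile_cmd` simply mirrors the llamafile invocation posted earlier in this thread.

```python
import subprocess

# Placeholder paths; adjust for your own installation.
LLAMAFILE = r"C:\tools\llamafile.exe"
MODEL = r"C:\Users\you\gguf\Mantella-Skyrim-Llama-3-8B-Q4_K_M.gguf"

def build_llamafile_cmd(exe, model, ngl=9999):
    """Argument vector matching the llamafile invocation used in this thread."""
    return [exe, "-m", model, "-ngl", str(ngl)]

# Usage (run by hand, after starting Skyrim):
#   subprocess.Popen(build_llamafile_cmd(LLAMAFILE, MODEL))
#   ...then start the xVASynth server and the Mantella software the same way.
```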

https://www.youtube.com/watch?v=FLmbd48r2Wo
