-
I noticed this as well. My solution was to install Ollama natively on the host and run OpenWebUI in Docker, which fixed the issue for me. I don't know what causes it, other than Ollama not liking the extra layer of virtualization in some setups.
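For reference, a minimal sketch of what that setup might look like, assuming a Linux host and the official Open WebUI image (the host port 3000 and the volume name are arbitrary choices, not something from the original comment):

```shell
# Install Ollama natively on the host (official convenience script)
curl -fsSL https://ollama.com/install.sh | sh

# Run Open WebUI in Docker, pointed at the host's Ollama instance.
# host.docker.internal lets the container reach the host; on Linux it
# is not mapped automatically, hence the --add-host flag.
docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

With this arrangement only the UI is containerized; inference runs directly on the host, which avoids the extra virtualization layer mentioned above.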
-
I installed the Docker image and used the WebUI to connect it to the local server. After deploying it successfully, I pulled llama3-7b from the Ollama library and asked questions through the Web-UI interface. The responses take a long time to start and the generation process is slow. What is the problem?
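Slow generation in a setup like this often means the model is running on CPU instead of GPU. A hedged diagnostic sketch, assuming a standard Ollama install (the container name `ollama` is illustrative, not from the original post):

```shell
# Check whether the loaded model is on GPU or CPU: the PROCESSOR column
# reports e.g. "100% GPU", "100% CPU", or a split between the two.
ollama ps

# If Ollama itself runs inside Docker, the container needs explicit GPU
# access, e.g. for NVIDIA (requires the NVIDIA Container Toolkit):
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama
```

If `ollama ps` reports CPU-only inference, slow responses are expected for a model of this size.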