[Bug]: Ollama API openai.APIConnectionError: Connection error. #1893

Open · 2 tasks done
Mookins opened this issue May 18, 2024 · 4 comments
Labels: bug (Something isn't working); waiting for input (Need more information from the author to proceed further)

Comments

Mookins commented May 18, 2024

Is there an existing issue for the same bug?

Describe the bug

A request sent to Ollama through the OpenAI-compatible API loads the model in Ollama and then errors out in OpenDevin:

==============
STEP 0

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

Provider List: https://docs.litellm.ai/docs/providers

20:55:15 - opendevin:ERROR: agent_controller.py:109 - Error while running the agent: OpenAIException - Traceback (most recent call last):
  File "/app/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
    yield
  File "/app/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 113, in __iter__
    for part in self._httpcore_stream:
  File "/app/.venv/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 367, in __iter__
    raise exc from None
  File "/app/.venv/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 363, in __iter__
    for part in self._stream:
  File "/app/.venv/lib/python3.12/site-packages/httpcore/_sync/http11.py", line 349, in __iter__
    raise exc
  File "/app/.venv/lib/python3.12/site-packages/httpcore/_sync/http11.py", line 341, in __iter__
    for chunk in self._connection._receive_response_body(**kwargs):
  File "/app/.venv/lib/python3.12/site-packages/httpcore/_sync/http11.py", line 210, in _receive_response_body
    event = self._receive_event(timeout=timeout)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/httpcore/_sync/http11.py", line 220, in _receive_event
    with map_exceptions({h11.RemoteProtocolError: RemoteProtocolError}):
  File "/usr/local/lib/python3.12/contextlib.py", line 158, in __exit__
    self.gen.throw(value)
  File "/app/.venv/lib/python3.12/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.RemoteProtocolError: peer closed connection without sending complete message body (received 0 bytes, expected 407)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 952, in _request
    response = self._client.send(
               ^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/httpx/_client.py", line 928, in send
    raise exc
  File "/app/.venv/lib/python3.12/site-packages/httpx/_client.py", line 922, in send
    response.read()
  File "/app/.venv/lib/python3.12/site-packages/httpx/_models.py", line 813, in read
    self._content = b"".join(self.iter_bytes())
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/httpx/_models.py", line 829, in iter_bytes
    for raw_bytes in self.iter_raw():
  File "/app/.venv/lib/python3.12/site-packages/httpx/_models.py", line 883, in iter_raw
    for raw_stream_bytes in self.stream:
  File "/app/.venv/lib/python3.12/site-packages/httpx/_client.py", line 126, in __iter__
    for chunk in self._stream:
  File "/app/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 112, in __iter__
    with map_httpcore_exceptions():
  File "/usr/local/lib/python3.12/contextlib.py", line 158, in __exit__
    self.gen.throw(value)
  File "/app/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.RemoteProtocolError: peer closed connection without sending complete message body (received 0 bytes, expected 407)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai.py", line 533, in completion
    raise e
  File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai.py", line 492, in completion
    response = openai_client.chat.completions.create(**data, timeout=timeout)  # type: ignore
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/openai/_utils/_utils.py", line 277, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/openai/resources/chat/completions.py", line 590, in create
    return self._post(
           ^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1240, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 921, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 976, in _request
    return self._retry_request(
           ^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1053, in _retry_request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 976, in _request
    return self._retry_request(
           ^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1053, in _retry_request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 986, in _request
    raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.

20:55:15 - opendevin:INFO: agent_controller.py:150 - Setting agent(CodeActAgent) state from AgentState.RUNNING to AgentState.ERROR
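
The httpcore.RemoteProtocolError in the log means the server closed the connection before sending the full response body. A minimal sketch to reproduce the same request path outside OpenDevin; the URL, key, and model name are the placeholders from the configuration below, not confirmed values:

import httpx

# Send one non-streaming chat completion straight at the OpenAI-compatible
# endpoint. If the same RemoteProtocolError appears here, the problem is in
# the proxy/Ollama layer rather than in OpenDevin or litellm.
resp = httpx.post(
    "http://ollama.local:3000/ollama/v1/chat/completions",
    headers={"Authorization": "Bearer sk-XXX"},  # placeholder key
    json={
        "model": "codellama:7b",
        "messages": [{"role": "user", "content": "Say hello."}],
        "stream": False,
    },
    timeout=120.0,
)
print(resp.status_code)
print(resp.json())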

Current Version

ghcr.io/opendevin/opendevin:main

Installation and Configuration

docker run \
    -it \
    --pull=always \
    -e SANDBOX_USER_ID=$(id -u) \
    -e LLM_MODEL="openai/codellama:7b" \
    -e LLM_API_KEY="sk-XXX" \
    -e LLM_BASE_URL="http://ollama.local:3000/ollama/v1" \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
    -v $WORKSPACE_BASE:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    ghcr.io/opendevin/opendevin:main
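
One thing worth ruling out with this configuration is basic reachability: the OpenDevin container must be able to resolve and reach ollama.local. A minimal sketch to run inside the container (e.g. via docker exec); the host and port mirror the LLM_BASE_URL placeholder above:

import socket

# Any successful TCP connection (even if the HTTP layer later returns
# 401/404) proves DNS and routing from the container work; an OSError
# here points at a network problem rather than at Ollama itself.
try:
    with socket.create_connection(("ollama.local", 3000), timeout=5):
        print("TCP connection OK from inside the container")
except OSError as e:
    print("cannot reach ollama.local:3000 from the container:", e)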

Model and Agent

It does this with all models and agents I have tried.

Reproduction Steps

No response

Logs, Errors, Screenshots, and Additional Context

No response

@Mookins Mookins added the bug Something isn't working label May 18, 2024
@Mookins Mookins changed the title from "[Bug]:" to "[Bug]: Ollama API openai.APIConnectionError: Connection error." May 18, 2024
Mookins (Author) commented May 18, 2024

I tried using the ollama/MODEL tag, but that just seems to ignore my API key and gets a 401 error since it doesn't authenticate; the API is closed and needs the key. The endpoints and API keys are working fine in other applications.
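
A sketch of the difference described here, assuming litellm's openai/ prefix forwards the key as a Bearer token to the custom base_url (which would match the 401 seen with the ollama/ route); model, key, and URL are the placeholders from this report:

from litellm import completion

# With the openai/ prefix, litellm routes through the OpenAI client and
# sends api_key in the Authorization header to the given base_url.
response = completion(
    model="openai/codellama:7b",
    api_key="sk-XXX",  # placeholder key
    base_url="http://ollama.local:3000/ollama/v1",
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)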

enyst (Collaborator) commented May 18, 2024

Have you tried setting the model in the UI after you start?

A model name that should work is the name returned by ollama list.
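
If the Ollama host is remote and ollama list can't be run there directly, the same names can usually be read through the OpenAI-compatible models endpoint. A minimal sketch, assuming the URL and key placeholders from this report:

from openai import OpenAI

# /v1/models on an OpenAI-compatible server lists the available model IDs,
# i.e. the names `ollama list` would show on the host.
client = OpenAI(base_url="http://ollama.local:3000/ollama/v1", api_key="sk-XXX")
for model in client.models.list():
    print(model.id)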

Mookins (Author) commented May 18, 2024

Yep, tried it every way. It seems like it loads the model into the GPU and then just exits, or the connection closes for whatever reason. nvidia-smi shows it loads up and starts.

SmartManoj (Collaborator) commented

Run this to check whether the LLM is working properly:

from datetime import datetime

from litellm import completion

# Replace these values with your own setup (e.g. the Ollama settings from this issue).
config = {
    'LLM_MODEL': 'gpt-4-turbo-2024-04-09',
    'LLM_API_KEY': 'your-api-key',
    'LLM_BASE_URL': None
}

messages = [{"content": "If there are 10 books in a room and I read 2, how many books are still in the room?", "role": "user"}]

dt = datetime.now()
response = completion(model=config['LLM_MODEL'],
                      api_key=config['LLM_API_KEY'],
                      base_url=config['LLM_BASE_URL'],
                      messages=messages)
dt2 = datetime.now()

content = response.choices[0].message.content
print(content)

if '10' in content:
    print('--> Correct answer! 🎉')
else:
    # Reading books does not remove them from the room, so the answer is still 10.
    print('Incorrect: there are still 10 books in the room; reading them does not reduce the count. Consider exploring more accurate models for better results.')

print('Used model:', config['LLM_MODEL'])
print(f"Time taken: {(dt2 - dt).total_seconds():.1f}s")

@SmartManoj SmartManoj added the waiting for input Need more information from the author to proceed further. label May 20, 2024