Comparing changes

Choose two branches to see what’s changed or to start a new pull request.
base repository: openai/openai-python
base: v1.68.2
head repository: openai/openai-python
compare: v1.69.0
  • 7 commits
  • 35 files changed
  • 3 contributors

Commits on Mar 27, 2025

  1. 2706bdd (Verified: signed with the committer’s verified signature, though the key has expired. Luke Karrys, lukekarrys)
  2. a4b9f40
  3. 2e73b52
  4. 8677d3c
  5. a639321
  6. 46ed48e
  7. release: 1.69.0 (a8fa0de, stainless-app[bot], committed Mar 27, 2025)
Showing with 186 additions and 1,886 deletions.
  1. +1 −1 .release-please-manifest.json
  2. +3 −1 .stats.yml
  3. +20 −0 CHANGELOG.md
  4. +1 −1 pyproject.toml
  5. +1 −1 src/openai/_models.py
  6. +2 −2 src/openai/_streaming.py
  7. +1 −1 src/openai/_utils/_transform.py
  8. +1 −1 src/openai/_version.py
  9. +10 −6 src/openai/resources/audio/speech.py
  10. +12 −4 src/openai/resources/beta/realtime/sessions.py
  11. +12 −1 src/openai/resources/responses/input_items.py
  12. +12 −12 src/openai/resources/responses/responses.py
  13. +0 −1,796 src/openai/resources/responses/responses.py.orig
  14. +8 −3 src/openai/types/audio/speech_create_params.py
  15. +7 −2 src/openai/types/beta/realtime/realtime_response.py
  16. +6 −2 src/openai/types/beta/realtime/response_create_event.py
  17. +4 −2 src/openai/types/beta/realtime/response_create_event_param.py
  18. +5 −1 src/openai/types/beta/realtime/session.py
  19. +4 −2 src/openai/types/beta/realtime/session_create_params.py
  20. +5 −1 src/openai/types/beta/realtime/session_create_response.py
  21. +6 −2 src/openai/types/beta/realtime/session_update_event.py
  22. +4 −2 src/openai/types/beta/realtime/session_update_event_param.py
  23. +4 −3 src/openai/types/beta/realtime/transcription_session_create_params.py
  24. +4 −3 src/openai/types/beta/realtime/transcription_session_update.py
  25. +4 −3 src/openai/types/beta/realtime/transcription_session_update_param.py
  26. +6 −1 src/openai/types/chat/chat_completion_audio_param.py
  27. +9 −0 src/openai/types/responses/input_item_list_params.py
  28. +2 −2 src/openai/types/responses/response.py
  29. +2 −2 src/openai/types/responses/response_create_params.py
  30. +7 −7 src/openai/types/responses/response_format_text_json_schema_config.py
  31. +7 −7 src/openai/types/responses/response_format_text_json_schema_config_param.py
  32. +8 −8 tests/api_resources/audio/test_speech.py
  33. +2 −2 tests/api_resources/beta/realtime/test_sessions.py
  34. +4 −4 tests/api_resources/chat/test_completions.py
  35. +2 −0 tests/api_resources/responses/test_input_items.py
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
 {
-  ".": "1.68.2"
+  ".": "1.69.0"
 }
4 changes: 3 additions & 1 deletion .stats.yml
@@ -1,2 +1,4 @@
 configured_endpoints: 82
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-5ad6884898c07591750dde560118baf7074a59aecd1f367f930c5e42b04e848a.yml
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-6663c59193eb95b201e492de17dcbd5e126ba03d18ce66287a3e2c632ca56fe7.yml
+openapi_spec_hash: 7996d2c34cc44fe2ce9ffe93c0ab774e
+config_hash: 9351ea829c2b41da3b48a38c934c92ee
20 changes: 20 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,25 @@
# Changelog

## 1.69.0 (2025-03-27)

Full Changelog: [v1.68.2...v1.69.0](https://github.com/openai/openai-python/compare/v1.68.2...v1.69.0)

### Features

* **api:** add `get /chat/completions` endpoint ([e6b8a42](https://github.com/openai/openai-python/commit/e6b8a42fc4286656cc86c2acd83692b170e77b68))


### Bug Fixes

* **audio:** correctly parse transcription stream events ([16a3a19](https://github.com/openai/openai-python/commit/16a3a195ff31f099fbe46043a12d2380c2c01f83))


### Chores

* add hash of OpenAPI spec/config inputs to .stats.yml ([515e1cd](https://github.com/openai/openai-python/commit/515e1cdd4a3109e5b29618df813656e17f22b52a))
* **api:** updates to supported Voice IDs ([#2261](https://github.com/openai/openai-python/issues/2261)) ([64956f9](https://github.com/openai/openai-python/commit/64956f9d9889b04380c7f5eb926509d1efd523e6))
* fix typos ([#2259](https://github.com/openai/openai-python/issues/2259)) ([6160de3](https://github.com/openai/openai-python/commit/6160de3e099f09c2d6ee5eeee4cbcc55b67a8f87))

## 1.68.2 (2025-03-21)

Full Changelog: [v1.68.1...v1.68.2](https://github.com/openai/openai-python/compare/v1.68.1...v1.68.2)
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
 [project]
 name = "openai"
-version = "1.68.2"
+version = "1.69.0"
 description = "The official Python library for the openai API"
 dynamic = ["readme"]
 license = "Apache-2.0"
2 changes: 1 addition & 1 deletion src/openai/_models.py
@@ -721,7 +721,7 @@ def add_request_id(obj: BaseModel, request_id: str | None) -> None:
     cast(Any, obj).__exclude_fields__ = {*(exclude_fields or {}), "_request_id", "__exclude_fields__"}


-# our use of subclasssing here causes weirdness for type checkers,
+# our use of subclassing here causes weirdness for type checkers,
 # so we just pretend that we don't subclass
 if TYPE_CHECKING:
     GenericModel = BaseModel
4 changes: 2 additions & 2 deletions src/openai/_streaming.py
@@ -59,7 +59,7 @@ def __stream__(self) -> Iterator[_T]:
             if sse.data.startswith("[DONE]"):
                 break

-            if sse.event is None or sse.event.startswith("response."):
+            if sse.event is None or sse.event.startswith("response.") or sse.event.startswith('transcript.'):
                 data = sse.json()
                 if is_mapping(data) and data.get("error"):
                     message = None
@@ -161,7 +161,7 @@ async def __stream__(self) -> AsyncIterator[_T]:
             if sse.data.startswith("[DONE]"):
                 break

-            if sse.event is None or sse.event.startswith("response."):
+            if sse.event is None or sse.event.startswith("response.") or sse.event.startswith('transcript.'):
                 data = sse.json()
                 if is_mapping(data) and data.get("error"):
                     message = None
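The two hunks above make the same one-line change in the sync and async stream loops: events whose names start with `transcript.` are now parsed as JSON payloads, the way `response.` events already were, which is what fixes the transcription stream parsing noted in the changelog. A minimal sketch of the widened check (the function name here is illustrative, not part of the SDK):

```python
from typing import Optional


def should_parse_as_data(event: Optional[str]) -> bool:
    """Parse the SSE payload when there is no event name, or when the
    event belongs to the `response.` or `transcript.` families."""
    return (
        event is None
        or event.startswith("response.")
        or event.startswith("transcript.")
    )


# Transcription stream events such as `transcript.text.delta` now pass the check.
print(should_parse_as_data("transcript.text.delta"))  # True
print(should_parse_as_data("response.completed"))     # True
print(should_parse_as_data("thread.message.delta"))   # False
```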
2 changes: 1 addition & 1 deletion src/openai/_utils/_transform.py
@@ -126,7 +126,7 @@ def _get_annotated_type(type_: type) -> type | None:
 def _maybe_transform_key(key: str, type_: type) -> str:
     """Transform the given `data` based on the annotations provided in `type_`.

-    Note: this function only looks at `Annotated` types that contain `PropertInfo` metadata.
+    Note: this function only looks at `Annotated` types that contain `PropertyInfo` metadata.
     """
     annotated_type = _get_annotated_type(type_)
     if annotated_type is None:
2 changes: 1 addition & 1 deletion src/openai/_version.py
@@ -1,4 +1,4 @@
 # File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

 __title__ = "openai"
-__version__ = "1.68.2" # x-release-please-version
+__version__ = "1.69.0" # x-release-please-version
16 changes: 10 additions & 6 deletions src/openai/resources/audio/speech.py
@@ -53,7 +53,9 @@ def create(
         *,
         input: str,
         model: Union[str, SpeechModel],
-        voice: Literal["alloy", "ash", "coral", "echo", "fable", "onyx", "nova", "sage", "shimmer"],
+        voice: Union[
+            str, Literal["alloy", "ash", "ballad", "coral", "echo", "fable", "onyx", "nova", "sage", "shimmer", "verse"]
+        ],
         instructions: str | NotGiven = NOT_GIVEN,
         response_format: Literal["mp3", "opus", "aac", "flac", "wav", "pcm"] | NotGiven = NOT_GIVEN,
         speed: float | NotGiven = NOT_GIVEN,
@@ -75,8 +77,8 @@ def create(
               `tts-1`, `tts-1-hd` or `gpt-4o-mini-tts`.

           voice: The voice to use when generating the audio. Supported voices are `alloy`, `ash`,
-              `coral`, `echo`, `fable`, `onyx`, `nova`, `sage` and `shimmer`. Previews of the
-              voices are available in the
+              `ballad`, `coral`, `echo`, `fable`, `onyx`, `nova`, `sage`, `shimmer`, and
+              `verse`. Previews of the voices are available in the
               [Text to speech guide](https://platform.openai.com/docs/guides/text-to-speech#voice-options).

           instructions: Control the voice of your generated audio with additional instructions. Does not
@@ -142,7 +144,9 @@ async def create(
         *,
         input: str,
         model: Union[str, SpeechModel],
-        voice: Literal["alloy", "ash", "coral", "echo", "fable", "onyx", "nova", "sage", "shimmer"],
+        voice: Union[
+            str, Literal["alloy", "ash", "ballad", "coral", "echo", "fable", "onyx", "nova", "sage", "shimmer", "verse"]
+        ],
         instructions: str | NotGiven = NOT_GIVEN,
         response_format: Literal["mp3", "opus", "aac", "flac", "wav", "pcm"] | NotGiven = NOT_GIVEN,
         speed: float | NotGiven = NOT_GIVEN,
@@ -164,8 +168,8 @@ async def create(
               `tts-1`, `tts-1-hd` or `gpt-4o-mini-tts`.

           voice: The voice to use when generating the audio. Supported voices are `alloy`, `ash`,
-              `coral`, `echo`, `fable`, `onyx`, `nova`, `sage` and `shimmer`. Previews of the
-              voices are available in the
+              `ballad`, `coral`, `echo`, `fable`, `onyx`, `nova`, `sage`, `shimmer`, and
+              `verse`. Previews of the voices are available in the
               [Text to speech guide](https://platform.openai.com/docs/guides/text-to-speech#voice-options).

           instructions: Control the voice of your generated audio with additional instructions. Does not
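The signature change above widens `voice` from a closed `Literal` to `Union[str, Literal[...]]`, so type checkers accept the new `ballad` and `verse` names as well as arbitrary voice strings. A small sketch of what the widened annotation permits (`describe_voice` is an invented helper for illustration, not SDK code):

```python
from typing import Literal, Union

Voice = Union[
    str,
    Literal["alloy", "ash", "ballad", "coral", "echo", "fable", "onyx", "nova", "sage", "shimmer", "verse"],
]

KNOWN_VOICES = {"alloy", "ash", "ballad", "coral", "echo", "fable", "onyx", "nova", "sage", "shimmer", "verse"}


def describe_voice(voice: Voice) -> str:
    # Both branches type-check: a closed Literal annotation would have
    # rejected anything outside the enumerated names.
    return "named voice" if voice in KNOWN_VOICES else "custom voice string"


print(describe_voice("ballad"))        # named voice
print(describe_voice("future-voice"))  # custom voice string
```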
16 changes: 12 additions & 4 deletions src/openai/resources/beta/realtime/sessions.py
@@ -65,7 +65,10 @@ def create(
         tool_choice: str | NotGiven = NOT_GIVEN,
         tools: Iterable[session_create_params.Tool] | NotGiven = NOT_GIVEN,
         turn_detection: session_create_params.TurnDetection | NotGiven = NOT_GIVEN,
-        voice: Literal["alloy", "ash", "ballad", "coral", "echo", "sage", "shimmer", "verse"] | NotGiven = NOT_GIVEN,
+        voice: Union[
+            str, Literal["alloy", "ash", "ballad", "coral", "echo", "fable", "onyx", "nova", "sage", "shimmer", "verse"]
+        ]
+        | NotGiven = NOT_GIVEN,
         # Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
         # The extra values given here take precedence over values defined on the client or passed to this method.
         extra_headers: Headers | None = None,
@@ -147,7 +150,8 @@ def create(
           voice: The voice the model uses to respond. Voice cannot be changed during the session
               once the model has responded with audio at least once. Current voice options are
-              `alloy`, `ash`, `ballad`, `coral`, `echo` `sage`, `shimmer` and `verse`.
+              `alloy`, `ash`, `ballad`, `coral`, `echo`, `fable`, `onyx`, `nova`, `sage`,
+              `shimmer`, and `verse`.

           extra_headers: Send extra headers
@@ -227,7 +231,10 @@ async def create(
         tool_choice: str | NotGiven = NOT_GIVEN,
         tools: Iterable[session_create_params.Tool] | NotGiven = NOT_GIVEN,
         turn_detection: session_create_params.TurnDetection | NotGiven = NOT_GIVEN,
-        voice: Literal["alloy", "ash", "ballad", "coral", "echo", "sage", "shimmer", "verse"] | NotGiven = NOT_GIVEN,
+        voice: Union[
+            str, Literal["alloy", "ash", "ballad", "coral", "echo", "fable", "onyx", "nova", "sage", "shimmer", "verse"]
+        ]
+        | NotGiven = NOT_GIVEN,
         # Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
         # The extra values given here take precedence over values defined on the client or passed to this method.
         extra_headers: Headers | None = None,
@@ -309,7 +316,8 @@ async def create(
           voice: The voice the model uses to respond. Voice cannot be changed during the session
               once the model has responded with audio at least once. Current voice options are
-              `alloy`, `ash`, `ballad`, `coral`, `echo` `sage`, `shimmer` and `verse`.
+              `alloy`, `ash`, `ballad`, `coral`, `echo`, `fable`, `onyx`, `nova`, `sage`,
+              `shimmer`, and `verse`.

           extra_headers: Send extra headers
13 changes: 12 additions & 1 deletion src/openai/resources/responses/input_items.py
@@ -2,7 +2,7 @@

 from __future__ import annotations

-from typing import Any, cast
+from typing import Any, List, cast
 from typing_extensions import Literal

 import httpx
@@ -17,6 +17,7 @@
 from ..._base_client import AsyncPaginator, make_request_options
 from ...types.responses import input_item_list_params
 from ...types.responses.response_item import ResponseItem
+from ...types.responses.response_includable import ResponseIncludable

 __all__ = ["InputItems", "AsyncInputItems"]

@@ -47,6 +48,7 @@ def list(
         *,
         after: str | NotGiven = NOT_GIVEN,
         before: str | NotGiven = NOT_GIVEN,
+        include: List[ResponseIncludable] | NotGiven = NOT_GIVEN,
         limit: int | NotGiven = NOT_GIVEN,
         order: Literal["asc", "desc"] | NotGiven = NOT_GIVEN,
         # Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
@@ -64,6 +66,9 @@ def list(
           before: An item ID to list items before, used in pagination.

+          include: Additional fields to include in the response. See the `include` parameter for
+              Response creation above for more information.
+
           limit: A limit on the number of objects to be returned. Limit can range between 1 and
               100, and the default is 20.
@@ -94,6 +99,7 @@ def list(
                 {
                     "after": after,
                     "before": before,
+                    "include": include,
                     "limit": limit,
                     "order": order,
                 },
@@ -130,6 +136,7 @@ def list(
         *,
         after: str | NotGiven = NOT_GIVEN,
         before: str | NotGiven = NOT_GIVEN,
+        include: List[ResponseIncludable] | NotGiven = NOT_GIVEN,
         limit: int | NotGiven = NOT_GIVEN,
         order: Literal["asc", "desc"] | NotGiven = NOT_GIVEN,
         # Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
@@ -147,6 +154,9 @@ def list(
           before: An item ID to list items before, used in pagination.

+          include: Additional fields to include in the response. See the `include` parameter for
+              Response creation above for more information.
+
           limit: A limit on the number of objects to be returned. Limit can range between 1 and
               100, and the default is 20.
@@ -177,6 +187,7 @@ def list(
                 {
                     "after": after,
                     "before": before,
+                    "include": include,
                     "limit": limit,
                     "order": order,
                 },
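The additions above thread a new `include` option through `input_items.list`, alongside the existing cursor-pagination parameters; values left as the `NOT_GIVEN` sentinel are dropped before the query is sent. A rough sketch of that filtering pattern (`NOT_GIVEN` here is a stand-in for the SDK's sentinel, and `build_query` is an invented helper):

```python
NOT_GIVEN = object()  # stand-in for the SDK's NotGiven sentinel


def build_query(after=NOT_GIVEN, before=NOT_GIVEN, include=NOT_GIVEN, limit=NOT_GIVEN, order=NOT_GIVEN):
    """Collect only the parameters the caller actually supplied."""
    params = {"after": after, "before": before, "include": include, "limit": limit, "order": order}
    return {key: value for key, value in params.items() if value is not NOT_GIVEN}


# `file_search_call.results` is one of the documented ResponseIncludable values.
print(build_query(include=["file_search_call.results"], limit=20))
# {'include': ['file_search_call.results'], 'limit': 20}
```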
24 changes: 12 additions & 12 deletions src/openai/resources/responses/responses.py
@@ -149,8 +149,8 @@ def create(
               context.

               When using along with `previous_response_id`, the instructions from a previous
-              response will be not be carried over to the next response. This makes it simple
-              to swap out system (or developer) messages in new responses.
+              response will not be carried over to the next response. This makes it simple to
+              swap out system (or developer) messages in new responses.

           max_output_tokens: An upper bound for the number of tokens that can be generated for a response,
               including visible output tokens and
@@ -321,8 +321,8 @@ def create(
               context.

               When using along with `previous_response_id`, the instructions from a previous
-              response will be not be carried over to the next response. This makes it simple
-              to swap out system (or developer) messages in new responses.
+              response will not be carried over to the next response. This makes it simple to
+              swap out system (or developer) messages in new responses.

           max_output_tokens: An upper bound for the number of tokens that can be generated for a response,
               including visible output tokens and
@@ -486,8 +486,8 @@ def create(
               context.

               When using along with `previous_response_id`, the instructions from a previous
-              response will be not be carried over to the next response. This makes it simple
-              to swap out system (or developer) messages in new responses.
+              response will not be carried over to the next response. This makes it simple to
+              swap out system (or developer) messages in new responses.

           max_output_tokens: An upper bound for the number of tokens that can be generated for a response,
               including visible output tokens and
@@ -961,8 +961,8 @@ async def create(
               context.

               When using along with `previous_response_id`, the instructions from a previous
-              response will be not be carried over to the next response. This makes it simple
-              to swap out system (or developer) messages in new responses.
+              response will not be carried over to the next response. This makes it simple to
+              swap out system (or developer) messages in new responses.

           max_output_tokens: An upper bound for the number of tokens that can be generated for a response,
               including visible output tokens and
@@ -1133,8 +1133,8 @@ async def create(
               context.

               When using along with `previous_response_id`, the instructions from a previous
-              response will be not be carried over to the next response. This makes it simple
-              to swap out system (or developer) messages in new responses.
+              response will not be carried over to the next response. This makes it simple to
+              swap out system (or developer) messages in new responses.

           max_output_tokens: An upper bound for the number of tokens that can be generated for a response,
               including visible output tokens and
@@ -1298,8 +1298,8 @@ async def create(
               context.

               When using along with `previous_response_id`, the instructions from a previous
-              response will be not be carried over to the next response. This makes it simple
-              to swap out system (or developer) messages in new responses.
+              response will not be carried over to the next response. This makes it simple to
+              swap out system (or developer) messages in new responses.

           max_output_tokens: An upper bound for the number of tokens that can be generated for a response,
               including visible output tokens and