|
# Release History

## 2.8.0 (2025-12-11)

### Features Added

- OpenAI.Chat:
  - Added the `SafetyIdentifier` property to `ChatCompletionOptions`, which enables users to specify a stable identifier that can be used to help detect end users of their application who may be violating OpenAI's usage policies.
- OpenAI.Responses:
  - Added the `SafetyIdentifier` property to `CreateResponseOptions` and `ResponseResult`, which enables users to specify a stable identifier that can be used to help detect end users of their application who may be violating OpenAI's usage policies (see the usage sketch after this list).
  - Added the `ConversationOptions` property to `CreateResponseOptions` and `ResponseResult`, which enables users to automatically manage the state of a multi-turn conversation by persisting state and sharing context across subsequent responses, rather than having to chain multiple response items together.
  - Added the `MaxToolCallCount` property to `CreateResponseOptions` and `ResponseResult`, which enables users to set the maximum number of total calls to built-in tools that can be processed in a response. This maximum applies across all built-in tool calls, not per individual tool; any further attempts by the model to call a tool will be ignored.
  - Added the `TopLogProbabilityCount` property to `CreateResponseOptions` and `ResponseResult`, which enables users to specify the number of most likely tokens to return at each token position, each with an associated log probability.
  - Added the `IncludedProperties` property to `CreateResponseOptions`, which enables users to specify additional output data to be included in the model response.
  - Added a setter to the `Id` property of `ResponseItem`.
  - Added a setter to the `Status` property of the types derived from `ResponseItem`.
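
As a minimal, non-authoritative sketch, the new `CreateResponseOptions` properties might compose on a non-streaming call as shown below. Only the type and member names listed above come from this release; the `ResponsesClient` constructor arguments, the `ResponseItem.CreateUserMessageItem(...)` factory, the `GetOutputText()` helper, and the model name are assumptions based on the library's existing client patterns. The same `SafetyIdentifier` idea applies to `ChatCompletionOptions` in OpenAI.Chat.

```csharp
// If the Responses APIs are still marked [Experimental], suppressing their diagnostic
// (OPENAI001 in prior releases) may be required.
#pragma warning disable OPENAI001
using System;
using OpenAI.Responses;

// Assumed constructor shape, mirroring the library's other clients.
ResponsesClient client = new(
    model: "gpt-4o-mini",
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

CreateResponseOptions options = new()
{
    // Stable, non-identifying string for the end user, used to help detect policy violations.
    SafetyIdentifier = "user-1234",

    // Cap the total number of built-in tool calls across the entire response.
    MaxToolCallCount = 5,

    // Return the three most likely tokens (with log probabilities) at each token position.
    TopLogProbabilityCount = 3,
};

// Input items are now supplied through the options' InputItems property (assumed to be an
// already-initialized collection).
options.InputItems.Add(ResponseItem.CreateUserMessageItem("What is the capital of France?"));

ResponseResult result = await client.CreateResponseAsync(options);
Console.WriteLine(result.GetOutputText()); // Assumed convenience accessor for the output text.
```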

### Breaking Changes in Preview APIs

- OpenAI.Responses:
  - Until now, this feature area has been marked as experimental via the `[Experimental]` attribute. As we prepare to stabilize it and remove its experimental designation, we are cleaning up the APIs to better align them with the service REST APIs, as well as to offer more flexibility and improve usability. See our [examples](https://github.com/openai/openai-dotnet/tree/main/examples/Responses) for helpful references on how to use the updated APIs.
  - The `OpenAIResponseClient` class has been renamed to `ResponsesClient`.
  - The `ResponseCreationOptions` class has been renamed to `CreateResponseOptions`.
  - The `OpenAIResponse` class has been renamed to `ResponseResult`.
  - When calling the `CreateResponse`, `CreateResponseAsync`, `CreateResponseStreaming`, and `CreateResponseStreamingAsync` methods of the `ResponsesClient` with a `CreateResponseOptions` argument, the input items must now be specified via the new `InputItems` property of `CreateResponseOptions`.
  - When calling the `CreateResponseStreaming` and `CreateResponseStreamingAsync` methods of the `ResponsesClient` with a `CreateResponseOptions` argument, the `StreamingEnabled` property of `CreateResponseOptions` must be set to `true` (see the streaming sketch after this list).
  - The `OpenAIResponsesModelFactory` class used for mocking output models (e.g., `ResponseResult`, `ResponseTokenUsage`, etc.) has been removed in favor of adding setters to the properties of these models.
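
As a rough migration reference, a streaming call under the renamed types might look like the sketch below. The client constructor, the `ResponseItem.CreateUserMessageItem(...)` factory, and the shape of the streamed updates are assumptions; only the type names, method names, and the `InputItems`/`StreamingEnabled` requirements come from the notes above.

```csharp
// May be required while the Responses area remains [Experimental].
#pragma warning disable OPENAI001
using System;
using OpenAI.Responses;

ResponsesClient client = new(
    model: "gpt-4o-mini",
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

CreateResponseOptions options = new()
{
    // Must be true when passing a CreateResponseOptions argument to the streaming methods.
    StreamingEnabled = true,
};
options.InputItems.Add(ResponseItem.CreateUserMessageItem("Write a haiku about semicolons."));

// Previously: OpenAIResponseClient.CreateResponseStreamingAsync(inputItems, responseCreationOptions).
await foreach (var update in client.CreateResponseStreamingAsync(options))
{
    // Each streamed update is surfaced as it arrives; inspect its concrete type for deltas.
    Console.WriteLine(update.GetType().Name);
}
```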

## 2.7.0 (2025-11-13)
|
|