and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
### New features

* Added `ChatDeepSeek()` for chatting via [DeepSeek](https://www.deepseek.com/). (#147)
* Added `ChatOpenRouter()` for chatting via [Open Router](https://openrouter.ai/). (#148)
* Added `ChatHuggingFace()` for chatting via [Hugging Face](https://huggingface.co/). (#144)
* Added `ChatPortkey()` for chatting via [Portkey AI](https://portkey.ai/). (#143)
* Added `ChatVllm()` for chatting via [vLLM](https://docs.vllm.ai/en/latest/). (#24)

### Bug fixes

* Fixed an issue where chatting with some models was leading to `KeyError: 'cached_input'`. (#149)

## [0.9.2] - 2025-08-08

### Improvements

* `Chat.get_cost()` now covers many more models and also takes cached tokens into account. (#133)
* Avoid erroring when tool calls occur with recent versions of `openai` (> v1.99.5). (#141)

## [0.9.1] - 2025-07-09

### Bug fixes

* Fixed an issue where `.chat()` wasn't streaming output properly in (the latest build of) Positron's Jupyter notebook. (#131)
* Needless warnings and errors are no longer thrown when model pricing info is unavailable. (#132)

## [0.9.0] - 2025-07-02

### New features

* `Chat` gains a handful of new methods:
    * `.register_mcp_tools_http_stream_async()` and `.register_mcp_tools_stdio_async()`: for registering tools from an [MCP server](https://modelcontextprotocol.io/). (#39)
    * `.get_tools()` and `.set_tools()`: for fine-grained control over registered tools. (#39)
    * `.set_model_params()`: for setting common LLM parameters in a model-agnostic fashion. (#127)
    * `.get_cost()`: to get the estimated cost of the chat. Only popular models are supported, but you can also supply your own token prices. (#106)
    * `.add_turn()`: to add `Turn`(s) to the current chat history. (#126)
* Tool functions passed to `.register_tool()` can now `yield` multiple results. (#39)
* A `ContentToolResultImage` content class was added for returning images from tools. It currently only works with `ChatAnthropic`. (#39)
* A `Tool` can now be constructed from a pre-existing tool schema (via a new `__init__` method). (#39)
* The `Chat.app()` method gains a `host` parameter. (#122)
* `ChatGithub()` now supports the more standard `GITHUB_TOKEN` environment variable for storing the API key. (#123)

### Changes

#### Breaking Changes

* `Chat` constructors (`ChatOpenAI()`, `ChatAnthropic()`, etc.) no longer have a `turns` keyword parameter. Use the `.set_turns()` method instead to set the (initial) chat history. (#126)
* `Chat`'s `.tokens()` method has been removed in favor of `.get_tokens()`, which returns both cumulative tokens in the turn and discrete tokens. (#106)

#### Other Changes

* `Tool`'s constructor no longer takes a function as input. Use the new `.from_func()` method instead to create a `Tool` from a function. (#39)
* `.register_tool()` now throws an exception when the tool has the same name as an already registered tool. Set the new `force` parameter to `True` to force the registration. (#39)

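The `Tool.from_func()` change above derives a tool schema from a plain Python function. As a rough stdlib-only illustration of that idea (not chatlas's actual implementation; `make_tool_schema` is a hypothetical helper):

```python
import inspect
import typing


def make_tool_schema(func) -> dict:
    """Build a minimal JSON-schema-like tool description from a function's
    signature and docstring (a sketch of the idea, not chatlas's logic)."""
    sig = inspect.signature(func)
    hints = typing.get_type_hints(func)
    py_to_json = {int: "integer", float: "number", str: "string", bool: "boolean"}
    properties = {
        name: {"type": py_to_json.get(hints.get(name), "string")}
        for name in sig.parameters
    }
    return {
        "name": func.__name__,
        "description": (func.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": list(sig.parameters),
        },
    }


def add(x: int, y: int) -> int:
    """Add two numbers."""
    return x + y


schema = make_tool_schema(add)
print(schema["name"])  # add
```

Type hints and the docstring supply everything the model needs, which is why registering a well-annotated function is usually enough.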
### Improvements

* `ChatGoogle()` and `ChatVertex()` now default to Gemini 2.5 (instead of 2.0). (#125)
* `ChatOpenAI()` and `ChatGithub()` now default to GPT-4.1 (instead of GPT-4o). (#115)
* `ChatAnthropic()` now supports `content_image_url()`. (#112)
* HTML styling improvements for `ContentToolResult` and `ContentToolRequest`. (#39)
* `Chat`'s representation now includes cost information if it can be calculated. (#106)
* `token_usage()` includes cost if it can be calculated. (#106)

### Bug fixes

* Fixed an issue where `httpx` client customization (e.g., `ChatOpenAI(kwargs={"http_client": httpx.Client()})`) wasn't working as expected. (#108)

### Developer APIs

* The base `Provider` class now includes a `name` and `model` property. For them to work properly, provider implementations should pass a `name` and `model` along to the `__init__()` method. (#106)
* `Provider` implementations must implement two new abstract methods: `translate_model_params()` and `supported_model_params()`.

## [0.8.1] - 2025-05-30

* Fixed `@overload` definitions for `.stream()` and `.stream_async()`.

## [0.8.0] - 2025-05-30

### New features

* New `.on_tool_request()` and `.on_tool_result()` methods register callbacks that fire when a tool is requested or produces a result. These callbacks can be used to implement custom logging or other actions when tools are called, without modifying the tool function. (#101)
* A new `ToolRejectError` exception can be thrown from tool request/result callbacks, or from within a tool function itself, to prevent the tool from executing. Moreover, this exception provides some context for the LLM to know that the tool didn't produce a result because it was rejected. (#101)

### Improvements

* The `CHATLAS_LOG` environment variable now enables logs for the relevant model provider. It now also supports a level of `debug` in addition to `info`. (#97)
* `ChatSnowflake()` now supports tool calling. (#98)
* `Chat` instances can now be deep copied, which is useful for forking the chat session. (#96)

### Changes

* `ChatDatabricks()`'s `model` now defaults to `databricks-claude-3-7-sonnet` instead of `databricks-dbrx-instruct`. (#95)
* `ChatSnowflake()`'s `model` now defaults to `claude-3-7-sonnet` instead of `llama3.1-70b`. (#98)

### Bug fixes

* Fixed an issue where `ChatDatabricks()` with an Anthropic `model` wasn't handling empty-string responses gracefully. (#95)

## [0.7.1] - 2025-05-10

* Added `openai` as a hard dependency, making installation easier for a wide range of use cases. (#91)

## [0.7.0] - 2025-04-22

### New features

* Added `ChatDatabricks()`, for chatting with Databricks' [foundation models](https://docs.databricks.com/aws/en/machine-learning/model-serving/score-foundation-models). (#82)
* `.stream()` and `.stream_async()` gain a `content` argument. Set this to `"all"` to include `ContentToolResult`/`ContentToolRequest` objects in the stream. (#75)
* `ContentToolResult`/`ContentToolRequest` are now exported to the `chatlas` namespace. (#75)
* `ContentToolResult`/`ContentToolRequest` gain a `.tagify()` method so they render sensibly in a Shiny app. (#75)
* A tool can now return a `ContentToolResult`. This is useful for:
    * Specifying the format used for sending the tool result to the chat model (`model_format`). (#87)
    * Custom rendering of the tool result (by overriding relevant methods in a subclass). (#75)
* `Chat` gains a new `.current_display` property. When a `.chat()` or `.stream()` is currently active, this property returns an object with an `.echo()` method (to echo new content to the display). This is primarily useful for displaying custom content during a tool call. (#79)

### Improvements

* When a tool call ends in failure, a warning is now raised and the stacktrace is printed. (#79)
* Several improvements to `ChatSnowflake()`:
    * `.extract_data()` is now supported.
    * `async` methods are now supported. (#81)
    * Fixed an issue with more than one session being active at once. (#83)
* `ChatAnthropic()` no longer chokes after receiving an output that consists only of whitespace. (#86)
* `orjson` is now used for JSON loading and dumping. (#87)

### Changes

* The `echo` argument of the `.chat()` method now defaults to `"output"`. As a result, tool requests and results are now echoed by default. To revert to the previous behavior, set `echo="text"`. (#78)
* Tool results are now dumped to JSON by default before being sent to the model. To revert to the previous behavior, have the tool return a `ContentToolResult` with `model_format="str"`. (#87)

### Breaking changes

* The `.export()` method's `include` argument has been renamed to `content` (to match `.stream()`). (#75)

## [0.6.1] - 2025-04-03

### Bug fixes

* Fixed a missing dependency on the `requests` package.

## [0.6.0] - 2025-04-01

### New features

* New `content_pdf_file()` and `content_pdf_url()` allow you to upload PDFs to supported models. (#74)

### Improvements

* `Turn` and `Content` now inherit from `pydantic.BaseModel` to provide easier saving to and loading from JSON. (#72)

## [0.5.0] - 2025-03-18

### New features

* Added a `ChatSnowflake()` class to interact with [Snowflake Cortex LLM](https://docs.snowflake.com/en/user-guide/snowflake-cortex/llm-functions). (#54)
* Added a `ChatAuto()` class, allowing for configuration of chat providers and models via environment variables. (#38, thanks @mconflitti-pbc)

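The `ChatAuto()` entry above is driven entirely by environment variables. A hypothetical configuration sketch (the variable names are assumptions based on `ChatAuto()`'s documentation; verify them against your installed version):

```shell
# Hypothetical ChatAuto() configuration: the provider to use, and a JSON
# dict of keyword arguments forwarded to that provider's constructor.
export CHATLAS_CHAT_PROVIDER="anthropic"
export CHATLAS_CHAT_ARGS='{"model": "claude-3-7-sonnet-latest"}'
```

With these set, constructing `ChatAuto()` with no arguments would pick the configured provider and pass the arguments along, letting deployments switch models without code changes.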
### Improvements

* Updated `ChatAnthropic()`'s `model` default to `"claude-3-7-sonnet-latest"`. (#62)
* The version is now accessible as `chatlas.__version__`. (#64)
* All provider-specific `Chat` subclasses now have an associated extra in chatlas. For example, `ChatOpenAI` has `chatlas[openai]`, `ChatPerplexity` has `chatlas[perplexity]`, `ChatBedrockAnthropic` has `chatlas[bedrock-anthropic]`, and so forth for the other `Chat` classes. (#66)

### Bug fixes

* Fixed an issue with content getting duplicated when it overflows in a `Live()` console. (#71)
* Fixed an issue with tool calls not working with `ChatVertex()`. (#61)

## [0.4.0] - 2025-02-19

### New features

* Added a `ChatVertex()` class to interact with Google Cloud's Vertex AI. (#50)
* Added `.app(*, echo=)` support. This allows chatlas to change the echo behavior when running the Shiny app. (#31)

### Improvements

* Migrated `ChatGoogle()`'s underlying Python SDK from `google-generativeai` to `google-genai`. As a result, streaming tools now work properly. (#50)

### Bug fixes

* Fixed a bug where synchronous chat tools would not work properly when used in an `_async()` context. (#56)
* Fixed `Chat`'s Shiny app being broken when `.app(*, stream=True)` is used, by using async chat tools. (#31)
* Updated formatting of exported markdown to use `repr()` instead of `str()` when exporting tool call results. (#30)

## [0.3.0] - 2024-12-20

### New features

* `Chat`'s `.tokens()` method gains a `values` argument. Set it to `"discrete"` to get a result that can be summed to determine the token cost of submitting the current turns. The default (`"cumulative"`) remains the same (the result can be summed to determine the overall token cost of the conversation).
* `Chat` gains a `.token_count()` method to help estimate the token cost of new input. (#23)

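To illustrate the cumulative-vs-discrete distinction described above (plain Python, not chatlas code; the token counts are made up):

```python
# Cumulative counts: total tokens of the whole conversation as of each turn.
cumulative = [10, 25, 45]

# Discrete counts are the per-turn differences: the first turn's count,
# then how much each later turn added on top of the previous total.
discrete = [cumulative[0]] + [b - a for a, b in zip(cumulative, cumulative[1:])]

print(discrete)       # [10, 15, 20]
print(sum(discrete))  # 45, i.e. equal to cumulative[-1]
```

Summing the discrete values recovers the final cumulative total, which is why either view can be used to estimate overall conversation cost.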
### Bug fixes

* `ChatOllama` no longer fails when an `OPENAI_API_KEY` environment variable is not set.
* `ChatOpenAI` now correctly includes the relevant `detail` on `ContentImageRemote()` input.
* `ChatGoogle` now correctly logs its `token_usage()`. (#23)

## [0.2.0] - 2024-12-11

First stable release of `chatlas`. See the website to learn more: <https://posit-dev.github.io/chatlas/>