Merged

Changes from all commits · 41 commits
c5c8292
chore(internal): fix incremental formatting in some cases
stainless-app[bot] Sep 27, 2025
34da720
chore(internal): codegen related update
stainless-app[bot] Sep 27, 2025
252e0a2
chore(internal): codegen related update
stainless-app[bot] Sep 27, 2025
b5432de
feat(api): removing openai/v1
stainless-app[bot] Sep 27, 2025
a0b0fb7
feat(api): expires_after changes for /files
stainless-app[bot] Sep 30, 2025
367d775
feat(api)!: fixes to remove deprecated inference resources
stainless-app[bot] Sep 30, 2025
f1cf9d6
feat(api): updating post /v1/files to have correct multipart/form-data
stainless-app[bot] Sep 30, 2025
17b9eb3
docs: update examples
stainless-app[bot] Sep 30, 2025
a38809d
codegen metadata
stainless-app[bot] Sep 30, 2025
b0676c8
feat(api): SDKs for vector store file batches
stainless-app[bot] Sep 30, 2025
88731bf
feat(api): SDKs for vector store file batches apis
stainless-app[bot] Sep 30, 2025
793e069
feat(api): moving { rerank, agents } to `client.alpha.`
stainless-app[bot] Sep 30, 2025
a71b421
fix: fix stream event model reference
stainless-app[bot] Sep 30, 2025
aec1d5f
feat(api): move post_training and eval under alpha namespace
stainless-app[bot] Sep 30, 2025
25a0f10
feat(api): fix file batches SDK to list_files
stainless-app[bot] Sep 30, 2025
8910a12
feat(api)!: use input_schema instead of parameters for tools
stainless-app[bot] Oct 1, 2025
06f2bca
feat(api): tool api (input_schema, etc.) changes
stainless-app[bot] Oct 2, 2025
5cee3d6
fix(api): fix the ToolDefParam updates
stainless-app[bot] Oct 2, 2025
e4f7840
feat(api): fixes to URLs
stainless-app[bot] Oct 2, 2025
6acae91
fix(api): another fix to capture correct responses.create() params
stainless-app[bot] Oct 2, 2025
a246793
chore(internal): use npm pack for build uploads
stainless-app[bot] Oct 7, 2025
dcc7bb8
chore: extract some types in mcp docs
stainless-app[bot] Oct 9, 2025
e0728d5
feat(api): several updates including Conversations, Responses changes…
stainless-app[bot] Oct 10, 2025
b521df1
codegen metadata
stainless-app[bot] Oct 10, 2025
19535c2
feat(api): updates to vector_store, etc.
stainless-app[bot] Oct 13, 2025
f32c0be
feat(api): move datasets to beta, vector_db -> vector_store
stainless-app[bot] Oct 20, 2025
4d5517c
chore: fix readme examples
stainless-app[bot] Oct 20, 2025
402f930
chore: fix readme example
stainless-app[bot] Oct 20, 2025
c6fb0b6
feat(api): manual updates
stainless-app[bot] Oct 20, 2025
98a596f
feat(api): manual updates
stainless-app[bot] Oct 20, 2025
257285f
fix(client): incorrect offset pagination check
stainless-app[bot] Oct 21, 2025
7d85013
feat(api): sync
stainless-app[bot] Oct 21, 2025
0302d54
feat(api): manual updates
stainless-app[bot] Oct 22, 2025
7d2e375
feat(api): manual updates
stainless-app[bot] Oct 22, 2025
079d89d
feat(api): vector_db_id -> vector_store_id
stainless-app[bot] Oct 27, 2025
e30f51c
chore(api)!: /v1/inspect only lists v1 apis by default
stainless-app[bot] Oct 29, 2025
ae3dc95
chore(api)!: /v1/inspect only lists v1 apis by default
stainless-app[bot] Oct 29, 2025
4dda064
feat(api): manual updates??!
stainless-app[bot] Oct 29, 2025
5ab8d74
feat(api): Adding prompts API to stainless config
stainless-app[bot] Oct 30, 2025
a51d3da
release: 0.4.0-alpha.1
stainless-app[bot] Oct 31, 2025
87d9d69
Merge origin/main into release branch
ashwinb Oct 31, 2025
4 changes: 1 addition & 3 deletions .devcontainer/devcontainer.json
Original file line number Diff line number Diff line change
@@ -9,9 +9,7 @@
  "postCreateCommand": "yarn install",
  "customizations": {
    "vscode": {
-      "extensions": [
-        "esbenp.prettier-vscode"
-      ]
+      "extensions": ["esbenp.prettier-vscode"]
    }
  }
}
1 change: 1 addition & 0 deletions .gitignore
@@ -7,4 +7,5 @@ dist
dist-deno
/*.tgz
.idea/
+.eslintcache

2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
-  ".": "0.2.23-alpha.1"
+  ".": "0.4.0-alpha.1"
}
6 changes: 3 additions & 3 deletions .stats.yml
@@ -1,4 +1,4 @@
configured_endpoints: 111
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/llamastack%2Fllama-stack-client-f252873ea1e1f38fd207331ef2621c511154d5be3f4076e59cc15754fc58eee4.yml
-openapi_spec_hash: 10cbb4337a06a9fdd7d08612dd6044c3
-config_hash: 0358112cc0f3d880b4d55debdbe1cfa3
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/llamastack%2Fllama-stack-client-35c6569e5e9fcc85084c9728eb7fc7c5908297fcc77043d621d25de3c850a990.yml
+openapi_spec_hash: 0f95bbeee16f3205d36ec34cfa62c711
+config_hash: ef275cc002a89629459fd73d0cf9cba9
68 changes: 68 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,73 @@
# Changelog

## 0.4.0-alpha.1 (2025-10-31)

Full Changelog: [v0.2.23-alpha.1...v0.4.0-alpha.1](https://github.com/llamastack/llama-stack-client-typescript/compare/v0.2.23-alpha.1...v0.4.0-alpha.1)

### ⚠ BREAKING CHANGES

* **api:** /v1/inspect only lists v1 apis by default
* **api:** /v1/inspect only lists v1 apis by default
* **api:** use input_schema instead of parameters for tools
* **api:** fixes to remove deprecated inference resources
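
The `input_schema` rename (8910a12) is the breaking change most likely to touch downstream code. As a rough before/after sketch — the field names around the renamed key are assumptions, not the SDK's exact generated types — a tool definition moves from a flat parameter list to a JSON Schema object:

```typescript
// Hypothetical tool shapes; only the `parameters` -> `input_schema` rename
// is taken from this changelog, everything else is illustrative.
const legacyTool = {
  name: 'get_weather',
  description: 'Fetch the weather for a city',
  // <= 0.2.x (assumed shape): a flat list of parameter descriptors
  parameters: [{ name: 'city', parameter_type: 'string', required: true }],
};

const tool = {
  name: 'get_weather',
  description: 'Fetch the weather for a city',
  // 0.4.0-alpha.1: a JSON Schema object under `input_schema`
  input_schema: {
    type: 'object',
    properties: { city: { type: 'string' } },
    required: ['city'],
  },
};

console.log('parameters' in legacyTool, 'input_schema' in tool); // true true
```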

### Features

* **api:** Adding prompts API to stainless config ([5ab8d74](https://github.com/llamastack/llama-stack-client-typescript/commit/5ab8d7423f6a9c26453b36c9daee99d343993d4b))
* **api:** expires_after changes for /files ([a0b0fb7](https://github.com/llamastack/llama-stack-client-typescript/commit/a0b0fb7aa74668f3f6996c178f9654723b8b0f22))
* **api:** fix file batches SDK to list_files ([25a0f10](https://github.com/llamastack/llama-stack-client-typescript/commit/25a0f10cffa7de7f1457d65c97259911bc70ab0a))
* **api:** fixes to remove deprecated inference resources ([367d775](https://github.com/llamastack/llama-stack-client-typescript/commit/367d775c3d5a2fd85bf138d2b175e91b7c185913))
* **api:** fixes to URLs ([e4f7840](https://github.com/llamastack/llama-stack-client-typescript/commit/e4f78407f74f3ba7597de355c314e1932dd94761))
* **api:** manual updates ([7d2e375](https://github.com/llamastack/llama-stack-client-typescript/commit/7d2e375bde7bd04ae58cc49fcd5ab7b134b25640))
* **api:** manual updates ([0302d54](https://github.com/llamastack/llama-stack-client-typescript/commit/0302d54398d87127ab0e9221a8a92760123d235b))
* **api:** manual updates ([98a596f](https://github.com/llamastack/llama-stack-client-typescript/commit/98a596f677fe2790e4b4765362aa19b6cff8b97e))
* **api:** manual updates ([c6fb0b6](https://github.com/llamastack/llama-stack-client-typescript/commit/c6fb0b67d8f2e641c13836a17400e51df0b029f1))
* **api:** manual updates??! ([4dda064](https://github.com/llamastack/llama-stack-client-typescript/commit/4dda06489f003860e138f396c253b40de01103b6))
* **api:** move datasets to beta, vector_db -> vector_store ([f32c0be](https://github.com/llamastack/llama-stack-client-typescript/commit/f32c0becb1ec0d66129b7fcaa06de3323ee703da))
* **api:** move post_training and eval under alpha namespace ([aec1d5f](https://github.com/llamastack/llama-stack-client-typescript/commit/aec1d5ff198473ba736bf543ad00c6626cab9b81))
* **api:** moving { rerank, agents } to `client.alpha.` ([793e069](https://github.com/llamastack/llama-stack-client-typescript/commit/793e0694d75c2af4535bf991d5858cd1f21300b4))
* **api:** removing openai/v1 ([b5432de](https://github.com/llamastack/llama-stack-client-typescript/commit/b5432de2ad56ff0d2fd5a5b8e1755b5237616b60))
* **api:** SDKs for vector store file batches ([b0676c8](https://github.com/llamastack/llama-stack-client-typescript/commit/b0676c837bbd835276fea3fe12f435afdbb75ef7))
* **api:** SDKs for vector store file batches apis ([88731bf](https://github.com/llamastack/llama-stack-client-typescript/commit/88731bfecd6f548ae79cbe2a1125620e488c42a3))
* **api:** several updates including Conversations, Responses changes, etc. ([e0728d5](https://github.com/llamastack/llama-stack-client-typescript/commit/e0728d5dd59be8723d9f967d6164351eb05528d1))
* **api:** sync ([7d85013](https://github.com/llamastack/llama-stack-client-typescript/commit/7d850139d1327a215312a82c98b3428ebc7e5f68))
* **api:** tool api (input_schema, etc.) changes ([06f2bca](https://github.com/llamastack/llama-stack-client-typescript/commit/06f2bcaf0df2e5d462cbe2d9ef3704ab0cfe9248))
* **api:** updates to vector_store, etc. ([19535c2](https://github.com/llamastack/llama-stack-client-typescript/commit/19535c27147bf6f6861b807d9eeee471b5625148))
* **api:** updating post /v1/files to have correct multipart/form-data ([f1cf9d6](https://github.com/llamastack/llama-stack-client-typescript/commit/f1cf9d68b6b2569dfb5ea3e2d2c33eff1a832e47))
* **api:** use input_schema instead of parameters for tools ([8910a12](https://github.com/llamastack/llama-stack-client-typescript/commit/8910a121146aeddcb8f400101e6a2232245097e0))
* **api:** vector_db_id -> vector_store_id ([079d89d](https://github.com/llamastack/llama-stack-client-typescript/commit/079d89d6522cb4f2eed5e5a09962d94ad800e883))
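
For the `vector_db_id` -> `vector_store_id` rename (079d89d), call sites can be migrated mechanically. A hypothetical one-off helper — not part of the SDK — illustrates the shape of the change:

```typescript
// Hypothetical migration helper: renames a deprecated `vector_db_id` key to
// `vector_store_id` and leaves every other request param untouched.
function migrateVectorParams(params: Record<string, unknown>): Record<string, unknown> {
  const { vector_db_id, ...rest } = params;
  return vector_db_id === undefined ? rest : { ...rest, vector_store_id: vector_db_id };
}

console.log(migrateVectorParams({ vector_db_id: 'vs_123', query: 'hello' }));
// → { query: 'hello', vector_store_id: 'vs_123' }
```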


### Bug Fixes

* **api:** another fix to capture correct responses.create() params ([6acae91](https://github.com/llamastack/llama-stack-client-typescript/commit/6acae910db289080e8f52864f1bdf6d7951d1c3b))
* **api:** fix the ToolDefParam updates ([5cee3d6](https://github.com/llamastack/llama-stack-client-typescript/commit/5cee3d69650a4c827e12fc046c1d2ec3b2fa9126))
* **client:** incorrect offset pagination check ([257285f](https://github.com/llamastack/llama-stack-client-typescript/commit/257285f33bb989c9040580dd24251d05f9657bb0))
* fix stream event model reference ([a71b421](https://github.com/llamastack/llama-stack-client-typescript/commit/a71b421152a609e49e76d01c6e4dd46eb3dbfae0))
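
The offset-pagination fix (257285f) is easier to review with the invariant written out. This is a generic sketch of a correct has-next-page check, not the client's actual internals (the real field names are not visible in this diff):

```typescript
// Offset pagination: another page exists only while the items consumed so far
// (offset plus the size of the current page) fall short of the reported total.
function hasNextPage(offset: number, pageSize: number, total: number): boolean {
  return offset + pageSize < total;
}

console.log(hasNextPage(0, 20, 45)); // true: only 20 of 45 seen
console.log(hasNextPage(40, 5, 45)); // false: all 45 seen
```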


### Chores

* **api:** /v1/inspect only lists v1 apis by default ([ae3dc95](https://github.com/llamastack/llama-stack-client-typescript/commit/ae3dc95964c908d219b23d7166780eaab6003ef5))
* **api:** /v1/inspect only lists v1 apis by default ([e30f51c](https://github.com/llamastack/llama-stack-client-typescript/commit/e30f51c704c39129092255c040bbf5ad90ed0b07))
* extract some types in mcp docs ([dcc7bb8](https://github.com/llamastack/llama-stack-client-typescript/commit/dcc7bb8b4d940982c2e9c6d1a541636e99fdc5ff))
* fix readme example ([402f930](https://github.com/llamastack/llama-stack-client-typescript/commit/402f9301d033bb230c9714104fbfa554f3f7cd8f))
* fix readme examples ([4d5517c](https://github.com/llamastack/llama-stack-client-typescript/commit/4d5517c2b9af2eb6994f5e4b2c033c95d268fb5c))
* **internal:** codegen related update ([252e0a2](https://github.com/llamastack/llama-stack-client-typescript/commit/252e0a2a38bd8aedab91b401c440a9b10c056cec))
* **internal:** codegen related update ([34da720](https://github.com/llamastack/llama-stack-client-typescript/commit/34da720c34c35dafb38775243d28dfbdce2497db))
* **internal:** fix incremental formatting in some cases ([c5c8292](https://github.com/llamastack/llama-stack-client-typescript/commit/c5c8292b631c678efff5498bbab9f5a43bee50b6))
* **internal:** use npm pack for build uploads ([a246793](https://github.com/llamastack/llama-stack-client-typescript/commit/a24679300cff93fea8ad4bc85e549ecc88198d58))


### Documentation

* update examples ([17b9eb3](https://github.com/llamastack/llama-stack-client-typescript/commit/17b9eb3c40957b63d2a71f7fc21944abcc720d80))


### Build System

* Bump version to 0.2.23 ([16e05ed](https://github.com/llamastack/llama-stack-client-typescript/commit/16e05ed9798233375e19098992632d223c3f5d8d))

## 0.2.23-alpha.1 (2025-09-26)

Full Changelog: [v0.2.19-alpha.1...v0.2.23-alpha.1](https://github.com/llamastack/llama-stack-client-typescript/compare/v0.2.19-alpha.1...v0.2.23-alpha.1)
36 changes: 18 additions & 18 deletions README.md
@@ -41,13 +41,13 @@ import LlamaStackClient from 'llama-stack-client';

const client = new LlamaStackClient();

-const stream = await client.inference.chatCompletion({
+const stream = await client.chat.completions.create({
  messages: [{ content: 'string', role: 'user' }],
-  model_id: 'model_id',
+  model: 'model',
  stream: true,
});
-for await (const chatCompletionResponseStreamChunk of stream) {
-  console.log(chatCompletionResponseStreamChunk.completion_message);
+for await (const chatCompletionChunk of stream) {
+  console.log(chatCompletionChunk);
}
```

@@ -64,11 +64,11 @@ import LlamaStackClient from 'llama-stack-client';

const client = new LlamaStackClient();

-const params: LlamaStackClient.InferenceChatCompletionParams = {
+const params: LlamaStackClient.Chat.CompletionCreateParams = {
  messages: [{ content: 'string', role: 'user' }],
-  model_id: 'model_id',
+  model: 'model',
};
-const chatCompletionResponse: LlamaStackClient.ChatCompletionResponse = await client.inference.chatCompletion(
+const completion: LlamaStackClient.Chat.CompletionCreateResponse = await client.chat.completions.create(
  params,
);
```
@@ -113,8 +113,8 @@ a subclass of `APIError` will be thrown:

<!-- prettier-ignore -->
```ts
-const chatCompletionResponse = await client.inference
-  .chatCompletion({ messages: [{ content: 'string', role: 'user' }], model_id: 'model_id' })
+const completion = await client.chat.completions
+  .create({ messages: [{ content: 'string', role: 'user' }], model: 'model' })
.catch(async (err) => {
if (err instanceof LlamaStackClient.APIError) {
console.log(err.status); // 400
@@ -155,7 +155,7 @@ const client = new LlamaStackClient({
});

// Or, configure per-request:
-await client.inference.chatCompletion({ messages: [{ content: 'string', role: 'user' }], model_id: 'model_id' }, {
+await client.chat.completions.create({ messages: [{ content: 'string', role: 'user' }], model: 'model' }, {
maxRetries: 5,
});
```
@@ -172,7 +172,7 @@ const client = new LlamaStackClient({
});

// Override per-request:
-await client.inference.chatCompletion({ messages: [{ content: 'string', role: 'user' }], model_id: 'model_id' }, {
+await client.chat.completions.create({ messages: [{ content: 'string', role: 'user' }], model: 'model' }, {
timeout: 5 * 1000,
});
```
@@ -193,17 +193,17 @@ You can also use the `.withResponse()` method to get the raw `Response` along wi
```ts
const client = new LlamaStackClient();

-const response = await client.inference
-  .chatCompletion({ messages: [{ content: 'string', role: 'user' }], model_id: 'model_id' })
+const response = await client.chat.completions
+  .create({ messages: [{ content: 'string', role: 'user' }], model: 'model' })
.asResponse();
console.log(response.headers.get('X-My-Header'));
console.log(response.statusText); // access the underlying Response object

-const { data: chatCompletionResponse, response: raw } = await client.inference
-  .chatCompletion({ messages: [{ content: 'string', role: 'user' }], model_id: 'model_id' })
+const { data: completion, response: raw } = await client.chat.completions
+  .create({ messages: [{ content: 'string', role: 'user' }], model: 'model' })
.withResponse();
console.log(raw.headers.get('X-My-Header'));
-console.log(chatCompletionResponse.completion_message);
+console.log(completion);
```

### Making custom/undocumented requests
@@ -307,8 +307,8 @@ const client = new LlamaStackClient({
});

// Override per-request:
-await client.inference.chatCompletion(
-  { messages: [{ content: 'string', role: 'user' }], model_id: 'model_id' },
+await client.chat.completions.create(
+  { messages: [{ content: 'string', role: 'user' }], model: 'model' },
{
httpAgent: new http.Agent({ keepAlive: false }),
},