Merged
4 changes: 2 additions & 2 deletions build.gradle
@@ -46,7 +46,7 @@ java {

group = 'com.cohere'

-version = '1.8.1'
+version = '1.8.0'

jar {
dependsOn(":generatePomFileForMavenPublication")
@@ -77,7 +77,7 @@ publishing {
maven(MavenPublication) {
groupId = 'com.cohere'
artifactId = 'cohere-java'
-version = '1.8.1'
+version = '1.8.0'
from components.java
pom {
name = 'cohere'
113 changes: 52 additions & 61 deletions reference.md
@@ -57,19 +57,6 @@ client.chatStream(
<dl>
<dd>

-**rawPrompting:** `Optional<Boolean>`

-When enabled, the user's prompt will be sent to the model without
-any pre-processing.

-Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments

-</dd>
-</dl>

-<dl>
-<dd>

**message:** `String`

Text input for the model to respond to.
@@ -371,6 +358,19 @@ Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private D
<dl>
<dd>

+**rawPrompting:** `Optional<Boolean>`

+When enabled, the user's prompt will be sent to the model without
+any pre-processing.

+Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments

+</dd>
+</dl>

+<dl>
+<dd>

**tools:** `Optional<List<Tool>>`

A list of available tools (functions) that the model may suggest invoking before producing a text response.
@@ -541,19 +541,6 @@ client.chatStream(
<dl>
<dd>

-**rawPrompting:** `Optional<Boolean>`

-When enabled, the user's prompt will be sent to the model without
-any pre-processing.

-Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments

-</dd>
-</dl>

-<dl>
-<dd>

**message:** `String`

Text input for the model to respond to.
@@ -855,6 +842,19 @@ Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private D
<dl>
<dd>

+**rawPrompting:** `Optional<Boolean>`

+When enabled, the user's prompt will be sent to the model without
+any pre-processing.

+Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments

+</dd>
+</dl>

+<dl>
+<dd>

**tools:** `Optional<List<Tool>>`

A list of available tools (functions) that the model may suggest invoking before producing a text response.
@@ -2291,19 +2291,6 @@ When set to `true`, tool calls in the Assistant message will be forced to follow
<dl>
<dd>

-**rawPrompting:** `Optional<Boolean>`

-When enabled, the user's prompt will be sent to the model without
-any pre-processing.

-Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments

-</dd>
-</dl>

-<dl>
-<dd>

**responseFormat:** `Optional<ResponseFormatV2>`

</dd>
@@ -2331,9 +2318,11 @@ Safety modes are not yet configurable in combination with `tools` and `documents

**maxTokens:** `Optional<Integer>`

-The maximum number of tokens the model will generate as part of the response.
+The maximum number of output tokens the model will generate in the response. If not set, `max_tokens` defaults to the model's maximum output token limit. You can find the maximum output token limits for each model in the [model documentation](https://docs.cohere.com/docs/models).

+**Note**: Setting a low value may result in incomplete generations. In such cases, the `finish_reason` field in the response will be set to `"MAX_TOKENS"`.

-**Note**: Setting a low value may result in incomplete generations.
+**Note**: If `max_tokens` is set higher than the model's maximum output token limit, the generation will be capped at that model-specific maximum limit.

</dd>
</dl>
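The capping rule in the updated `maxTokens` description can be expressed directly. A minimal sketch, assuming the model's maximum output token limit is known to the caller; the class and method names here are illustrative, not SDK API:

```java
import java.util.OptionalInt;

public class MaxTokensDemo {
    // Effective output budget per the documented rules: an unset max_tokens
    // defaults to the model's limit; values above the limit are capped to it.
    static int effectiveMaxTokens(OptionalInt requested, int modelMaxOutputTokens) {
        return Math.min(requested.orElse(modelMaxOutputTokens), modelMaxOutputTokens);
    }

    public static void main(String[] args) {
        int modelLimit = 4096; // hypothetical model-specific limit
        System.out.println(effectiveMaxTokens(OptionalInt.empty(), modelLimit));       // 4096
        System.out.println(effectiveMaxTokens(OptionalInt.of(100), modelLimit));       // 100
        System.out.println(effectiveMaxTokens(OptionalInt.of(1_000_000), modelLimit)); // 4096
    }
}
```

If the effective budget is reached before the model finishes, the response's `finish_reason` is `"MAX_TOKENS"`.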
@@ -2435,8 +2424,14 @@ When `NONE` is specified, the model will be forced **not** to use one of the spe
If tool_choice isn't specified, then the model is free to choose whether to use the specified tools or not.

**Note**: This parameter is only compatible with models [Command-r7b](https://docs.cohere.com/v2/docs/command-r7b) and newer.

+</dd>
+</dl>

+<dl>
+<dd>

-**Note**: The same functionality can be achieved in `/v1/chat` using the `force_single_step` parameter. If `force_single_step=true`, this is equivalent to specifying `REQUIRED`. While if `force_single_step=true` and `tool_results` are passed, this is equivalent to specifying `NONE`.
+**thinking:** `Optional<Thinking>`

</dd>
</dl>
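The `toolChoice` semantics described above (`REQUIRED`, `NONE`, or unspecified) amount to a tri-state decision. A sketch with illustrative names, not SDK API:

```java
import java.util.Optional;

public class ToolChoiceDemo {
    enum ToolChoice { REQUIRED, NONE }

    // Unspecified -> the model decides; REQUIRED -> it must call a tool;
    // NONE -> it must answer directly without calling a tool.
    static String toolUsePolicy(Optional<ToolChoice> choice) {
        if (choice.isEmpty()) {
            return "model decides";
        }
        return choice.get() == ToolChoice.REQUIRED ? "must call a tool" : "must not call a tool";
    }

    public static void main(String[] args) {
        System.out.println(toolUsePolicy(Optional.empty()));                 // model decides
        System.out.println(toolUsePolicy(Optional.of(ToolChoice.REQUIRED))); // must call a tool
        System.out.println(toolUsePolicy(Optional.of(ToolChoice.NONE)));     // must not call a tool
    }
}
```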
@@ -2582,19 +2577,6 @@ When set to `true`, tool calls in the Assistant message will be forced to follow
<dl>
<dd>

-**rawPrompting:** `Optional<Boolean>`

-When enabled, the user's prompt will be sent to the model without
-any pre-processing.

-Compatible Deployments: Cohere Platform, Azure, AWS Sagemaker/Bedrock, Private Deployments

-</dd>
-</dl>

-<dl>
-<dd>

**responseFormat:** `Optional<ResponseFormatV2>`

</dd>
@@ -2622,9 +2604,11 @@ Safety modes are not yet configurable in combination with `tools` and `documents

**maxTokens:** `Optional<Integer>`

-The maximum number of tokens the model will generate as part of the response.
+The maximum number of output tokens the model will generate in the response. If not set, `max_tokens` defaults to the model's maximum output token limit. You can find the maximum output token limits for each model in the [model documentation](https://docs.cohere.com/docs/models).

+**Note**: Setting a low value may result in incomplete generations. In such cases, the `finish_reason` field in the response will be set to `"MAX_TOKENS"`.

-**Note**: Setting a low value may result in incomplete generations.
+**Note**: If `max_tokens` is set higher than the model's maximum output token limit, the generation will be capped at that model-specific maximum limit.

</dd>
</dl>
@@ -2726,8 +2710,14 @@ When `NONE` is specified, the model will be forced **not** to use one of the spe
If tool_choice isn't specified, then the model is free to choose whether to use the specified tools or not.

**Note**: This parameter is only compatible with models [Command-r7b](https://docs.cohere.com/v2/docs/command-r7b) and newer.

+</dd>
+</dl>

-**Note**: The same functionality can be achieved in `/v1/chat` using the `force_single_step` parameter. If `force_single_step=true`, this is equivalent to specifying `REQUIRED`. While if `force_single_step=true` and `tool_results` are passed, this is equivalent to specifying `NONE`.
+<dl>
+<dd>

+**thinking:** `Optional<Thinking>`

</dd>
</dl>
@@ -2875,6 +2865,7 @@ Specifies the types of embeddings you want to get back. Can be one or more of th
* `"uint8"`: Use this when you want to get back unsigned int8 embeddings. Supported with Embed v3.0 and newer Embed models.
* `"binary"`: Use this when you want to get back signed binary embeddings. Supported with Embed v3.0 and newer Embed models.
* `"ubinary"`: Use this when you want to get back unsigned binary embeddings. Supported with Embed v3.0 and newer Embed models.
+* `"base64"`: Use this when you want to get back base64 embeddings. Supported with Embed v3.0 and newer Embed models.

</dd>
</dl>
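Since `embedding_types` accepts one or more values from a fixed set, a client-side validity check can be sketched as follows. The helper is illustrative, not SDK API, and `"float"` and `"int8"` are assumed to complete the documented set (only `"uint8"`, `"binary"`, `"ubinary"`, and `"base64"` are visible in this hunk):

```java
import java.util.List;
import java.util.Set;

public class EmbeddingTypesDemo {
    // Documented values; "float" and "int8" are assumed from the wider list.
    static final Set<String> SUPPORTED =
            Set.of("float", "int8", "uint8", "binary", "ubinary", "base64");

    static boolean isValidRequest(List<String> embeddingTypes) {
        // One or more types may be requested; each must come from the set.
        return !embeddingTypes.isEmpty() && SUPPORTED.containsAll(embeddingTypes);
    }

    public static void main(String[] args) {
        System.out.println(isValidRequest(List.of("base64", "int8"))); // true
        System.out.println(isValidRequest(List.of("float16")));        // false
    }
}
```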
@@ -4365,17 +4356,17 @@ Creates a new fine-tuned model. The model will be trained on the dataset specifi
client.finetuning().createFinetunedModel(
FinetunedModel
.builder()
-.name("api-test")
+.name("name")
.settings(
Settings
.builder()
.baseModel(
BaseModel
.builder()
-.baseType(BaseType.BASE_TYPE_CHAT)
+.baseType(BaseType.BASE_TYPE_UNSPECIFIED)
.build()
)
-.datasetId("my-dataset-id")
+.datasetId("dataset_id")
.build()
)
.build()
12 changes: 6 additions & 6 deletions src/main/java/com/cohere/api/AsyncRawCohere.java
@@ -87,9 +87,6 @@ public CompletableFuture<CohereHttpResponse<Iterable<StreamedChatResponse>>> cha
.addPathSegments("v1/chat")
.build();
Map<String, Object> properties = new HashMap<>();
-if (request.getRawPrompting().isPresent()) {
-    properties.put("raw_prompting", request.getRawPrompting());
-}
properties.put("message", request.getMessage());
if (request.getModel().isPresent()) {
properties.put("model", request.getModel());
@@ -146,6 +143,9 @@ public CompletableFuture<CohereHttpResponse<Iterable<StreamedChatResponse>>> cha
if (request.getPresencePenalty().isPresent()) {
properties.put("presence_penalty", request.getPresencePenalty());
}
+if (request.getRawPrompting().isPresent()) {
+    properties.put("raw_prompting", request.getRawPrompting());
+}
if (request.getTools().isPresent()) {
properties.put("tools", request.getTools());
}
@@ -299,9 +299,6 @@ public CompletableFuture<CohereHttpResponse<NonStreamedChatResponse>> chat(
.addPathSegments("v1/chat")
.build();
Map<String, Object> properties = new HashMap<>();
-if (request.getRawPrompting().isPresent()) {
-    properties.put("raw_prompting", request.getRawPrompting());
-}
properties.put("message", request.getMessage());
if (request.getModel().isPresent()) {
properties.put("model", request.getModel());
@@ -358,6 +355,9 @@ public CompletableFuture<CohereHttpResponse<NonStreamedChatResponse>> chat(
if (request.getPresencePenalty().isPresent()) {
properties.put("presence_penalty", request.getPresencePenalty());
}
+if (request.getRawPrompting().isPresent()) {
+    properties.put("raw_prompting", request.getRawPrompting());
+}
if (request.getTools().isPresent()) {
properties.put("tools", request.getTools());
}
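These hunks move the `raw_prompting` serialization without changing its behavior: an optional field is written into the request body only when it is present. A minimal stand-alone sketch of that pattern using plain `java.util` types (the class and method names here are illustrative, not SDK API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class OptionalBodyDemo {
    // Build a chat request body: required fields always, optional fields
    // only when present -- mirroring the conditional puts in RawCohere.
    static Map<String, Object> body(String message, Optional<Boolean> rawPrompting) {
        Map<String, Object> properties = new HashMap<>();
        properties.put("message", message); // required, always serialized
        rawPrompting.ifPresent(v -> properties.put("raw_prompting", v));
        return properties;
    }

    public static void main(String[] args) {
        System.out.println(body("Hello", Optional.empty())); // {message=Hello}
        System.out.println(body("Hello", Optional.of(true)));
    }
}
```

Because unset optionals are skipped entirely, the serialized JSON never contains a `raw_prompting: null` entry; reordering the puts, as this PR does, leaves the resulting body identical.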
12 changes: 6 additions & 6 deletions src/main/java/com/cohere/api/RawCohere.java
@@ -83,9 +83,6 @@ public CohereHttpResponse<Iterable<StreamedChatResponse>> chatStream(
.addPathSegments("v1/chat")
.build();
Map<String, Object> properties = new HashMap<>();
-if (request.getRawPrompting().isPresent()) {
-    properties.put("raw_prompting", request.getRawPrompting());
-}
properties.put("message", request.getMessage());
if (request.getModel().isPresent()) {
properties.put("model", request.getModel());
@@ -142,6 +139,9 @@ public CohereHttpResponse<Iterable<StreamedChatResponse>> chatStream(
if (request.getPresencePenalty().isPresent()) {
properties.put("presence_penalty", request.getPresencePenalty());
}
+if (request.getRawPrompting().isPresent()) {
+    properties.put("raw_prompting", request.getRawPrompting());
+}
if (request.getTools().isPresent()) {
properties.put("tools", request.getTools());
}
@@ -256,9 +256,6 @@ public CohereHttpResponse<NonStreamedChatResponse> chat(ChatRequest request, Req
.addPathSegments("v1/chat")
.build();
Map<String, Object> properties = new HashMap<>();
-if (request.getRawPrompting().isPresent()) {
-    properties.put("raw_prompting", request.getRawPrompting());
-}
properties.put("message", request.getMessage());
if (request.getModel().isPresent()) {
properties.put("model", request.getModel());
@@ -315,6 +312,9 @@ public CohereHttpResponse<NonStreamedChatResponse> chat(ChatRequest request, Req
if (request.getPresencePenalty().isPresent()) {
properties.put("presence_penalty", request.getPresencePenalty());
}
+if (request.getRawPrompting().isPresent()) {
+    properties.put("raw_prompting", request.getRawPrompting());
+}
if (request.getTools().isPresent()) {
properties.put("tools", request.getTools());
}
4 changes: 2 additions & 2 deletions src/main/java/com/cohere/api/core/ClientOptions.java
@@ -32,10 +32,10 @@ private ClientOptions(
this.headers.putAll(headers);
this.headers.putAll(new HashMap<String, String>() {
{
-put("User-Agent", "com.cohere:cohere-java/1.8.1");
+put("User-Agent", "com.cohere:cohere-java/1.8.0");
put("X-Fern-Language", "JAVA");
put("X-Fern-SDK-Name", "com.cohere.fern:api-sdk");
-put("X-Fern-SDK-Version", "1.8.1");
+put("X-Fern-SDK-Version", "1.8.0");
}
});
this.headerSuppliers = headerSuppliers;