
Conversation

@otterDeveloper
Contributor

Adding new models available through Fireworks.

Pricing data taken from their model library.
Unsure about the max output tokens; reused the context limit.

Comment on lines +13 to +18
input = 0.56
output = 1.68
cache_read = 0.28

[limit]
context = 160_000
Contributor


isn't this an interleaved thinking model?

Comment on lines +9 to +18
knowledge = "2025-04"
open_weights = true

[cost]
input = 0.60
output = 2.20
cache_read = 0.30

[limit]
context = 198_000
Contributor


isn't this an interleaved thinking model?

Contributor Author


It is, but I don't know if it requires provider support. Does it "just work"? If so, kimi-k2-thinking is also missing interleaved thinking.

Contributor Author


Also, I based the definitions on the previous versions of the models, which don't document it either. Should I just add it?

@rekram1-node
Contributor

Yeah, it looks like for Fireworks it is reasoning_content. Do you mind adding it to the Fireworks models that don't have it too?

The only models I think that'd be:

  • minimax (m2 and m2.1; don't feel obligated to add if not there)
  • kimi k2 thinking
  • deepseek v3.2
  • glm 4.7

I think that's all of them for them

@otterDeveloper
Contributor Author

I added interleaved thinking.
I also increased the output token limit for kimi k2 thinking and minimax m2. The previous value of 16,384 seems to be just the maximum of their playground UI; in their API you can increase it further without request errors. But I'm not entirely certain what the actual supported limit is.
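For reference, a minimal sketch of what one of these model entries might look like with the flag added. The `interleaved_thinking` field name is an assumption on my part (this thread only confirms that Fireworks surfaces reasoning via `reasoning_content`, not how the schema spells the flag); the cost and limit numbers are copied from the diff above.

```toml
# Sketch only: "interleaved_thinking" is an assumed field name,
# not confirmed by this PR thread.
interleaved_thinking = true
knowledge = "2025-04"
open_weights = true

[cost]
input = 0.60
output = 2.20
cache_read = 0.30

[limit]
context = 198_000
```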

@rekram1-node rekram1-node merged commit b68935b into sst:dev Dec 24, 2025
1 check passed