[Claimed #1844] Refactor model ID checks for GPT 5.x model family#1852
github-actions[bot] wants to merge 3 commits into main
Conversation
This mirrored PR tracks external contributor PR #1844 at the approved source SHA. When the external PR gets new commits, this same internal PR will be refreshed in place after the latest external commit is approved.
Greptile Summary

This PR generalises the GPT-5.x model ID check so that all `gpt-5.x` versioned models, not just `gpt-5.1` and `gpt-5.2`, are routed to the `low` reasoning effort.
Confidence Score: 4/5
Important Files Changed
Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[modelId] --> B{includes 'gpt-5'?}
    B -- No --> C[isGPT5 = false\nproviderOptions = undefined]
    B -- Yes --> D{includes 'codex'?}
    D -- Yes --> E[isCodex = true\nreasoningEffort = 'medium'\ntextVerbosity = 'medium']
    D -- No --> F{includes 'gpt-5.' with dot?}
    F -- Yes --> G["usesLowReasoningEffort = true\nreasoningEffort = 'low'\ntextVerbosity = 'low'"]
    F -- No --> H["usesLowReasoningEffort = false\nreasoningEffort = 'minimal' ⚠️\ntextVerbosity = 'low'"]
    H --> I["e.g. bare 'gpt-5' alias\n— still hits unsupported value"]
```
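The branching above can be sketched in TypeScript. This is a minimal sketch of the decision flow only; the `gpt5ProviderOptions` helper name and its return shape are illustrative, not the repo's actual API:

```typescript
// Illustrative sketch of the flowchart's branching (not the actual codebase API).
type Gpt5Options =
  | { reasoningEffort: "low" | "medium" | "minimal"; textVerbosity: "low" | "medium" }
  | undefined;

function gpt5ProviderOptions(modelId: string): Gpt5Options {
  // Non-GPT-5 models: no special provider options.
  if (!modelId.includes("gpt-5")) return undefined;

  // Codex variants get medium effort and verbosity.
  if (modelId.includes("codex")) {
    return { reasoningEffort: "medium", textVerbosity: "medium" };
  }

  // Versioned gpt-5.x models (note the trailing dot) use "low".
  // A bare "gpt-5" alias would still fall through to the
  // unsupported "minimal" value — the ⚠️ branch in the flowchart.
  const usesLowReasoningEffort = modelId.includes("gpt-5.");
  return usesLowReasoningEffort
    ? { reasoningEffort: "low", textVerbosity: "low" }
    : { reasoningEffort: "minimal", textVerbosity: "low" };
}
```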
Last reviewed commit: "formatting"
```diff
-    const usesLowReasoningEffort =
-      (this.model.modelId.includes("gpt-5.1") ||
-        this.model.modelId.includes("gpt-5.2")) &&
-      !isCodex;
+    const usesLowReasoningEffort =
+      this.model.modelId.includes("gpt-5.") && !isCodex;
     // Kimi models only support temperature=1
```
Unversioned `gpt-5` alias falls through to "minimal"
The "gpt-5." substring check (with trailing dot) correctly captures all gpt-5.x versioned models (e.g. gpt-5.1, gpt-5.4). However, if OpenAI publishes an unversioned gpt-5 alias (without a decimal, similar to how gpt-4 exists alongside gpt-4.x), it would match isGPT5 but not usesLowReasoningEffort, causing it to fall through to "minimal" reasoning effort — the exact error this PR is trying to prevent.
Since the PR description states "All GPT-5.x series models don't support minimal", consider widening the guard to also cover a bare gpt-5 model:
Suggested change:

```diff
 const usesLowReasoningEffort =
-  this.model.modelId.includes("gpt-5.") && !isCodex;
+  (this.model.modelId.includes("gpt-5.") ||
+    this.model.modelId === "gpt-5") &&
+  !isCodex;
 // Kimi models only support temperature=1
```
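The difference between the merged guard and the reviewer's widened version can be seen by evaluating both predicates over sample model IDs. This is a standalone sketch; the guard bodies mirror the diffs above, with `modelId` passed in directly rather than read from `this.model`:

```typescript
// Guard as merged in this PR: only versioned gpt-5.x IDs qualify.
const mergedGuard = (modelId: string, isCodex: boolean): boolean =>
  modelId.includes("gpt-5.") && !isCodex;

// Reviewer's widened guard: also covers a bare "gpt-5" alias,
// should OpenAI publish one (as "gpt-4" exists alongside "gpt-4.x").
const widenedGuard = (modelId: string, isCodex: boolean): boolean =>
  (modelId.includes("gpt-5.") || modelId === "gpt-5") && !isCodex;
```

Both guards agree on versioned IDs like `gpt-5.4` and on codex variants; they diverge only on the hypothetical bare `gpt-5` alias, where the merged guard would still fall through to the unsupported "minimal" effort.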
Mirrored from external contributor PR #1844 after approval by @miguelg719.
Original author: @praveentcom
Original PR: #1844
Approved source head SHA: a637dc329bfc5426bb71c8551c812191ed631527

@praveentcom, please continue any follow-up discussion on this mirrored PR. When the external PR gets new commits, this same internal PR will be marked stale until the latest external commit is approved and refreshed here.
Original description
All GPT-5.x series models don't support `minimal` as the `reasoningEffort`. Currently, it is enabled only for GPT-5.1 and GPT-5.2 models to set the `reasoningEffort` as `low`. This would start throwing errors like:

> Unsupported value: 'minimal' is not supported with the 'gpt-5.4' model. Supported values are: 'none', 'low', 'medium', 'high', and 'xhigh'.

This PR fixes the behavior to set the reasoning effort to `low` for all GPT-5.x series models so that we don't need to manually patch it every time a new SOTA model is released.
Summary by cubic
Set `reasoningEffort` to "low" for all GPT-5.x models by matching `gpt-5.` and excluding `codex`, preventing unsupported "minimal" errors on new releases like `gpt-5.4`. This removes the need for version-specific patches.

Written for commit 55b44dc. Summary will update on new commits.