Make DEVICE_BATCH_SIZE and TOTAL_BATCH_SIZE configurable via env vars #402
Open
abhayapattanaik wants to merge 1 commit into karpathy:master from
Conversation
These values are currently hardcoded for H100 80GB GPUs. Cloud GPUs with
less VRAM (A10G 24GB, T4 16GB) OOM with the defaults.
This change allows overriding via environment variables while keeping the
same defaults, so existing behavior is unchanged:
DEVICE_BATCH_SIZE=32 TOTAL_BATCH_SIZE=131072 uv run train.py
Motivation: tools like autoresearch-anywhere that run autoresearch on
various cloud GPUs currently have to sed-patch these values before
training. Environment variables are cleaner and more composable.
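The description above implies a small change in train.py: keep the current values as defaults and let environment variables override them. A minimal sketch of that pattern, where the helper name and the H100 default values are placeholders and not taken from the actual diff:

```python
import os

def read_batch_config(env=os.environ):
    # Hardcoded H100 values become fallback defaults; env vars win when set.
    # The defaults below are illustrative placeholders, not the repo's real numbers.
    device_batch_size = int(env.get("DEVICE_BATCH_SIZE", 64))
    total_batch_size = int(env.get("TOTAL_BATCH_SIZE", 524288))
    return device_batch_size, total_batch_size
```

With no variables set, behavior is identical to the hardcoded version; setting them, as in the usage line above, overrides the defaults without patching the source.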
abhayapattanaik added a commit to abhayapattanaik/autoresearch-anycloud that referenced this pull request on Mar 24, 2026:
Link to karpathy/autoresearch#402 in README GPU compatibility note. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Agolid pushed a commit to Agolid/autoresearch that referenced this pull request on Mar 27, 2026:
…v vars - Allow override via environment variables - Defaults unchanged — existing behavior identical - Useful for cloud GPUs with less VRAM (A10G, T4, etc.) - Usage: DEVICE_BATCH_SIZE=32 TOTAL_BATCH_SIZE=131072 uv run train.py Fixes karpathy#402
Summary

- DEVICE_BATCH_SIZE and TOTAL_BATCH_SIZE in train.py can now be overridden via environment variables

Motivation

These values are hardcoded for H100 80GB GPUs. Cloud GPUs with less VRAM (A10G 24GB, T4 16GB) OOM with the defaults. Tools that run autoresearch on various cloud GPUs currently have to sed-patch these values before training. Environment variables are cleaner.

Test plan

- DEVICE_BATCH_SIZE=8: gradient accumulation steps changed from 2 to 4 as expected, and training completed successfully
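The change from 2 to 4 accumulation steps is what the usual relation between global and per-device batch size predicts: halving the per-device batch doubles the number of micro-batches accumulated per optimizer step. A sketch of that relation (the exact formula in train.py is an assumption; it may also fold in sequence length):

```python
def grad_accum_steps(total_batch_size, device_batch_size, world_size=1):
    # Each optimizer step accumulates gradients over enough per-device
    # micro-batches (across all ranks) to reach the global batch size.
    per_step = device_batch_size * world_size
    assert total_batch_size % per_step == 0, "total batch must divide evenly"
    return total_batch_size // per_step
```

For a fixed total batch size, shrinking DEVICE_BATCH_SIZE to fit a smaller GPU simply raises this count, so the effective optimization behavior is unchanged.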