server : host-memory prompt caching #16391
Open

ggerganov wants to merge 15 commits into master from gg/prompt-cache-ext
+758 −448
Conversation
I've been testing this with Claude Code and Codex and haven't spotted any issues. After a few more rounds of testing today, planning to merge.
target #16440
rel #16117
Initial version of automatic memory offloading to host memory, using extended logic for minimizing prompt reprocessing. The host-memory prompt cache acts as a set of "extra slots" with which we can calculate prefix similarity and decide to hot-swap them into the `llama_context` if it would reduce the processing. The cache is stored in regular RAM.

The RAM size that is used for caching prompts has 2 limits:

- one set explicitly with the `--cache-ram` CLI arg
- one determined by the context size (`--context-size`)

The server logs provide detailed prompt cache information each time the cache is updated.
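To make the hot-swap decision concrete, here is a minimal sketch of the prefix-similarity idea. All names here (`prompt_cache_entry`, `best_cache_hit`, etc.) are hypothetical illustrations, not the actual server code: a cached entry is only worth swapping in if its common token prefix with the incoming prompt is longer than what the active slot already shares with it.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

using llama_tokens = std::vector<int32_t>;

// A cached prompt held in host memory (hypothetical structure).
struct prompt_cache_entry {
    llama_tokens tokens;       // tokens of the cached prompt
    std::vector<uint8_t> data; // serialized state for these tokens
};

// Length of the common token prefix between two prompts.
static size_t common_prefix_len(const llama_tokens & a, const llama_tokens & b) {
    size_t n = 0;
    while (n < a.size() && n < b.size() && a[n] == b[n]) {
        n++;
    }
    return n;
}

// Pick the cached entry whose prefix overlap with the new prompt is the
// largest - and only if it beats what the active slot already has.
static const prompt_cache_entry * best_cache_hit(
        const std::vector<prompt_cache_entry> & cache,
        const llama_tokens & prompt,
        size_t n_active_prefix) {      // overlap of the currently loaded slot
    const prompt_cache_entry * best = nullptr;
    size_t best_len = n_active_prefix; // must improve on the current slot
    for (const auto & entry : cache) {
        const size_t len = common_prefix_len(entry.tokens, prompt);
        if (len > best_len) {
            best_len = len;
            best = &entry;
        }
    }
    return best; // nullptr -> keep the current slot state
}
```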
A small QoL improvement is that `update_slots()` now also logs the old and new prompt for each task around `n_past` (up to 10 tokens), so we can have a better understanding of what caused the particular choice of the `n_past` value for the new task.
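Roughly, the extra logging amounts to printing a small token window on each side of `n_past` for both prompts. This is only a sketch of the idea; the helper and log format below are made up, not the actual `update_slots()` code:

```cpp
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical helper: render a window of token ids as text for the log.
static std::string tokens_to_str(const std::vector<int32_t> & toks, size_t i0, size_t i1) {
    std::string res;
    for (size_t i = i0; i < i1 && i < toks.size(); i++) {
        res += std::to_string(toks[i]) + " ";
    }
    return res;
}

// Log up to ~10 tokens of the old and new prompt around n_past, so the
// reason for the chosen n_past value is visible in the server logs.
static void log_prompt_diff(const std::vector<int32_t> & old_prompt,
                            const std::vector<int32_t> & new_prompt,
                            size_t n_past) {
    const size_t i0 = n_past >= 5 ? n_past - 5 : 0;
    const size_t i1 = n_past + 5;
    fprintf(stderr, "old prompt around n_past = %zu: %s\n",
            n_past, tokens_to_str(old_prompt, i0, i1).c_str());
    fprintf(stderr, "new prompt around n_past = %zu: %s\n",
            n_past, tokens_to_str(new_prompt, i0, i1).c_str());
}
```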
Note: mtmd workarounds are starting to cause some headaches. For example, `server_tokens` is not copyable, which complicates the cache logic and makes the prompt caching feature incompatible with mtmd.
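The copyability problem can be illustrated with a toy move-only type (this is not the real `server_tokens` definition): a type with a deleted copy constructor cannot be duplicated into a host-memory cache while also remaining in the slot; it can only be moved out of one place and into the other.

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Illustration only: a move-only token container, e.g. because it owns
// resources that cannot be duplicated.
struct server_tokens_like {
    std::vector<int32_t> text_tokens;

    server_tokens_like() = default;
    server_tokens_like(const server_tokens_like &) = delete;            // not copyable
    server_tokens_like & operator=(const server_tokens_like &) = delete;
    server_tokens_like(server_tokens_like &&) = default;                // movable only
    server_tokens_like & operator=(server_tokens_like &&) = default;
};

int main() {
    server_tokens_like prompt;
    std::vector<server_tokens_like> cache;

    // cache.push_back(prompt);         // error: copy constructor is deleted
    cache.push_back(std::move(prompt)); // OK, but the slot loses its copy, so
                                        // the prompt cannot live in both the
                                        // slot and the host-memory cache
    return 0;
}
```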
Server refactor

- replaced several `server_slot` members (e.g. `server_slot.n_predict`) with a single `server_task` member
- `slot.task` is now a `const` ptr, to reflect that the task parameters should not change when the task is passed to the slot (a sketch of this pattern follows below)
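A minimal sketch of the resulting ownership pattern; the field layout and the use of `shared_ptr` are assumptions for illustration, not the actual diff:

```cpp
#include <memory>
#include <string>

// Hypothetical shapes, for illustration only.
struct server_task {
    int         id        = -1;
    int         n_predict = -1; // previously duplicated as a slot member
    std::string prompt;
};

struct server_slot {
    // One const pointer to the task replaces the copied-out fields; const
    // documents that the slot must not mutate the task parameters.
    std::shared_ptr<const server_task> task;

    int n_predict() const { return task ? task->n_predict : -1; }
};
```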
TODOs