feat: improve engine caching and fix bugs #3932
Conversation
    """

    def _insert_engine_to_cache(
        hash_val: str, interpreter_result: TRTInterpreterResult
Nice, I like this
Is there a reason that the function needs to be in the interpret function's scope?
Not a specific reason; I just don't know where engine_cache would be used other than in interpret_module_to_result(). To keep it safe and self-contained, I picked the smallest scope. Are there any other cases that might use engine_cache?
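For illustration, here is a minimal pure-Python sketch of the "smallest scope" choice being discussed: the cache-insert helper is nested inside the interpret function so it can only be called while `engine_cache` is in scope. All names and the dict-based cache are hypothetical stand-ins, not the actual Torch-TensorRT implementation:

```python
from typing import Any, Dict, Optional


def interpret_module_to_result(
    module: Any, engine_cache: Optional[Dict[str, bytes]] = None
) -> bytes:
    """Hypothetical stand-in for the real interpret function."""

    def _insert_engine_to_cache(hash_val: str, serialized_engine: bytes) -> None:
        # Nested helper: it closes over engine_cache and cannot be called
        # from anywhere else, which keeps the caching logic self-contained.
        assert engine_cache is not None
        engine_cache[hash_val] = serialized_engine

    serialized_engine = b"fake-engine-bytes"  # stand-in for real TRT serialization
    if engine_cache is not None:
        _insert_engine_to_cache("deadbeef", serialized_engine)
    return serialized_engine


cache: Dict[str, bytes] = {}
result = interpret_module_to_result(object(), engine_cache=cache)
print("deadbeef" in cache)  # True: the nested helper populated the cache
```

The trade-off is exactly the one raised above: a module-level helper would be reusable elsewhere, while the nested version guarantees it is only used together with the interpret call.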
    logger.info(f"Engine was successfully inserted into cache for hash: {hash_val}")

    @needs_refit  # type: ignore[misc]
    def _pull_cached_engine(hash_val: str) -> Optional[SerializedInterpreterResult]:
👍
    )
    logger.info(f"Engine was successfully inserted into cache for hash: {hash_val}")

    @needs_refit  # type: ignore[misc]
Should the insert and extract both be needs_refit?
Also, shouldn't this gracefully pass through instead of raising the typical unimplemented error?
> Should the insert and extract both be needs refit?

insert doesn't seem to involve any refitting. It supports a scenario where users insert engines on machine A, which doesn't support refit, but pull engines on machine B, which does. Please correct me if I'm wrong.

> Also shouldn't this gracefully pass through vs the typically unimplemented error?

I'm not sure I understand the question correctly. The reason we need refit in pull is that we save a weight-stripped engine in this implementation, which must be refitted with the correct weights before it can be used.
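A self-contained sketch of the asymmetry described above, using pure-Python stand-ins (the `EngineCache` and `MockEngine` names are hypothetical, not the actual Torch-TensorRT classes): insert stores a weight-stripped engine and needs no refit capability, while pull must refit before the engine is usable, and, mirroring the `trt.SerializationFlag.INCLUDE_REFIT` behavior from the PR description, the refitted engine stays refittable:

```python
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class MockEngine:
    weights: Optional[str]  # None means the engine is weight-stripped
    refittable: bool = True


def refit(engine: MockEngine, weights: str) -> MockEngine:
    """Stand-in for refitting; with INCLUDE_REFIT the result stays refittable."""
    if not engine.refittable:
        raise RuntimeError("engine is not refittable")
    return MockEngine(weights=weights, refittable=True)


class EngineCache:
    def __init__(self) -> None:
        self._store: Dict[str, MockEngine] = {}

    def insert(self, hash_val: str, engine: MockEngine) -> None:
        # No refit capability needed: we only strip the weights and store,
        # which also works on a machine A that does not support refit.
        self._store[hash_val] = MockEngine(weights=None, refittable=engine.refittable)

    def pull(self, hash_val: str, weights: str) -> Optional[MockEngine]:
        # Refit capability IS needed: the cached engine is weight-stripped
        # and must be refitted with real weights before it can run.
        stripped = self._store.get(hash_val)
        if stripped is None:
            return None
        return refit(stripped, weights)


cache = EngineCache()
cache.insert("abc123", MockEngine(weights="original"))
pulled = cache.pull("abc123", weights="w1")
assert pulled is not None and pulled.weights == "w1"
# Because the refitted engine stays refittable, it can be refitted again:
again = refit(pulled, "w2")
print(again.weights)  # w2
```

This is why only `_pull_cached_engine` carries the refit requirement in the discussion above: insert on machine A never touches weights, while pull on machine B cannot return a usable engine without refitting.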
Description
As I requested, TensorRT 10.14 added a flag, `trt.SerializationFlag.INCLUDE_REFIT`, that allows refitted engines to remain refittable. That means engines can be refitted multiple times. Based on this capability, this PR enhances the existing engine caching and refitting features as follows:

- Weight stripping of cached engines is controlled by `compilation_settings.strip_engine_weights`. When users pull out the cached engine, it will be automatically refitted and kept refittable.
- Refitted engines can be refitted again via `refit_module_weights()`.
- The changes mainly live in `_conversion.py`.

Type of change
Checklist: