TensorRT-LLM 0.8.0 Release #1193
kaiyux announced in Announcements
Hi,
We are very pleased to announce the 0.8.0 version of TensorRT-LLM. It has been an intense effort, and we hope that it will enable you to easily deploy GPU-based inference for state-of-the-art LLMs. We want TensorRT-LLM to help you run those LLMs very fast.
This update includes:

- Medusa decoding support (note that the `temperature` parameter of the sampling configuration should be 0)
- Support for combining `repetition_penalty` and `presence_penalty` (#274)
- Support for `frequency_penalty` (#275)
- Added `masked_select` and `cumsum` functions for modeling
- Deprecated the `LayerNorm` and `RMSNorm` plugins and removed the corresponding build parameters
- Removed the optional `maxNumSequences` parameter for the GPT manager
- Fixed the abnormal first token when `--gather_all_token_logits` is enabled (#639)
- Fixed the `gptManagerBenchmark` launch failure (#649)
- Fixed the `InferenceRequest` pybind issue in the GptManager 2/4-TP run demo (#701)
- Increased the default `freeGpuMemoryFraction` parameter from 0.85 to 0.9 for higher throughput
- Disabled the `enable_trt_overlap` argument for the GPT manager by default
- Added the `docs/source/new_workflow.md` documentation

Currently, there are two key branches in the project: the main (development) branch and the stable branch.
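To make the penalty parameters above concrete, here is a small illustrative sketch of how `repetition_penalty`, `presence_penalty`, and `frequency_penalty` are commonly applied to logits. This is not TensorRT-LLM's actual kernel code; `apply_penalties` is a hypothetical helper that follows the widely published definitions (CTRL-style repetition penalty, OpenAI-style presence and frequency penalties):

```python
from collections import Counter

def apply_penalties(logits, generated_ids,
                    repetition_penalty=1.0,
                    presence_penalty=0.0,
                    frequency_penalty=0.0):
    """Return a new logits list with the three penalties applied.

    Illustrative only: uses the common definitions (CTRL-style
    repetition penalty; OpenAI-style presence/frequency penalties),
    not TensorRT-LLM's on-GPU implementation.
    """
    counts = Counter(generated_ids)  # how often each token was generated
    out = list(logits)
    for tok, n in counts.items():
        logit = out[tok]
        # Repetition penalty: divide positive logits, multiply negative
        # ones, so seen tokens become less likely for any penalty > 1.
        if repetition_penalty != 1.0:
            logit = (logit / repetition_penalty if logit > 0
                     else logit * repetition_penalty)
        # Presence penalty: flat subtraction for any token already present.
        logit -= presence_penalty
        # Frequency penalty: subtraction scaled by the occurrence count.
        logit -= frequency_penalty * n
        out[tok] = logit
    return out

# Example: token 0 was generated twice, token 2 once.
penalized = apply_penalties([2.0, -1.0, 0.5], [0, 0, 2],
                            repetition_penalty=2.0,
                            presence_penalty=0.1,
                            frequency_penalty=0.2)
```

The key point the release exposes is that the two subtraction-based penalties can now be combined with the multiplicative repetition penalty, so all three act on the same logits before sampling.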
We are updating the main branch regularly with new features, bug fixes and performance optimizations. The stable branch will be updated less frequently, and the exact frequencies depend on your feedback.
Thanks,
The TensorRT-LLM Engineering Team
This discussion was created from the release TensorRT-LLM 0.8.0 Release.