update scout example #2310
Conversation
Signed-off-by: Mengni Wang <[email protected]>
PR Reviewer Guide 🔍
Here are some key observations to aid the review process:
PR Code Suggestions ✨
Explore these optional code suggestions:
Signed-off-by: Mengni Wang <[email protected]>
@chensuyue please check the updated example results
Resolved review comments (outdated):
- examples/pytorch/multimodal-modeling/quantization/auto_round/llama4/requirements.txt
- examples/pytorch/multimodal-modeling/quantization/auto_round/llama4/run_quant.sh
- examples/pytorch/multimodal-modeling/quantization/auto_round/llama4/requirements.txt
Signed-off-by: Mengni Wang <[email protected]>
User description
Type of Change
update example
Description
detail description
Expected Behavior & Potential Risk
the expected behavior triggered by this PR
How has this PR been tested?
how to reproduce the test (including hardware information)
Dependency Change?
any library dependency introduced or removed
PR Type
Enhancement
Description
- Added new main.py script for Llama4 quantization
- Updated run_quant.sh to use main.py
- Added neural-compressor dependency

Diagram Walkthrough
File Walkthrough
main.py: Add Llama4 Quantization Script
examples/pytorch/multimodal-modeling/quantization/auto_round/llama4/main.py
(quantization flow sketched below)

run_quant.sh: Update run_quant.sh to Use main.py
examples/pytorch/multimodal-modeling/quantization/auto_round/llama4/run_quant.sh
- Call main.py instead of auto_round (CLI invocation sketched below)

requirements.txt: Add neural-compressor Dependency
examples/pytorch/multimodal-modeling/quantization/auto_round/llama4/requirements.txt
- Add neural-compressor dependency
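For reviewers who want the gist of the new entry point, here is a minimal sketch of an AutoRound weight-only quantization flow for Llama 4 Scout. This is not the main.py added by this PR: the model ID, bit width, group size, and save format are illustrative assumptions, and the actual script may load the multimodal checkpoint differently.

```python
# Illustrative sketch only; not the PR's actual main.py.
# Model ID, quantization settings, and output path are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "meta-llama/Llama-4-Scout-17B-16E-Instruct"  # assumed model id
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# 4-bit weight-only quantization with AutoRound (group size and symmetry assumed).
autoround = AutoRound(model, tokenizer, bits=4, group_size=128, sym=True)
autoround.quantize()

# Save the quantized checkpoint for later inference.
autoround.save_quantized("./Llama-4-Scout-int4", format="auto_round")
```

Per the walkthrough above, the relevant change is that this logic now lives in a script that run_quant.sh invokes, rather than in a direct call to the auto_round command-line tool.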
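Since run_quant.sh now calls main.py, the script presumably exposes a small command-line surface for the wrapper to drive. The flag names below are assumptions for illustration, not the PR's actual interface.

```python
# Hypothetical CLI surface for a quantization driver; flag names are illustrative only.
# A wrapper such as run_quant.sh might invoke it roughly as:
#   python main.py --model <model_id_or_path> --output_dir <dir> --bits 4
import argparse

def parse_args():
    parser = argparse.ArgumentParser(description="AutoRound quantization driver (sketch)")
    parser.add_argument("--model", required=True, help="Hugging Face model id or local path")
    parser.add_argument("--output_dir", default="./quantized_model", help="Where to write the quantized model")
    parser.add_argument("--bits", type=int, default=4, help="Weight bit width")
    parser.add_argument("--group_size", type=int, default=128, help="Quantization group size")
    return parser.parse_args()

if __name__ == "__main__":
    args = parse_args()
    print(f"Quantizing {args.model} to {args.bits}-bit; output -> {args.output_dir}")
```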