Fixed CUDA-bound error where GPU-enabled run would result in incorrect model loading #1458
Conversation
Reviewer's Guide (Sourcery)

Ensures the full-precision face swap model is used for CUDA execution to eliminate garbled output by replacing the FP16 model reference with the correct ONNX model.

Sequence diagram for face swapper model loading with CUDA provider:

```mermaid
sequenceDiagram
    participant User
    participant FaceSwapper
    participant ModelZoo
    User->>FaceSwapper: get_face_swapper()
    FaceSwapper->>FaceSwapper: Check execution_providers
    FaceSwapper->>FaceSwapper: Set model_name = "inswapper_128.onnx"
    FaceSwapper->>ModelZoo: get_model(model_path, providers)
    ModelZoo-->>FaceSwapper: Return loaded model
    FaceSwapper-->>User: Return FACE_SWAPPER
```

Class diagram for updated face swapper model selection:

```mermaid
classDiagram
    class FaceSwapper
    FaceSwapper : +get_face_swapper()
    FaceSwapper : -FACE_SWAPPER
    FaceSwapper : -model_name
    FaceSwapper : -model_path
    FaceSwapper : -models_dir
    FaceSwapper : -execution_providers
    FaceSwapper : Loads "inswapper_128.onnx" for all providers
```
Hey there - I've reviewed your changes - here's some feedback:
- Since inswapper_128.onnx is now used for all providers, you can remove the CUDAExecutionProvider conditional and assign model_name just once.
- Consider removing or archiving the unused inswapper_128_fp16.onnx model file from the repo to keep the models directory clean.
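The suggested simplification could look like the sketch below. The helper name `resolve_model_path`, the `MODELS_DIR` constant, and the pre-fix conditional shown in the comments are assumptions for illustration, since the original diff is not reproduced in this thread:

```python
import os

MODELS_DIR = "models"  # assumed location of the downloaded model files


def resolve_model_path(execution_providers: list) -> str:
    # Before the fix (sketch): an FP16 variant was selected for CUDA runs,
    # which produced garbled faces:
    #   if "CUDAExecutionProvider" in execution_providers:
    #       model_name = "inswapper_128_fp16.onnx"
    # After the fix, the full-precision model is used for every provider,
    # so the conditional can be dropped and model_name assigned once:
    model_name = "inswapper_128.onnx"
    return os.path.join(MODELS_DIR, model_name)
```

With the conditional gone, `execution_providers` only matters when the ONNX session is created, not when the model file is chosen.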
Thank you for the fix!
Whoops, that was me committing to the wrong branch for my personal tinkering. I'll get that fixed now.
Branch updated from e36cf4c to 03576d5.
@Meehey Let me know if that didn't fix it and I can figure it out further, but it looks like I've reverted it back to just the one-line change. Thank you!
This should work. |
Fix CUDA Face Garbling Issue
Fixed face garbling issues that occurred when running Deep-Live-Cam with CUDA execution provider by ensuring the correct face swap model is loaded.
Changes Made
modules/processors/frame/face_swapper.py
Technical Details
The fix ensures the appropriate inswapper_128.onnx model is loaded. This prevents the face garbling that occurred due to model incompatibility with CUDA-enabled runs.

Testing
Impact
This change resolves the face garbling issue that users experienced when running with CUDA acceleration, while maintaining full compatibility with other execution providers.
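One way to keep that compatibility explicit is to always end the provider list with the CPU provider, since onnxruntime tries providers in order. The helper below is a hypothetical sketch, not code from this PR:

```python
def build_provider_list(prefer_cuda: bool) -> list:
    # onnxruntime attempts providers in list order; ending with the CPU
    # provider keeps model loading working on machines without CUDA.
    providers = ["CUDAExecutionProvider"] if prefer_cuda else []
    providers.append("CPUExecutionProvider")
    return providers
```

With this shape, a CUDA-less machine silently falls back to the CPU provider instead of failing to create the session.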
Summary by Sourcery

Bug Fixes:
- Use the full-precision inswapper_128.onnx model for CUDA execution to eliminate garbled face-swap output.