Replies: 2 comments 2 replies
-
Hi @Alx-nder, did you just fit the model on the GPU? If so, there is no need for that. Skip the saving and just fit and run inference on the CPU device; you would not get any performance penalty.
-
What do you mean by training? If you just fit the model, having a GPU or not is irrelevant; it would simply create the preprocessing pipeline and preprocess the training data (depending on the …
-
I am having difficulty deploying the model on a CPU-only device. I trained on a GPU-enabled device, then saved the models with joblib. When I try to load and run them on the CPU-only device, I get the error message: "AssertionError: Torch not compiled with CUDA enabled". Is there something I could do to fix this?
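A common cause of this error is unpickling CUDA tensors on a machine without CUDA. A minimal sketch of the usual workaround, assuming the PyTorch parts of the model are accessible (the tiny `Linear` model here is just a placeholder, not the actual model from the question): move everything to CPU before saving, and pass `map_location="cpu"` when loading with `torch.load`.

```python
import torch

# Placeholder model standing in for the real one; the point is the
# save/load pattern, not the architecture.
model = torch.nn.Linear(4, 2)

# On the GPU machine: move parameters to CPU before saving, so the
# checkpoint contains no CUDA storages.
model.to("cpu")
torch.save(model.state_dict(), "model_cpu.pt")

# On the CPU-only machine: map_location remaps any CUDA storages
# to CPU at load time.
state = torch.load("model_cpu.pt", map_location="cpu")
model.load_state_dict(state)
```

Note that joblib has no `map_location` equivalent, since it pickles tensors as-is; if the checkpoint was written with joblib on the GPU machine, the simplest fix is to go back to that machine, call `.to("cpu")` on any PyTorch components, and re-save.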