Description
Is there an existing issue for this problem?
- I have searched the existing issues
Install method
Invoke's Launcher
Operating system
Windows
GPU vendor
Nvidia (CUDA)
GPU model
GTX 1060 Max-Q
GPU VRAM
6GB
Version number
latest v1.7.1
Browser
Edge
System Information
Started Invoke process with PID 28000
[2025-09-10 11:38:19,917]::[InvokeAI]::INFO --> Using torch device: NVIDIA GeForce GTX 1060 with Max-Q Design
[2025-09-10 11:38:21,004]::[InvokeAI]::INFO --> cuDNN version: 90701
[2025-09-10 11:38:23,652]::[InvokeAI]::INFO --> Patchmatch initialized
[2025-09-10 11:38:25,001]::[InvokeAI]::INFO --> InvokeAI version 6.6.0
[2025-09-10 11:38:25,002]::[InvokeAI]::INFO --> Root directory = C:\InvokeAI
[2025-09-10 11:38:25,006]::[InvokeAI]::INFO --> Initializing database at C:\InvokeAI\databases\invokeai.db
[2025-09-10 11:38:26,247]::[ModelManagerService]::INFO --> [MODEL CACHE] Calculated model RAM cache size: 4096.00 MB. Heuristics applied: [1, 2, 3].
[2025-09-10 11:38:26,426]::[InvokeAI]::INFO --> Invoke running on http://127.0.0.1:9090 (Press CTRL+C to quit)
[2025-09-10 11:38:47,970]::[InvokeAI]::INFO --> Executing queue item 6, session 9113aa97-9fbf-4adf-8919-c6852d6ae786
[2025-09-10 11:38:50,459]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'a80985c8-7e1a-4ce7-b398-229e757a0c93:text_encoder_2' (T5EncoderModel) onto cuda device in 1.78s. Total model size: 4922.14MB, VRAM: 1104.00MB (22.4%)
You set add_prefix_space. The tokenizer needs to be converted from the slow tokenizers
[2025-09-10 11:38:50,767]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'a80985c8-7e1a-4ce7-b398-229e757a0c93:tokenizer_2' (T5TokenizerFast) onto cuda device in 0.00s. Total model size: 0.03MB, VRAM: 0.00MB (0.0%)
C:\InvokeAI\.venv\Lib\site-packages\bitsandbytes\autograd\_functions.py:186: UserWarning: MatMul8bitLt: inputs will be cast from torch.bfloat16 to float16 during quantization
warnings.warn(f"MatMul8bitLt: inputs will be cast from {A.dtype} to float16 during quantization")
Error named symbol not found at line 448 in file D:\a\bitsandbytes\bitsandbytes\csrc\ops.cu
Invoke process exited with code 1
What happened
After installing Flux Kontext, the error shown in the log above occurs.
What you expected to happen
The image should generate.
How to reproduce the problem
No response
Additional context
No response
Discord username
No response