
Commit d2a6a6e

Fix llama model o_proj lora_ids passing for finite lorax (#575)
This addresses the issue reported in #572. The finite lorax feature failed to execute when tested on a Llama adapter ([jumip/llama-lora-adapter](https://huggingface.co/jumip/llama-lora-adapter)) that includes o_proj as a target module. Work in progress: a test is still to be added to avoid future regression.

Signed-off-by: Jou-An Chen <[email protected]>
1 parent 247df36 commit d2a6a6e

File tree

1 file changed: +1 -1 lines changed

QEfficient/transformers/models/llama/modeling_llama.py

Lines changed: 1 addition & 1 deletion
@@ -174,7 +174,7 @@ def forward(
         )

         attn_output = attn_output.reshape(*input_shape, -1).contiguous()
-        attn_output = self.o_proj(attn_output)
+        attn_output = self.o_proj(attn_output, **kwargs)

         return attn_output, attn_weights, past_key_value
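For context, a minimal sketch of why forwarding **kwargs into o_proj matters for finite LoRA. This is an illustrative assumption, not the actual QEfficient module: it assumes a LoRA-aware linear layer that selects one of a fixed set of pre-loaded adapters via a `lora_ids` keyword argument, so a call site that drops the kwargs never reaches the adapter path. All names below (LoraLinear, lora_ids, num_adapters, rank) are hypothetical.

```python
import torch
import torch.nn as nn


class LoraLinear(nn.Module):
    """Illustrative finite-LoRA linear layer (hypothetical names, not the
    QEfficient implementation): holds a fixed set of adapters and picks one
    per call via a `lora_ids` keyword argument."""

    def __init__(self, in_features, out_features, num_adapters=4, rank=8):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        # One (A, B) low-rank pair per pre-loaded adapter; zero-initialized
        # here purely for the sketch.
        self.lora_a = nn.Parameter(torch.zeros(num_adapters, in_features, rank))
        self.lora_b = nn.Parameter(torch.zeros(num_adapters, rank, out_features))

    def forward(self, x, lora_ids=None, **kwargs):
        out = self.base(x)
        if lora_ids is not None:
            # Select the adapter pair for this request and apply the
            # low-rank update: out += (x @ A) @ B.
            a = self.lora_a[lora_ids]
            b = self.lora_b[lora_ids]
            out = out + (x @ a) @ b
        return out
```

Under this reading, the pre-fix call `self.o_proj(attn_output)` runs only the base projection and silently skips the adapter update (or fails, if the layer requires `lora_ids`), while the fixed call `self.o_proj(attn_output, **kwargs)` forwards `lora_ids` through the attention block, presumably the same way the other target modules already receive it.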