
Error while trying to merge two LoRAs: tensor mismatch #14

Open · zBilalz opened this issue Jul 27, 2024 · 3 comments

Comments

@zBilalz commented Jul 27, 2024

The error I received:

```
Error occurred when executing LoraMerger|cgem156:

The size of tensor a (64) must match the size of tensor b (32) at non-singleton dimension 1

  File "C:\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI\custom_nodes\cgem156-ComfyUI\scripts\lora_merger\merge.py", line 46, in lora_merge
    lora = self.merge(lora_1, lora_2, mode, rank, threshold, device, dtype)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\name\AppData\Local\Programs\Python\Python312\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI\custom_nodes\cgem156-ComfyUI\scripts\lora_merger\merge.py", line 98, in merge
    up = up_1 + up_2
```

[image attached]

@laksjdjf (Owner) commented

`add` mode can only be used between LoRAs of the same rank; try `svd` mode.
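For context, a minimal sketch of why the two modes behave differently. The 64/32 ranks come from the traceback above, while the 320-dimensional layer and the `svd_merge` helper are illustrative assumptions, not the node's actual implementation:

```python
import torch

# "add" mode sums the factor matrices directly, so both LoRAs must share a rank.
up_1, down_1 = torch.randn(320, 64), torch.randn(64, 320)   # rank-64 LoRA
up_2, down_2 = torch.randn(320, 32), torch.randn(32, 320)   # rank-32 LoRA
# up_1 + up_2  ->  RuntimeError: The size of tensor a (64) must match the size
#                  of tensor b (32) at non-singleton dimension 1

# An SVD-style merge instead sums the full weight deltas and re-factorizes them
# at a target rank, so mismatched ranks are not a problem (at the cost of speed).
def svd_merge(up_1, down_1, up_2, down_2, rank):
    delta = up_1 @ down_1 + up_2 @ down_2              # combined full-rank update
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    up = U[:, :rank] * S[:rank]                        # fold singular values into "up"
    down = Vh[:rank, :]
    return up, down

up, down = svd_merge(up_1, down_1, up_2, down_2, rank=64)
print(up.shape, down.shape)  # torch.Size([320, 64]) torch.Size([64, 320])
```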

@zBilalz (Author) commented Jul 27, 2024

> `add` mode can only be used between LoRAs of the same rank; try `svd` mode.

Working now, thanks

@ScrapWare commented

SVD isn't needed; it's very slow... I developed a faster, lightweight merge method with simple logic. Anyone can stack every up/down value using torch.add with a doubled alpha; this has almost no effect on quality. If there are too many dims, simply split the already-stacked dims; if there are too few, clone enough dims with torch.cat. A LoHa can also be converted to LoRA up/down: hada_w1_a * hada_w1_b gives the up, and similarly hada_w2_a * hada_w2_b the down; it can be implemented in a few lines of code (see the sketch below).
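A minimal sketch of the stacking idea and the LoHa collapse described above, assuming plain lora_up/lora_down matrices; `cat_merge` and `loha_to_delta` are illustrative names, not functions from this repository, and the LoHa line below uses the standard Hadamard-product definition of LoHa's weight delta:

```python
import torch

def cat_merge(up_1, down_1, up_2, down_2):
    # Stacking the factors along the rank dimension is exact:
    #   cat(up) @ cat(down) == up_1 @ down_1 + up_2 @ down_2,
    # so no SVD is needed. The merged rank is r1 + r2 and can be reduced
    # afterwards (split or otherwise compressed) if it grows too large.
    up = torch.cat([up_1, up_2], dim=1)        # [out, r1 + r2]
    down = torch.cat([down_1, down_2], dim=0)  # [r1 + r2, in]
    return up, down

def loha_to_delta(hada_w1_a, hada_w1_b, hada_w2_a, hada_w2_b):
    # LoHa's weight delta is the element-wise (Hadamard) product of two
    # low-rank products; collapsing it gives a full matrix that can then be
    # re-factorized into LoRA up/down weights if needed.
    return (hada_w1_a @ hada_w1_b) * (hada_w2_a @ hada_w2_b)

# Example: merging a rank-64 and a rank-32 LoRA on a 320x320 layer.
up_1, down_1 = torch.randn(320, 64), torch.randn(64, 320)
up_2, down_2 = torch.randn(320, 32), torch.randn(32, 320)
up, down = cat_merge(up_1, down_1, up_2, down_2)
print(up.shape, down.shape)  # torch.Size([320, 96]) torch.Size([96, 320])
```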

However, the unnecessary splitting code (essentially the same as the famous SVD) could be replaced with some form of quantization when there are many dims.

I haven't researched LoKr enough... For now I use SVD; I haven't tested LoKr yet, and I'm currently training LoKr with factor=1 to 8, as well as IA3, LoRA-FA, GLoRA, and full fine-tuning.
