remove convert-lora-to-ggml.py #7204

Merged
2 commits merged into master from sl/rm-convert-lora on May 12, 2024
Conversation

slaren (Member) commented May 10, 2024

Changes such as permutations to the tensors during model conversion make converting LoRAs from HF PEFT unreliable, so to avoid confusion I think it is better to remove this entirely until the feature is re-evaluated. It is still possible to use LoRAs created with the finetune example.

slaren force-pushed the sl/rm-convert-lora branch from 2e33fda to c05d947 on May 10, 2024 22:01
mofosyne added the refactoring and Review Complexity: Low labels May 11, 2024
slaren merged commit b228aba into master May 12, 2024
60 checks passed
slaren deleted the sl/rm-convert-lora branch May 12, 2024 00:29
icfly2 commented May 16, 2024

If I read examples/finetune right, it states that only llama models are supported. And with export-lora I'm not having much luck either, with a Phi-3-based model that I could previously convert without issues using the removed .py script.

What is the plan going forward? #7225?

jedt commented May 19, 2024

For those who are looking for the removed script: https://gist.github.com/jedt/87fad3f671589e09d3709e33d29817a4

miaoshouai commented

What is the recommended way to merge a finetuned LoRA into the base model now?
Most of the online documents refer to using convert-lora-to-ggml, and now it is deprecated.

slaren (Member, Author) commented May 21, 2024

You can merge the model using merge_and_unload and then convert it to gguf with convert-hf-to-gguf.py.
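A minimal sketch of that workflow, assuming a PEFT LoRA adapter on disk (the base model ID and adapter path below are placeholders, not from this PR): merge the adapter into the base model with merge_and_unload, save the merged checkpoint, then run convert-hf-to-gguf.py on the result.

```python
# Sketch only: merge a PEFT LoRA adapter into its base model, then convert to GGUF.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "microsoft/Phi-3-mini-4k-instruct"  # placeholder base model
adapter_dir = "./my-lora-adapter"                   # placeholder adapter path

base = AutoModelForCausalLM.from_pretrained(base_model_id, torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_dir)

# Fold the LoRA weights into the base model and drop the adapter modules.
merged = model.merge_and_unload()

merged.save_pretrained("./merged-model")
AutoTokenizer.from_pretrained(base_model_id).save_pretrained("./merged-model")

# Then convert the merged checkpoint with the llama.cpp script, e.g.:
#   python convert-hf-to-gguf.py ./merged-model --outfile merged.gguf --outtype f16
```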

jgato commented Jun 20, 2024

> You can merge the model using merge_and_unload and then convert it to gguf with convert-hf-to-gguf.py.

For those of us who don't know very much about this... do you have any document on what the procedure should be now that the script has been removed?

jgato added a commit to jgato/instructlab that referenced this pull request Jun 21, 2024
Because of this:

ggml-org/llama.cpp#7204

The python script `convert-lora-to-ggml.py` was
removed. So, we clone the repo on a previous
Tag/Commit.

It would be nice to study the analysed proposed
to the script.

Signed-off-by: Jose Gato <jgato@redhat.com>
rascasoft commented Jun 25, 2024

> You can merge the model using merge_and_unload and then convert it to gguf with convert-hf-to-gguf.py.

> For those of us who don't know very much about this... do you have any document on what the procedure should be now that the script has been removed?

I feel like this question needs to be answered: what does "using merge_and_unload" mean for someone who has previously just used convert-lora-to-ggml.py and now can't do this anymore?

shreyasrajesh0308 commented

I think it means that once you have completed training your model, you merge your base model with your adapter weights using the model.merge_and_unload() command in HF. This thread has a discussion on it. Hope this helps!

rascasoft commented

Thanks, @shreyasrajesh0308. I saw the thread, and what I understood is that these things work at the Python level, i.e. as "code", whereas using the script was more "CI friendly", if I'm explaining myself.
Do you have any suggestions/examples of an entire workflow in Python usable inside a pipeline?

ltoniazzi mentioned this pull request Jul 6, 2024