remove convert-lora-to-ggml.py #7204
Conversation
2e33fda to c05d947
If I read the examples/finetune right, it states that only llama models are supported. What is the plan going forward? (#7225)
For those who are looking for the removed script: https://gist.github.com/jedt/87fad3f671589e09d3709e33d29817a4
What is the recommended way to merge a finetuned LoRA into the base model now?
You can merge the model using …
For those of us who don't know much about this: is there any documentation on what the procedure should be, now that the script has been removed?
Because of this: ggml-org/llama.cpp#7204 the Python script `convert-lora-to-ggml.py` was removed, so we clone the repo at a previous tag/commit. It would be nice to study the analysis that led to removing the script. Signed-off-by: Jose Gato <jgato@redhat.com>
I feel like this question needs to be answered: what does "using …" mean here?
I think it means that once you have finished training your model, you merge your base model with your adapter weights using the model.merge_and_unload() method in Hugging Face PEFT. This thread has a discussion on it. Hope this helps!
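Conceptually, `merge_and_unload()` folds each LoRA delta into the corresponding base weight so the adapter can be discarded. A minimal numpy sketch of that arithmetic (the shapes and scaling are toy values for illustration; real adapters apply an alpha/rank scaling per layer):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2                      # hidden size, LoRA rank (toy values)
W = rng.normal(size=(d, d))      # base weight
A = rng.normal(size=(r, d))      # LoRA down-projection
B = rng.normal(size=(d, r))      # LoRA up-projection
alpha = 4                        # LoRA scaling numerator
scaling = alpha / r

x = rng.normal(size=(d,))

# With the adapter attached, the layer computes base + scaled delta:
y_adapter = W @ x + scaling * (B @ (A @ x))

# Merging folds the delta into the weight once...
W_merged = W + scaling * (B @ A)
# ...after which the adapter matrices are no longer needed.
y_merged = W_merged @ x

assert np.allclose(y_adapter, y_merged)
```

This is why the merged model is a plain checkpoint with no LoRA-specific tensors left to convert.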
Thanks, @shreyasrajesh0308. I saw the thread, and what I understood is that these approaches work at the Python level, i.e. in code, whereas the removed script was more CI-friendly, if I'm explaining myself clearly.
Changes such as permutations applied to the tensors during model conversion make converting LoRAs from HF PEFT unreliable, so to avoid confusion I think it is better to remove this entirely until this feature is re-evaluated. It is still possible to use LoRAs created with the finetune example.
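To illustrate why a conversion-time permutation breaks a separately converted LoRA: if the converter reorders rows of a base weight but the adapter delta keeps its original layout, merging the two no longer reproduces the finetuned weights. A toy numpy sketch (the permutation and shapes are illustrative, not llama.cpp's actual tensor layout):

```python
import numpy as np

rng = np.random.default_rng(1)

d, r = 6, 2
W = rng.normal(size=(d, d))        # base weight in the original (HF) layout
B = rng.normal(size=(d, r))        # LoRA factors, also in the original layout
A = rng.normal(size=(r, d))
delta = B @ A

P = np.eye(d)[::-1]                # toy row permutation applied at conversion

# Correct result: permute the fully merged finetuned weight.
correct = P @ (W + delta)

# Naive result: only the base weight was permuted during conversion,
# while the LoRA delta kept its original row order.
naive = P @ W + delta

assert not np.allclose(correct, naive)   # the merge is silently wrong
```

The mismatch is (P - I) @ delta, so it vanishes only when the delta happens to be invariant under the permutation, which is why relying on it was considered unreliable.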