this post was submitted on 04 Oct 2023

Free Open-Source Artificial Intelligence


Hey all, I am in the process of testing several models for fine-tuning, and this question cropped up.

I would like to add new facts to a foundation model and then instruction-tune it. The problem is that I will regularly have new data to add. I was wondering if there is a chance that I could train a single LoRA for the instruction tuning and reapply it each time I finish a new fine-tune?
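In weight terms, the idea above can be sketched with plain numpy. This is a minimal sketch, not a real training setup: the hidden size `d`, rank `r`, and all weight matrices are hypothetical, and it assumes the base architecture (and thus the LoRA's target shapes) is unchanged between fine-tunes. A LoRA adapter is just a low-rank delta `B @ A`, so "reapplying" it means adding the same delta on top of whichever base weights you currently have:

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 16, 4  # hypothetical hidden size and LoRA rank

# Base weights after the first facts fine-tune, and after a later one
# (new facts merged into the base).
W_v1 = rng.normal(size=(d, d))
W_v2 = W_v1 + 0.01 * rng.normal(size=(d, d))

# A single instruction-tuning LoRA: a low-rank update delta = B @ A.
A = rng.normal(size=(r, d))
B = rng.normal(size=(d, r))
delta = B @ A

# "Reapplying" the LoRA is just adding the same delta to the current base.
W_v1_instruct = W_v1 + delta
W_v2_instruct = W_v2 + delta

# The adapter itself is unchanged; only the base it sits on differs.
print(np.allclose(W_v2_instruct - W_v2, W_v1_instruct - W_v1))
```

The arithmetic goes through trivially; the open question is empirical, i.e. whether an instruction LoRA trained against one base still behaves well when the base weights underneath it have shifted.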

[–] Turun@feddit.de 1 point 1 year ago (1 children)

At least in Stable Diffusion, LoRAs are composable. You can combine different LoRAs and have both effects applied to the resulting image.

[–] keepthepace@slrpnk.net 1 point 1 year ago

Yes, but my understanding is that they are commutative (i.e. the order does not matter)? If so, it looks like a "facts-adding" LoRA seems to induce forgetting of formatting.

And I am especially curious whether a facts-LoRA plus an instruction-LoRA results in a model that can use the new facts in the instructions or not. I'll run experiments, but I would have loved it if people here already knew.
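On the commutativity point: if the adapters are merged additively into the base weights (as in the usual LoRA merge), the order genuinely does not matter, because matrix addition commutes. A toy numpy check (shapes and matrices hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 16, 4  # hypothetical hidden size and LoRA rank

W = rng.normal(size=(d, d))  # base weights

# Two independent low-rank updates, e.g. a facts-LoRA and an instruction-LoRA.
delta_facts = rng.normal(size=(d, r)) @ rng.normal(size=(r, d))
delta_instr = rng.normal(size=(d, r)) @ rng.normal(size=(r, d))

# Additive merging is commutative: applying them in either order
# yields the same merged weights.
merged_ab = (W + delta_facts) + delta_instr
merged_ba = (W + delta_instr) + delta_facts
print(np.allclose(merged_ab, merged_ba))
```

Note this only says the merged weights are identical either way; whether the combined model can actually *use* the new facts under instructions is exactly the empirical question the arithmetic cannot answer. It also does not cover sequential fine-tuning (train, merge, then train again), which does not commute in general.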