Hey all, I am in the process of testing several models for fine-tuning, and this question cropped up.

I would like to add new facts to a foundation model and then train it for instruction following. The problem is, I will regularly have new data to add. I was wondering whether there is a chance I could train a single LoRA for the instruction tuning and reapply it each time I finish a new fine-tune?
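In principle this works at the weight level, because a LoRA is just an additive low-rank delta: merging a "facts" adapter gives W + B_f·A_f, and stacking the instruction adapter on top adds another term, so the same instruction adapter can be reapplied to any merged checkpoint. Here is a minimal numeric sketch of that arithmetic with made-up tiny matrices (all names are hypothetical, and this only shows the algebra, not whether the adapter still performs well on a shifted base):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny stand-in for one weight matrix of the base model.
d, r = 8, 2  # model dim and LoRA rank (illustrative values)
W_base = rng.normal(size=(d, d))

# "Facts" LoRA: merged into the base after each data refresh.
B_facts, A_facts = rng.normal(size=(d, r)), rng.normal(size=(r, d))
# Instruction LoRA: trained once, reapplied after every merge.
B_inst, A_inst = rng.normal(size=(d, r)), rng.normal(size=(r, d))

# Merge the facts adapter into the base weights.
W_merged = W_base + B_facts @ A_facts

# Reapplying the instruction adapter is just another additive delta.
W_final = W_merged + B_inst @ A_inst

# The deltas commute, so the effective weights are the same
# regardless of which adapter is folded in first.
W_alt = (W_base + B_inst @ A_inst) + B_facts @ A_facts
print(np.allclose(W_final, W_alt))  # True
```

The open question is behavioral, not algebraic: the instruction LoRA was optimized against the original base weights, so after each new facts merge it may degrade and need a short refresher tune.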

  • namnnumbr@lemmy.ml
    9 months ago

    I think I get what you’re after now. I’ll have to think on this further - interesting problem!