Hey everyone!

I think it’s time we had a fosai model on HuggingFace. I’d like to start collecting ideas, strategies, and approaches for fine-tuning our first community model.

I’m open to hearing what you think we should do. We will release more in time. This is just the beginning.

For now, I say let’s pick a current open-source foundation model and fine-tune it on datasets we all curate together, built around a loose concept: using a fine-tuned LLM to teach ourselves bleeding-edge technologies (and how to build them using technical tools and concepts).

FOSAI is a non-profit movement. You own everything fosai as much as I do. It is synonymous with the concept of FOSS. It is for everyone to champion as they see fit. Anyone is welcome to join me in training or tuning using the workflows I share along the way.

You are encouraged to leverage fosai tools to create and express ideas of your own. All fosai models will be licensed under Apache 2.0. I am open to hearing thoughts on whether other licenses should be considered.


We’re Building FOSAI Models! 🤖

Our goal is to fine-tune a foundation model and open-source it. We’ll start with one foundation family at smaller parameter counts (7B/13B), then work our way up to 40B (or other sizes), moving to each new size as we vote on which foundation we should fine-tune as a community.


Fine-Tuned Use Case ☑️

Technical

  • FOSAI Model Idea #1 - Research & Development Assistant
  • FOSAI Model Idea #2 - Technical Project Manager
  • FOSAI Model Idea #3 - Personal Software Developer
  • FOSAI Model Idea #4 - Life Coach / Teacher / Mentor
  • FOSAI Model Idea #5 - FOSAI OS / System Assistant

Non-Technical

  • FOSAI Model Idea #6 - Dungeon Master / Lore Master
  • FOSAI Model Idea #7 - Sentient Robot Character
  • FOSAI Model Idea #8 - Friendly Companion Character
  • FOSAI Model Idea #9 - General RPG or Sci-Fi Character
  • FOSAI Model Idea #10 - Philosophical Character

OR

FOSAI Foundation Model ☑️


Foundation Model ☑️

(Pick one)

  • Mistral
  • Llama 2
  • Falcon
  • ..(Your Submission Here)

Model Name & Convention

  • snake_case_example
  • CamelCaseExample
  • kebab-case-example

0.) FOSAI ☑️

  • fosai-7B
  • fosai-13B

1.) FOSAI Assistant ☑️

  • fosai-assistant-7B
  • fosai-assistant-13B

2.) FOSAI Atlas ☑️

  • fosai-atlas-7B
  • fosai-atlas-13B

3.) FOSAI Navigator ☑️

  • fosai-navigator-7B
  • fosai-navigator-13B

4.) ?


Datasets ☑️

  • TBD!
  • What datasets do you think we should fine-tune on?

Alignment ☑️

To embody an open-source mentality, I think it’s worth releasing both censored and uncensored versions of our models. This is something I will consider as we train and fine-tune over time. Like any tool, you are responsible for your usage and for how you choose to incorporate it into your business and/or personal life.


License ☑️

All fosai models will be licensed under Apache 2.0. I am open to hearing thoughts if other licenses should be considered.

This will be a fine-tuned model, so it may inherit some of the permissions and license agreements of its foundation model, and there may be other implications depending on your country or local law.

Generally speaking, you can expect all fosai models to be commercially viable, both through the choice of foundation family and through the fine-tuning steps applied afterward.


Costs

I will be personally covering all training and deployment costs. This may change if I choose to put together some sort of patronage, but for now - don’t worry about this. I will be using something like RunPod or some other custom deployed solution for training.


Cast Your Votes! ☑️

Share Your Ideas & Vote in the Comments Below! ✅

What do you want to see out of this first community model? What are some of the fine-tuning ideas you’ve wanted to try, but never had the time or chance to test? Let me know in the comments and we’ll brainstorm together.

I am in no rush to get this out, so I will leave this up for everyone to see and interact with until I feel we have a solid direction we can all agree upon. There will be plenty more opportunities to create, curate, and customize the fosai models I plan to release in the future.

Update [10/25/23]: I may have found a fine-tuning workflow for both Llama (2) and Mistral, but I haven’t had any time to validate the first test run. Once I have a chance to do this and test some inference, I’ll update this post with the workflow, the models, and some sample output with example datasets. Unfortunately, I have run out of personal funds to allocate to training, so it is unclear when I will be able to make another attempt if this first one doesn’t pan out. Will keep everyone posted as we approach the end of 2023.
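
To give a concrete (but unvalidated) picture in the meantime, here is a rough sketch of what a LoRA-style fine-tune can look like with Hugging Face transformers + peft. Nothing here is final - the base model and dataset file below are placeholders, not the actual workflow from the update:

```python
# Rough, unvalidated sketch of a LoRA fine-tune using Hugging Face
# transformers + peft. The base model and dataset file are placeholders.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "mistralai/Mistral-7B-v0.1"      # or whichever foundation wins the vote
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.bfloat16, device_map="auto"
)

# LoRA adapters: train a small set of extra weights instead of all 7B parameters.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
))

# Hypothetical community dataset: one JSON object per line with a "text" field.
dataset = load_dataset("json", data_files="fosai_dataset.jsonl", split="train")
dataset = dataset.map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=2048),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="fosai-7B", num_train_epochs=1,
                           per_device_train_batch_size=2, learning_rate=2e-4,
                           bf16=True, logging_steps=10),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("fosai-7B")             # saves just the LoRA adapter weights
```

Inference would then load the base model and attach the saved adapter, e.g. with peft’s PeftModel.from_pretrained.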

  • Audalin@lemmy.world · 1 year ago

    If you’re going to finetune a foundation model, it’d make sense to choose Mistral - once they release a 13B.

    Also consider adding function calling to the home assistant use case.
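
    For illustration, a minimal sketch of what function calling could look like for that use case - the tool name and schema here are made up, nothing is final:

    ```python
    # Made-up example of the structured output a home/system assistant fine-tune
    # could be trained to emit so the host application can act on it.
    import json

    # Schema the host app would advertise to the model (e.g. in the system prompt).
    tools = [{
        "name": "set_light",                                   # hypothetical tool
        "description": "Turn a smart light on or off",
        "parameters": {"room": "string", "state": "on | off"},
    }]

    # For a request like "turn off the kitchen lights", the model would answer
    # with a JSON call instead of prose, and the host app would execute it:
    model_output = '{"name": "set_light", "arguments": {"room": "kitchen", "state": "off"}}'
    call = json.loads(model_output)
    print(call["name"], call["arguments"])   # set_light {'room': 'kitchen', 'state': 'off'}
    ```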

    • Blaed@lemmy.world (OP) · 1 year ago

      Mistral seems to be the popular choice. I think it’s the most open-source friendly out of the bunch. I will keep function calling in mind as I design some of our models! Thanks for bringing that up.

      • django · 1 year ago

        Mistral is an excellent choice. I am thoroughly impressed by the capability of these 7b models.

  • librecat@lemmy.basedcount.com · 1 year ago

    Are the llama2 models Apache 2.0 compatible? I think they use a custom license with some restrictions, could be totally wrong though.

    • Blaed@lemmy.world (OP) · 1 year ago (edited)

      This will be a fine-tuned model, so it may inherit some of the permissions and license agreements of its foundation model, and there may be other implications depending on your country or local law.

      You are correct: if we choose Llama 2, the fine-tuned derivative may be subject to Meta’s original license terms. However, Apache 2.0 would apply and carry over to something like a fine-tuned version of Mistral, since its base license is also Apache 2.0.

      If there is enough support - I’d be more than open to creating an entirely new foundation model family. This would be a larger undertaking than this initial fine-tuning deployment, but building a completely free FOSAI foundation family of models was the ultimate goal of this project, so if this garners enough attention I could absolutely put energy and focus into creating another Mistral-like product instead of splashing around with fine-tuning.

      Whatever would help everyone the most! I like where you’re thinking though, I’m going to update the thread to include an option to vote for a new foundation family instead. At the end of the day, it’s likely I’ll do all of the above - I’m just not sure in what order yet…

      • ffhein@lemmy.world · 1 year ago

        You are correct: if we choose Llama 2, the fine-tuned derivative may be subject to Meta’s original license terms

        The first time I read through the Llama 2 license I thought it said that any Llama derivative work also had to be licensed under the same license, but reading it again I think the only requirement is that you include a copy of the Llama 2 license text. Though I suppose that if someone uses your Llama 2 fine-tune to create something, it would also count as “Llama 2 derivative work” and thus be affected by the original license. I’m obviously no license lawyer, but personally I wouldn’t want to risk a legal battle with a company the size of Meta, so I’d vote for the other options just to be on the safe side.

        If there is enough support - I’d be more than open to creating an entirely new foundation model family.

        Do you have the resources for this to be a viable option? Llama-2 7B used 184,320 GPU hours on A100-80GB, and while the exact numbers for Mistral haven’t been revealed, some article claims it was around 200k hours (and we don’t know whether those were A100 or H100 hours). And if you have that kind of money to spend, are you confident that the end result will be better than Mistral? If not, why spend that much on creating something equivalent or possibly even inferior? Then there’s also the question of how long a model is going to be relevant before some other new model with all the latest innovations is released and makes everything else look outdated… Even if you can create a model which rivals llama-2 and mistral now, are you going to create a new one to compete with llama-3 and mistral-2 when those come along?
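
        As a rough sense of scale (the hourly rates below are assumptions - cloud A100-80GB rental prices vary a lot):

        ```python
        # Back-of-the-envelope pretraining cost from the GPU-hour figure above.
        # The hourly rates are assumptions; actual A100-80GB rental prices vary widely.
        gpu_hours = 184_320                  # Llama-2 7B, per Meta's paper
        for rate in (1.50, 2.50, 4.00):      # assumed $/hour for one A100-80GB
            print(f"${rate:.2f}/hr -> ${gpu_hours * rate:,.0f}")
        # roughly $275k-$740k in GPU time alone, before data, storage, and failed runs
        ```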

        Sorry for the negativity but I think creating a base model sounds likely to be a massive waste of resources. If you have a lot of time and money to throw at this project, I think it would be much better spent on fine-tuning existing models.

        • Blaed@lemmy.world (OP) · 1 year ago

          I wouldn’t want to risk a legal battle with a company the size of Meta, so I’d vote for the other options just to be on the safe side.

          Completely reasonable, I agree.

          Do you have the resources for this to be a viable option?

          Where there’s a will, there’s a way. I could muster the resources for a foundation model, but it’s definitely not the most optimal option at our disposal. The original plan was a.) fine-tune a small series (short-term), then b.) release a foundation model (long-term). I only recently considered skipping Plan A, but I’m glad the feedback here has steered me away from that. I would’ve enjoyed the process nonetheless.

          Are you confident that the end result will be better than Mistral? If not, why spend that much on creating something equivalent or possibly even inferior?

          Of course not. I don’t do this to be the best. I offer to do this to understand. Documenting how to build and release a foundation model from start to finish is knowledge that could be valuable to someone else - which is why I was willing to skip ahead if that was a topic others wanted to dive more into. For me, it’s more about the friends we make along the way. There is grace in polishing a product and being the best, but I’d like to think there is also something special in doing something just to document it for others. There is something fulfilling about exploring a new frontier with nothing but sheer curiosity.

          Then there’s also the question of how long a model is going to be relevant before some other new model with all the latest innovations is released and makes everything else look outdated… Even if you can create a model which rivals llama-2 and mistral now, are you going to create a new one to compete with llama-3 and mistral-2 when those come along?

          I also don’t do this to be relevant. To be a part of this is enough for me. In my studies, I have found something bigger than myself - I see myself doing this for many years, so I know I’ll be around to see it evolve and to see current technologies become irrelevant in time. If you consider existing alongside these models as ‘competing’, then yes, I suppose I would be doing that.

          Sorry for the negativity but I think creating a base model sounds likely to be a massive waste of resources. If you have a lot of time and money to throw at this project, I think it would be much better spent on fine-tuning existing models.

          Don’t worry, it was great feedback. Exactly why I made this post! I’m glad you made all your points. It’s the same logic I had (and the same logic I was willing to set aside for others). At this point, it seems like fine-tuning is what most of you want to see. So fine-tuning it shall be!

      • Anony Moose@lemmy.ca · 1 year ago

        I don’t have much experience with deep learning - I’m just an enthusiastic spectator. With that said, it seems to me that it would help to build some momentum first with a fine-tune based on an existing foundation model. That would make it more feasible to set our sights on a new foundation model in the future, with a win already under our belt.

        Thanks so much for doing this, this seems really cool!

        • Blaed@lemmy.world (OP) · 1 year ago

          I appreciate your comment! It seems like we’re going the fine-tuning route. I think it’s the best way to do it too. I’m still glad I floated around the foundation model idea. We’ll get one of our own eventually!

          Welcome to the show! Enthusiast or not, you are part of !fosai@lemmy.world. Your input is valued and your curiosity is encouraged!

  • Wander@yiffit.net · 1 year ago

    Thank you for doing this. Can I ask what the difference is between a FOSAI model and the other Llama 2 models that people are creating?

    • Blaed@lemmy.world (OP) · 1 year ago

      It seems like we’ll be starting with Mistral - which means the model will be completely open-source under the Apache 2.0 License.

      All fine-tunings I release under fosai would be licensed under the same Apache 2.0 agreement, giving you and everyone else complete permissions to modify, download, distribute, and deploy this model as you see fit. It would make the model commercially viable out-of-the-box without any restrictions set by a corporation or entity.

      I’m also not a copyright lawyer, so someone correct me if I’m wrong here, but if I fine-tune Mistral (which I probably will) and release the derivative under the Apache 2.0 license, you completely own the version you choose to download. You don’t need to adhere to a usage policy. You are still responsible for what you end up doing with your model (within all applicable local laws), but you don’t have to worry about Meta (or some other entity) revoking or changing their policy/usage/terms at some point in the future. You are free to do whatever you want with an Apache-licensed model.

      At the end of the day, Llama 2 is owned and distributed by Meta AI, which has some of those restrictions I mentioned, even though it is somewhat open-source. Here is the license. Some notes from it that might be worth mentioning:

      • You need to credit Meta whenever you share Llama 2 by including a specific notice.
      • You have to follow all laws and regulations when using Llama 2 and also adhere to Meta’s usage policy.
      • You can’t use Llama 2 to make or improve other similar software (large language models), except Llama 2 itself or things derived from it.
      • If your company or its affiliates have more than 700 million users a month, you can’t just use this agreement. You have to ask Meta for special permission.

  • keepthepace@slrpnk.net · 1 year ago

    Do you have any plans to do reinforcement learning fine-tuning? I really feel like this is the correct way to teach coding to a model: with good enough test cases, computing the reward is straightforward.
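
    A minimal sketch of what that reward could look like - illustrative only, and everything here (the `solution` convention included) is made up:

    ```python
    # Minimal sketch: score a generated solution by the fraction of test cases it
    # passes. Illustrative only - a real pipeline would sandbox the execution and
    # feed this number into an RL trainer (e.g. PPO) as the reward signal.

    def reward_from_tests(generated_code: str, test_cases: list) -> float:
        namespace = {}
        try:
            exec(generated_code, namespace)      # define the model's function
            solution = namespace["solution"]     # convention: model must define `solution`
        except Exception:
            return 0.0                           # code that doesn't run earns no reward
        passed = 0
        for args, expected in test_cases:
            try:
                if solution(*args) == expected:
                    passed += 1
            except Exception:
                pass                             # crashing on a test counts as a failure
        return passed / len(test_cases)

    # Example with a (hypothetical) completion for "write an add function":
    completion = "def solution(a, b):\n    return a + b"
    print(reward_from_tests(completion, [((1, 2), 3), ((5, 5), 10)]))   # 1.0
    ```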

  • muntedcrocodile@lemmy.world · 10 months ago

    I would suggest Mixtral 8x7B as a base. Then do some more training of it with data from 4chan, banned books, the anarchist’s handbook, etc. I know this sounds a little immoral, whatever whatever, but training AI models on less-than-savory content tends to make them more truthful. And hell, the governments of the world are going to make models like this for propaganda etc. if they haven’t already, and a world where everyone has a gun is a lot safer than a world where only one guy has one.