Genuinely curious.

Why do you like LLMs? What hopes do you have for AI & AGI in our near and distant future?

  • rufus · 1 year ago

    8 GB of regular RAM? That’s not much. No, that won’t cut it if you also want all the bells and whistles. Maybe try something like Mistral-7B-OpenOrca with llama.cpp quantized to 4-bit, and skip the STT and TTS (speech-to-text / text-to-speech). It’s small and quite decent. Otherwise you might want to rent a cloud GPU by the hour on something like runpod.io, use a free service like Google Colab, or you’ll really need to upgrade.
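
    As a rough sketch of what that can look like, assuming the llama-cpp-python bindings and a 4-bit GGUF quantization you’ve already downloaded (the filename below is illustrative):

    ```python
    # Minimal sketch: run a 4-bit quantized Mistral-7B-OpenOrca locally via llama.cpp.
    # Assumes `pip install llama-cpp-python` and a downloaded GGUF file;
    # the model_path below is a placeholder for wherever your file lives.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./mistral-7b-openorca.Q4_K_M.gguf",  # roughly 4 GB on disk at 4-bit
        n_ctx=2048,  # modest context window keeps RAM usage down
    )

    output = llm(
        "Q: Why do people like running LLMs locally? A:",
        max_tokens=128,
        stop=["Q:"],
    )
    print(output["choices"][0]["text"])
    ```

    No STT/TTS here, just plain text in and out, which is what keeps it within reach of a machine that size.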