Hello friends,

I’m pretty deep into self-hosting - especially on the home automation side. I’ve got a couple of options for self-hosted AI, but I don’t think they’ll meet my long-term goals:

  • Coral TPUs: I have 2x processing my Frigate data. These seem fine for that purpose, but they don’t seem useful for generative AI?

  • Jetson Nano: Near as I can tell, nothing supports these things except DeepStack, which appears to be abandoned. Bummed these haven’t gotten broader support in the community.

I’ve got plenty of rack space, and my day job is managing thousands of machines, so I’m not afraid of a more technical setup.

The used rack-mounted NVIDIA Tesla GPU servers look interesting. What are y’all using?

Requirements:

  • Rack-mounted
  • Supports local LLM and GenAI
  • Linux-based
  • Works with Docker (see the sketch below for the kind of setup I mean)
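
To make the “local LLM + Docker” requirement concrete, here’s the kind of thing I want to be able to do. A minimal sketch, assuming something like Ollama serving a model from a container on its default port (the model name is just whatever you’ve pulled):

```python
# Query a locally hosted LLM over HTTP.
# Assumes something like Ollama is running (e.g., in a Docker container)
# on its default port 11434 with a model already pulled -- adjust to taste.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",  # whichever model you have pulled
        "prompt": "Turn on the living room lights at sunset.",
        "stream": False,    # return one JSON blob instead of streaming
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```
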
  • maggio · 1 year ago

    My friend did this with an RTX 3060 12GB and documented the process in this Octopusx blog post.

    If you have any questions, we’d be happy to help.

  • tehnomad@alien.top · 1 year ago

    The best consumer NVIDIA card is the 3090 Ti because of its 24GB of memory, which lets you run bigger LLMs. I have a 12GB 3060, which works pretty well with 7B and 13B models.
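
    Rough math on why the VRAM number is what matters (a back-of-the-envelope sketch; real usage also depends on context length, KV cache, and framework overhead):

```python
# Approximate VRAM needed just for the model weights (back-of-the-envelope;
# real usage also depends on context length, KV cache, and framework overhead).
def weights_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate size of the model weights in GB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for size in (7, 13, 33, 70):
    fp16 = weights_gb(size, 16)
    q4 = weights_gb(size, 4.5)  # ~4-bit quantization with some overhead
    print(f"{size:>2}B: ~{fp16:.0f} GB at fp16, ~{q4:.0f} GB at ~4-bit")
```

    At ~4-bit quantization a 13B model fits comfortably in 12GB, and 24GB opens up 33B-class models.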

  • Aw3som3Guy@alien.top · 1 year ago

    Don’t have direct experience with either, but:

    It’s my understanding that a Coral TPU is exclusively an inference accelerator; no training or more generative applications. Also, Coral TPUs are a little bit unobtainium, with the only options I’ve seen being scalped about as much as a Pi, to basically the same result.

    I think you’re overthinking the Nano a bit. I’m not sure that you’d need explicit support for the Nano, because it’s just a CUDA GPU, so it should™ just run anything CUDA as long as the ARM CPU doesn’t trip the software up. For example, I’ve seen people running Blender renders across a cluster of Jetsons, just because, and I doubt that Blender has any explicit support for Jetsons.
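
    For example, a quick sanity check like this (assuming a CUDA-enabled PyTorch build for the Jetson) should behave the same as it would on any desktop NVIDIA GPU:

```python
# Quick check that a CUDA device is visible and usable.
# On a Jetson this should work the same as on a desktop card,
# assuming a CUDA-enabled PyTorch build is installed.
import torch

if torch.cuda.is_available():
    print("CUDA device:", torch.cuda.get_device_name(0))
    x = torch.rand(1024, 1024, device="cuda")
    print("Matmul OK:", (x @ x).shape)
else:
    print("No CUDA device visible")
```
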

    If you’re coming at it from the sense that you have rack space to spare, a used Tesla/Quadro GPU would probably be better value than an OG Jetson Nano, because those were, I think, 2GB/4GB with 256 Kepler-era CUDA cores. You’d almost have to go out of your way to find a worse PCIe card, plus a normal PCIe card in a normal x86 server wouldn’t have ARM software restrictions. Although, as the other commenter mentioned, cooling/power draw is a more serious consideration for a PCIe card, plus the risks of buying used.

    • Trustworthy_Fartzzz@alien.top (OP) · 1 year ago

      I totally agree on the Coral TPUs. Great for Frigate, but not much else. I’ve got 2x of the USB ones cranking on half a dozen 4K streams - works wonderfully.

      And I agree that in theory these Nanos should be great for all sorts of stuff, but nothing supports them. Everything I’ve seen is custom one-offs outside of DeepStack (though CodeProject.AI purports there’s someone now working on a Nano port).

      Sounds like a decent gaming GPU in a 2-3U box is the ticket here.

    • seanpmassey@alien.top · 1 year ago

      Point of pedantry: the Nano uses a Tegra X1 as its SoC. It has a Maxwell-generation GPU, not Kepler.

      The new Jetson Orin Nano uses an Ampere GPU.

  • Jaff_Re@alien.top · 1 year ago

    The Tesla P40 is a good low-budget option; it has 24GB and plenty of CUDA cores. I’ve tried running 13B LLMs on one and it did well, plus you can afford multiple if you have enough slots.
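
    If you do end up with more than one, a sketch like this (assuming transformers + accelerate are installed; the model name is just a placeholder) will shard the layers across whatever cards are visible:

```python
# Load a 13B model split across all visible GPUs (e.g. a pair of P40s).
# Assumes transformers and accelerate are installed; the model name is a
# placeholder for whatever 13B checkpoint you actually use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-2-13b-chat-hf"  # placeholder
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name,
    torch_dtype=torch.float16,  # halves the weight footprint vs fp32
    device_map="auto",          # let accelerate split layers across GPUs
)

inputs = tok("Explain what Frigate does in one sentence.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```
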

  • flossraptor@alien.top · 1 year ago

    Nvidia is the only game in town right now. I decided on a 3090 for the time being, with the option of adding another one later. I think in two years we will have 100x better options specifically tailored for AI.

  • seanpmassey@alien.top · 1 year ago

    It depends.

    What is your budget? And what hardware/hypervisor do you have?

    And what specifically are you looking to do with “generative AI?” Ugh…I hate that term.

    There are two key things to keep in mind about rack-mount GPUs. First, you need servers that were specifically built at the factory to host GPUs. Almost all of NVIDIA’s server-grade GPUs are passively cooled, so the server needs a fan configuration that can cool them. And except for the lowest-end server GPUs (P4/T4/A2/L4, all inference cards and over $1,000 per card), which draw less than the 75 watts provided by the PCIe slot, these GPUs require at least 150 watts, Molex power connectors, and higher-wattage power supplies.

    Second, most of the drivers and Docker/Kubernetes plugins for these GPUs are locked behind NVIDIA licensing.

    You’d want something that is at least Pascal-generation, but the Turing or newer cards are better.

    Your better bet is to get a rack-mount workstation (which is basically a server anyway) and stick a higher-end Quadro or GeForce 30x0 card in there.

    Edit: I never answered what I have - an R730 factory-built for GPUs, with a pair of Tesla P4 cards. I originally built it to play with GPUs for VDI.

    • Trustworthy_Fartzzz@alien.top (OP) · 1 year ago

      Much appreciated — I think the rack-mounted desktop GPU approach is best for now. Another commenter suggested we should see better options in 1-2 years, and I strongly suspect they’re correct.

  • s3r3ng@alien.top · 1 year ago

    A 4090 is good enough for running many models. You probably want an A6000 for larger ones. But many models that don’t fit in your VRAM can be quantized or otherwise scaled down without much loss of effectiveness.
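
    For example, something like this (a sketch assuming llama-cpp-python built with CUDA support; the GGUF path is a placeholder) runs a quantized 13B entirely on the GPU:

```python
# Run a quantized ("scaled down") model that would not fit in VRAM at
# full precision. Assumes llama-cpp-python built with CUDA support;
# the GGUF path is a placeholder for whatever quantized file you use.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-13b.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,  # offload every layer to the GPU
    n_ctx=4096,
)

out = llm("Q: What is self-hosting? A:", max_tokens=128)
print(out["choices"][0]["text"])
```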