I’ve been an IT professional for 20 years now, but I’ve mainly dealt with Windows. I’ve worked with Linux servers throughout the years, but never had Linux as a daily driver, and I decided it was time to change. I only had two requirements: one, I need to be able to use my Nvidia RTX 3080 Ti for local LLMs, and two, I need to be able to RDP with multiple screens to my work laptop running Windows 10.

My hope was to be able to get this all working and create some articles on how I did it to hopefully inspire/guide others. Unfortunately, I was not successful.

I started out with Ubuntu 22.04 and could not get the live CD to boot. After some searching, I figured out I had to go in and turn off ACPI in the boot loader. After that I was able to install Ubuntu side by side with Windows 11, but the boot loader errored out at the end of the install and Ubuntu would not boot.
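
For anyone hitting the same ACPI wall, the workaround happens at the GRUB menu. A rough sketch from memory (acpi=off is the blunt instrument; the exact parameter your machine needs may differ):

    # At the GRUB menu, highlight the boot entry, press 'e', and append
    # the parameter to the line that starts with 'linux':
    linux /boot/vmlinuz-... root=... quiet splash acpi=off

    # To make it permanent once installed (Ubuntu):
    sudo nano /etc/default/grub   # add acpi=off to GRUB_CMDLINE_LINUX_DEFAULT
    sudo update-grub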

Okay, back into Windows to download the boot loader fixer and boot to that. Alright, I’m finally able to get into Ubuntu, but only 1 of my 4 monitors is working. Install the NVIDIA driver (the package that ships nvidia-smi) and reboot. All my monitors work now, but my network card is now broken.
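
On Ubuntu the usual route for the driver is something like this; I won’t swear it’s exactly what I ran:

    # Show detected hardware and the recommended driver
    sudo ubuntu-drivers devices
    # Install the recommended proprietary driver, then reboot
    sudo ubuntu-drivers autoinstall
    sudo reboot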

Follow instructions on my phone to reinstall the linux-modules-extra package. Back into Windows to download it because, you know, no network connection. Reinstall the package; it doesn’t work. Go into advanced recovery, try restoring packages; nothing works. I can either get my monitors to work or my network card, never both at the same time.
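
The offline dance, in case it helps anyone, was roughly this (the package has to match the broken machine’s kernel, so check uname -r there first):

    # On the broken machine, note the kernel version:
    uname -r
    # On a machine WITH network access, grab the matching .deb, e.g.:
    apt download linux-modules-extra-5.15.0-XX-generic
    # Copy it over on a USB stick, then on the broken machine:
    sudo dpkg -i linux-modules-extra-*.deb
    sudo reboot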

I give up and decide it’s time to try out Fedora. The install process is much smoother. I boot up and 3 of my 4 monitors work. I find a great post on installing the Nvidia drivers and CUDA. After doing that and rebooting, I have all 4 monitors and networking. Woohoo!
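
I don’t remember the exact post, but it boiled down to the standard RPM Fusion route, roughly:

    # Enable the RPM Fusion free and nonfree repos
    sudo dnf install \
      https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
      https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
    # NVIDIA driver plus CUDA support, then reboot
    sudo dnf install akmod-nvidia xorg-x11-drv-nvidia-cuda
    sudo reboot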

Now, let’s test RDP. Install FreeRDP, run it with /multimon, and the screen for each remote window is shifted 1/3 of the way to the left. Strange. A little looking online turns up an issue on GitHub about how the window layout is based on the primary monitor. Long story short, I can’t use multi-monitor RDP because my monitors have different resolutions and are stacked 2x2 instead of all in a row. Trust me, I tried every combination I could think of.
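
For reference, the invocations I was fighting with looked roughly like this (hostname and user are placeholders):

    # See which monitors FreeRDP detects and their IDs
    xfreerdp /monitor-list
    # Span the session across all monitors
    xfreerdp /v:work-laptop /u:myuser /multimon
    # Or hand-pick monitors by ID
    xfreerdp /v:work-laptop /u:myuser /multimon /monitors:0,1,2,3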

Someone suggested using the nightly build because they have been working on this issue. Okay, I try that out and it fails to install because of a missing dependency. Apparently, there is a pull request from December to fix this on Fedora installs, but it hasn’t been merged. So, I would need to compile that specific branch myself.
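
Which, for completeness, would look something like this; I haven’t verified the branch name, so treat it as a sketch:

    # Build dependencies, then the source
    sudo dnf builddep freerdp
    git clone https://github.com/FreeRDP/FreeRDP.git
    cd FreeRDP
    git checkout <the-branch-from-that-PR>   # placeholder; taken from the PR
    cmake -B build -DCMAKE_BUILD_TYPE=Release
    cmake --build build
    sudo cmake --install build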

At this point, I’m just so sick of every little thing being a huge struggle that I reboot and go back into Windows. I still have Fedora on there, but who would have thought that something as simple-sounding as RDP across 4 monitors would be so damn difficult.

I’m not saying any of this to bag on Linux. It’s more of a discussion topic: yes, I agree there needs to be more Linux adoption, but if someone with 20 years of IT experience gets this fed up with it, imagine how your average user would feel.

Of course, if anyone has any recommendations on getting my RDP working, I’m all ears on that too.

  • rufus · 8 points · 8 months ago

    Accustomed workflows sometimes don’t translate well to other platforms, and RDP might be such a case; I don’t think it’s the standard in the Linux world. Maybe try the standard solution of your distribution, or look up which one handles multi-monitor setups well; there are lots of VNC and other remote-desktop solutions. Yeah, and I’d skip Ubuntu as a first choice, but you figured that out the hard way.

    • 520@kbin.social · 7 points · 8 months ago

      Can confirm. SSH is the standard under Linux. OP will be happy to note that Windows has had a built-in SSH client since Windows 10 that functions nearly identically to its Linux equivalent.
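
      For example, straight from PowerShell or cmd, no PuTTY needed (host and user name are placeholders):

          # Built into Windows 10 and later
          ssh myuser@my-linux-box
          # File copy works the same way
          scp .\notes.txt myuser@my-linux-box:/home/myuser/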

        • 520@kbin.social · 2 points · 8 months ago

          It is for what OP wants to use it for. VNC is best left to graphical applications. (Hint: maybe don’t use your graphics card to draw UIs when you want to use it to train LLMs. Most LLM tools don’t have a GUI under Linux anyway, for this very reason.)

          • rufus · 1 point · 8 months ago

            Yeah, I don’t really know what OP is trying to achieve, especially with the 4 monitors attached to the NVidia card and the 4 incoming or outgoing RDP sessions(?)

            The AI / LLM tools I use have a web interface, so I use a browser to connect to that.

            • 520@kbin.social · 1 point · 8 months ago

              He’s got a local LLM that he either wants to train or use. Both tasks use the GPU for processing simply because it is faster for that kind of thing. Thus you don’t want it drawing UI stuff at the same time.

              • rufus · 1 point · 8 months ago

                That sounds a bit excessive. Sure, you wouldn’t start a demanding game on the same machine. But doing desktop stuff or programming is fine. You’ll probably not even notice whether a fine-tuning run takes 10 hours or 10 hours plus 2 minutes.

                I think what hobbyists are actually concerned with is nothing eating into their VRAM, since it’s kind of a scarce resource: don’t load textures, don’t run applications that lock a certain amount of VRAM for their own use. I’m not an expert on GPUs, but the desktop is there and needs to draw stuff whether you use it directly or via VNC. A frame of a single screen takes up like 6MB, so with triple-buffering and 4 screens that’d take up something like half a percent of a 16GB graphics card (if my math is correct). You can always stop the desktop and use SSH and web-based interfaces. I do that because it’s convenient, not because it saves me resources. But if those few megabytes are exactly what’s missing for your use-case… I suppose that’s also a valid reason to do so.

                And yes, RDP and the like need to grab frames, compress them and send them out over the network. But I think compression is handled by dedicated parts of a GPU that aren’t used by LLM inference or training anyway. I’d really be surprised if any of this made a noticeable difference.
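
                If you want to sanity-check my napkin math:

                    # 1920x1080 at 3 bytes/pixel, triple-buffered, times 4 screens
                    echo $(( 1920*1080*3 * 3 * 4 / 1024 / 1024 ))   # ~71 MiB
                    # ...roughly 0.4% of a 16GB card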

                • 520@kbin.social · 1 point · 8 months ago

                  That sounds a bit excessive.

                  Then you’ve never tried running one locally. LLMs are not your standard desktop application. They take A LOT of GPU resources, and if a model runs on the GPU, it has to live in VRAM. You’d be surprised how limiting anything less than 8GB can be.

                  Put it this way: my 8GB 4060 cannot straight up generate a single 1080p image in Stable Diffusion; it runs out of VRAM. Yes, it’s a different use case because I’m generating an image, but the principle applies to LLMs too.
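
                  You can literally watch it happen with nvidia-smi while a model loads or generates:

                      # Print VRAM used/total once a second
                      nvidia-smi --query-gpu=memory.used,memory.total --format=csv -l 1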

                  Unless he’s got an Intel integrated chip he can offload the UI rendering to. That’s my setup.

                  • rufus · 1 point · 8 months ago

                    I currently have a local LLM loaded, but a smaller, quantized one, and that machine doesn’t have a GUI/desktop environment installed, since I operate it through SSH and a web interface from my laptop.

                    If I may ask: how much VRAM does a desktop environment actually take up if I were to run one on the same graphics card? The Intel iGPU in my laptop won’t tell me. That’s probably the only constraining factor, if it matters at all. As for compute, even my old laptop shows like 1-3% GPU utilization with several windows and applications open. It momentarily spikes to maybe 10% if I grab a window and drag it around like crazy, a bit more when playing YouTube. Apart from that, even the 7-year-old Intel iGPU is hardly bothered at all by drawing the desktop, a browser and a few other things.
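
                    On an NVIDIA card this would be easy to check, which is exactly what my iGPU won’t let me do: the process table at the bottom of plain nvidia-smi lists per-process GPU memory, including Xorg and the compositor.

                        # Per-process GPU memory shows up in the bottom table
                        nvidia-smi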