So I am trying to track down what is slowing down downloads from my Debian server to my devices (streaming box, laptop, other servers, etc.).

First, let me go over my network infrastructure: OPNsense Firewall (Intel C3558R) <-10Gb SFP+ DAC-> Managed Switch <-2.5Gb RJ45-> Clients, 2.5Gb AX Access Point, and Debian Server (Intel N100).

Under a 5-minute stress test between my laptop (2.5Gb adapter plugged into the switch) and the Debian server (2.5Gb Intel I226-V NIC), I get the full bandwidth when uploading, but downloads top out around 300-400 Mbps. The download speed does not fare any better when connecting through the AX access point, and upload drops to around 500 Mbps there. File transfers between the server and my laptop also run at approximately 300 Mbps. And yes, I manually disabled the WiFi card when testing over Ethernet. Speed tests to outside servers show approximately 800/20 Mbps (on an 800 Mbps plan).

Fearing that the traffic might be running through OPNsense and that the firewall was struggling to handle it, I disconnected the DAC cable and reran the test through just the switch. No change in results.

Identified speeds per device:

Server: 2.5Gb/s
Laptop: 2.5Gb/s (2500Base-T)
Switch: 2.5Gb/s
Firewall: 10Gb/s (10GBase-Twinax)

Operating Systems per device:

Server: Debian Bookworm
Laptop: macOS Sonoma (works well for my use case)
Switch: some sort of embedded software
Firewall: OPNsense 24.1.4-amd64

Network Interface per device:

Server: Intel I226-V
Laptop: UGreen USB-C to 2.5Gb Adapter
Switch: RTL8224-CG
Firewall: Intel X553

The speed test is hosted through Docker on my server.
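
For reference, one common self-hosted option is OpenSpeedTest (not necessarily the one used here), which runs like this:

    # illustrative only: self-hosted OpenSpeedTest (HTTP on 3000, HTTPS on 3001)
    docker run -d --name openspeedtest \
      -p 3000:3000 -p 3001:3001 \
      openspeedtest/latest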

  • Suzune@ani.social · 9 months ago

    Did you use iperf? It tests memory-to-memory, which makes sure that the HDD/SSD is not the bottleneck.

    You can also check the interface statistics and watch for uncommon errors. Or trace the connection with tcpdump.
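
    For example, something like this (a sketch; replace enp1s0 and the laptop's IP with your own):

        # per-interface counters; look for errors, drops, and overruns
        ip -s link show dev enp1s0

        # kernel-wide TCP statistics; heavy retransmission hints at a link problem
        netstat -s | grep -i retrans

        # capture the transfer for later inspection in Wireshark
        sudo tcpdump -i enp1s0 -n host <laptop-ip> -w capture.pcap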

    • dontwakethetrees (she/her)@lemmy.world (OP) · 9 months ago

      Using iperf3 results in 2.5Gb of bandwidth. The SSD should not be a bottleneck: the server only has NVMe storage and the laptop's SSD is integrated into the SoC package, both of which far exceed the network speeds. Traceroute indicated just a single hop to the server.

      • gaf@borg.chat · 9 months ago

        NVMe drives aren’t guaranteed to be fast. Based on those stats I’m guessing you have QLC and no DRAM.

        • dontwakethetrees (she/her)@lemmy.world (OP) · 9 months ago

          I think you might be right. I couldn't find an identifiable label on the drive, and the model reported in Debian shows up in searches as having only 2465 MB/s read speeds. After real-world losses, plus handling an OS and multiple services, I imagine that could be the source of my problems. Thanks!

          • shadeless · 9 months ago

            You can do a disk benchmark on the server to be sure.
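
            For example, a quick sequential-read benchmark with fio (a sketch; adjust the file path and size):

                # 1GiB sequential read with direct I/O, bypassing the page cache
                fio --name=seqread --filename=/tmp/fio.test --rw=read --bs=1M \
                    --size=1G --direct=1 --ioengine=libaio --group_reporting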

      • ninjan@lemmy.mildgrim.com · 9 months ago

        Ah, right, read too fast it seems! Though that still leaves the possibility of software firewalls, but any out-of-the-box ones wouldn't be doing any packet inspection.

    • dan@upvote.au · 9 months ago

      rsync and rclone both rely on disk performance. iperf3 is the best way to test network performance.

      Note that the Windows version of iperf is unofficial and very old now, so you really want to use two Linux systems if you’re testing with iperf.

        • dan@upvote.au · 9 months ago

          This is a good point. I know the WSL team were doing some optimizations to improve the performance of iperf3 in WSL, but I haven’t tested it.

  • pearsaltchocolatebar@discuss.online · 9 months ago

    Have you tried changing out ethernet cables and trying different ports?

    Also, try hosting the speed test from your laptop and running the speed test from the server to see if the results are reversed.

    • dontwakethetrees (she/her)@lemmy.world (OP) · 9 months ago

      Just attempted that. The odd thing is that both evened out on the reverse test at ~800 Mbps, so higher than the earlier download result and lower than the upload. Conducted iperf3 tests, which again showed the 2.5Gb bandwidth, so I retried file sharing. Samba refused to work on Debian for whatever reason, so I conducted an SCP transfer instead; after a few tests with a 6.3GB video file, I averaged around 500 Mbps (highs of around 800 Mbps and lows of around 270 Mbps).

      • filister@lemmy.world · 9 months ago

        SCP encrypts your traffic before sending it, so it might be a CPU/RAM bottleneck. You can try a different cipher or different compression settings, which can be defined in your ~/.ssh/config file.
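
        A sketch of what that could look like (host and address are placeholders; AES-GCM is usually fastest on CPUs with AES acceleration, which both the N100 and recent Macs have):

            Host myserver
                HostName 192.168.1.10
                # prefer a cheap AEAD cipher for bulk transfers
                Ciphers aes128-gcm@openssh.com,aes256-gcm@openssh.com
                # compression costs CPU and usually hurts throughput on a fast LAN
                Compression no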

      • emptiestplace@lemmy.ml · 9 months ago

        iperf3 showed 2.5 in both directions?

        -R reverses direction

        Also note it can be set up as a daemon - I like to have at least one available on every network I have to deal with.
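
        For example (the server address is a placeholder):

            # on the server: run iperf3 as a background daemon
            iperf3 -s -D

            # from the client: upload test, then download test with -R
            iperf3 -c 192.168.1.10
            iperf3 -c 192.168.1.10 -R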

  • Decronym@lemmy.decronym.xyz (bot) · 9 months ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters   More Letters
    IP              Internet Protocol
    NVMe            Non-Volatile Memory Express interface for mass storage
    SCP             Secure Copy encrypted file transfer tool, authenticates and transfers over SSH
    SSD             Solid State Drive mass storage
    SSH             Secure Shell for remote terminal access
    TCP             Transmission Control Protocol, most often over IP


  • dan@upvote.au · 9 months ago

    Try switching to BBR for congestion control, and adjust the TCP buffer sizes. The defaults are fine for gigabit but not really for higher speeds. I'm not near my computer right now so I can't grab a copy of my sysctl settings, but searching Google for "Linux TCP buffer size tuning" and "Linux enable bbr" should find some useful info.
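
    As a rough sketch of the kind of settings involved (values are illustrative, not a copy of my actual config):

        # enable BBR (needs the tcp_bbr module) and pair it with the fq qdisc
        sudo modprobe tcp_bbr
        sudo sysctl -w net.core.default_qdisc=fq
        sudo sysctl -w net.ipv4.tcp_congestion_control=bbr

        # raise socket buffer ceilings for multi-gigabit links
        sudo sysctl -w net.core.rmem_max=16777216
        sudo sysctl -w net.core.wmem_max=16777216
        sudo sysctl -w net.ipv4.tcp_rmem="4096 131072 16777216"
        sudo sysctl -w net.ipv4.tcp_wmem="4096 131072 16777216"

    Put them in a file under /etc/sysctl.d/ to persist across reboots.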

    If the devices are different speeds (e.g. one system is 2.5Gbps but another is 1Gbps), try enabling flow control on the switch, if it's a managed switch.

    • ErwinLottemann@feddit.de · 9 months ago

      I think the speedtest data is not read from or written to the disk, but generated in memory or just 'thrown away'.

    • dontwakethetrees (she/her)@lemmy.world (OP) · 9 months ago

      I mean, compared to what it should be, it is. Especially when I paid for 2.5Gb infrastructure.

      And it also affects how fast I can pull files from my server. Trying to get some shows downloaded to my laptop before a business trip? I'd better prepare for an hour or two of copying over LAN. Pulling a backup OS image for my devices? Going to be waiting a while.

  • filister@lemmy.world · 9 months ago

    Try to execute

    ping -c 1000 1.1.1.1
    

    And check for any packet loss and jitter.

    Additionally, I would recommend trying a different test server and comparing the results.

    Keep in mind that your ISP might also be having connectivity issues, which could be fixed in the coming days.

      • filister@lemmy.world · 9 months ago

        Sorry, in that case I would recommend running iperf and seeing what throughput you get. Make sure the test traffic is whitelisted in your firewall as well.

  • entropicdrift@lemmy.sdf.org · 9 months ago

    Who is your ISP? I had some issues with my Fios ONT and had to disable IPv6 on my router to stop it from dropping packets.