this post was submitted on 25 Nov 2023
21 points (95.7% liked)

Selfhosted


Hi,

I've got a network setup consisting of three Proxmox servers, two of which have a 10GbE interface. All interfaces are RJ45 copper ports.

So far I've got two QNAP switches, each with two 10GbE and five 2.5GbE ports. They are connected to each other via one of the 10GbE ports, with a server on the other.

This setup works flawlessly; iperf measurements show 9.4 Gbit/s in both directions.
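As a sanity check (my own back-of-envelope, not from the thread): 9.4 Gbit/s is almost exactly what theory predicts for TCP over 10GbE at the standard 1500-byte MTU, once per-frame Ethernet overhead (38 bytes on the wire) and IP/TCP headers (52 bytes including TCP timestamps) are subtracted:

```shell
# Expected TCP goodput on a 10 Gbit/s link at a given MTU (rough estimate):
#   wire cost per frame  = MTU + 38  (Ethernet header, FCS, preamble, inter-frame gap)
#   TCP payload per frame = MTU - 52 (IP + TCP headers + TCP timestamps)
awk 'BEGIN { mtu = 1500; printf "%.2f Gbit/s\n", 10 * (mtu - 52) / (mtu + 38) }'
# prints "9.41 Gbit/s", matching the measured 9.4
```

So the QNAP-only setup is already running as close to line rate as TCP allows.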

Recently I tried to expand my network for future growth and bought a TRENDnet switch with five 10GbE ports.

Weird problems occurred, and since they were also described in some Amazon reviews, I returned the switch and bought another one, this time a 4-port 10GbE switch from Ubiquiti.

Again there are problems. This time one direction runs at 10 Gbit/s as expected, while the other direction (between the same two servers) is limited to 1 Gbit/s. The connection to the third server shows the same problem (2.5 Gbit/s in one direction, 1 Gbit/s in the other).
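One way to narrow this down is to check what speed the NICs actually negotiated on each end of the slow direction. A sketch for the Proxmox (Linux) side, where `enp1s0` is a placeholder interface name:

```shell
# Show negotiated speed/duplex on a NIC (enp1s0 is a placeholder interface;
# the guard and '|| true' keep this harmless if the tool or NIC is absent)
if command -v ethtool >/dev/null 2>&1; then
    ethtool enp1s0 2>/dev/null | grep -E 'Speed|Duplex|Auto-negotiation' || true
fi
# A healthy link should report "Speed: 10000Mb/s"; seeing "1000Mb/s" would
# point at a negotiation problem rather than the switch fabric.
```

Since iperf throughput is asymmetric but a link negotiates one speed for both directions, a clean 10000Mb/s result on both hosts would push suspicion toward switching/MTU issues rather than the physical links.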

All cables are connected correctly, and since there is no problem on the QNAP switches, I don't believe there is a hardware problem on the cable/NIC side.

However, I'm not sure if this is just bad luck or some deeper network problem I don't understand.

Maybe someone has an idea?

[–] bigredgiraffe@lemmy.world 8 points 11 months ago* (last edited 11 months ago) (2 children)

Can you draw a picture of how you have all three switches connected, with all of the wires? I suspect you are creating a switching loop, or spanning tree is accidentally not picking the optimal link, so I'm curious.

[–] vettnerk@lemmy.ml 3 points 11 months ago (1 children)

I was thinking the same thing. Spanning tree is love. Spanning tree is life.....when deployed correctly.

Alternatively I'm thinking noise, as I've seen that in 10gig connections a few times, which is why I prefer LC fiber where possible.

[–] bigredgiraffe@lemmy.world 1 points 11 months ago

Oh yeah for sure, every time I’m like “it can’t be spanning tree” it is spanning tree. Do you mean copper vs fiber? LC connectors can carry a variety of speeds but generally yeah I try to use fiber or DAC cables which are shielded wherever I can.

[–] tmjaea@lemmy.world 1 points 11 months ago (2 children)

It's just two switches:

Server 1
   | 10GbE
Ubiquiti switch
   | 10GbE
QNAP switch
   | 10GbE
Server 2

[–] perslue@lemmy.ca 1 points 11 months ago

I'll ask some basics:

Is the Ubiquiti switch the Flex XG? Make sure you're using the 10G ports, as the PoE port is only 1G.

I'm assuming all the Ethernet cables are rated for 10G?

Are any of the switches or NICs manually configured to negotiate at a lower transmit rate?

That's all I've got, good luck.

[–] bigredgiraffe@lemmy.world 0 points 11 months ago* (last edited 11 months ago)

So it doesn't work across the Ubiquiti switch, just to double check? If so, you will need to enable jumbo frames on it, as they are not enabled by default. That could also explain the throughput: either frames are being fragmented and then reassembled to cross the switch, or iperf is using the MSS to determine that it can only send 1500-byte frames. Your slower speed is about line rate for 1500-byte frames, no matter the speed of the actual link.
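If jumbo frames are the suspect, a quick way to see the configured MTU on every interface of a Linux host (a sketch; guarded so it is a no-op where `ip` is unavailable):

```shell
# List the MTU of each interface: 1500 means jumbo frames are off,
# 9000 is the usual jumbo-frame setting.
if command -v ip >/dev/null 2>&1; then
    ip -o link show 2>/dev/null \
        | awk '{ for (i = 1; i <= NF; i++) if ($i == "mtu") print $2, $(i+1) }' \
        || true
fi
```

Note this only covers the hosts; the switch's own jumbo-frame setting has to be enabled separately, as the comment above says.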

ETA: you can verify this by pinging with a large size and setting the "do not fragment" flag, something like 'ping -s 2000 -M do ip.addr' on Linux; Windows uses different flags.