40 Gig LAN - Why did I even do this...

Raid Owl

2 years ago

36,048 views


Comments:

@computersales
@computersales - 11.01.2024 12:08

Crazy to think 100Gb is becoming more common in homelabs now and 10Gb can borderline be found in the trash. 😅

@MK-xc9to
@MK-xc9to - 05.01.2024 04:22

Budget option: HP ConnectX-3 Pro cards (HP 764285-B21 10/40Gb 2P 544+FLR QSFP InfiniBand IB FDR). I paid 27 Euro for the first two, and now that they are down to 18 I bought another two as spare parts. They need an adapter from LOM to PCIe, which is why they are cheap; the adapter costs 8-10 Euro (PCIe x8 riser card for HP FlexibleLOM 2-port GbE 331FLR 366FLR 544FLR 561FLR), and you get the Pro version of the Mellanox card, i.e. RoCE v2. Besides, TrueNAS Scale supports InfiniBand now, and so does Windows 11 Pro, so you can actually use it. It's not that much faster, but the latency is way lower. It's about 1-2 GB/s with the 4 x 4 TB NVMe Z1 array, ~500 MB/s with HDDs, and way less with smaller files (as usual).

@esra_erimez
@esra_erimez - 01.01.2024 22:24

I can't wait to try this myself. I ordered some ConnectX-3 Pro EN cards

@inderveerjohal7218
@inderveerjohal7218 - 31.10.2023 03:33

Any way to do this for a Mac, off an UNRAID server?

@Bwalston910
@Bwalston910 - 26.10.2023 11:12

What about Thunderbolt 4 / USB4?

@Veyron640
@Veyron640 - 11.09.2023 06:13

I have a Ferrari...
But would I want you to have it?

Absolutely not! lol
That's kind of the tone of this vid on the receiving end.

@Veyron640
@Veyron640 - 08.09.2023 03:51

you know ... there is a saying.. right?

"there is NEVER enough speed"

so... give me 40
Give me fuel
Give me fire..

ghmm
the end.

@jamescox5638
@jamescox5638 - 26.08.2023 01:48

I have a Windows server and a Juniper EX4300 switch that has QSFP+ ports on the back. I have only seen them used in a stack configuration with another switch. Would I be able to buy one of these cards and use the QSFP+ ports on the switch as a network interface to get a 40G connection to my server? I ask because I am not sure whether the QSFP+ ports on my switch can be used as normal network ports like the others.

@bgk93
@bgk93 - 14.08.2023 06:05

Love your humour

@charlesshoults5926
@charlesshoults5926 - 25.05.2023 20:52

I'm a little late to the game on this thread, but I've done something similar. In my home office, I have two Unraid servers and two Windows 11 PCs. Each of these endpoints has a Mellanox ConnectX-3 card installed, connected to a CentOS system acting as a router. While it works, data transfer rates are nowhere near the rated speed of the cards and DAC cables I'm using. Transferring from and to NVMe drives, I get a transfer rate of about 5Gbps. A synthetic iperf3 test, Linux to Linux, shows about 25Gbps of bandwidth.

@minedustry
@minedustry - 16.04.2023 16:09

Take my advice, I'm not using it.

@psycl0ptic
@psycl0ptic - 10.04.2023 17:45

These cards are also no longer supported in VMware.

@noxlupi1
@noxlupi1 - 23.03.2023 19:25

The Windows network stack is absolute BS, but with some adjustments you should be able to hit 35-37Gbit on that card. It is the same with 10gbit: by default it only gives you about 3-4gbit in Windows, but you can get it to around 7-9gbit with some tuning.

It also depends on the version of Windows. Windows Server does way better than Home and Pro, and Workstation is better still if you have RDMA enabled on both ends.
Good places to start: frame size / MTU (MTU 9000, i.e. jumbo frames, is a good idea when working with big files locally). Try turning "large send offload" off; on some systems the feature is best left on, but on others it is a bottleneck. Interrupt moderation is also on by default; on some systems that is good to avoid dedicating too much priority to the network, but on a beefy system turning it off can often boost network performance significantly.

If you want to see your card perform almost at full blast, just boot your PC from an Ubuntu USB and do an iperf to the BSD NAS.
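For anyone who wants to try those knobs, here is a minimal sketch (not from the video) that applies the three tweaks mentioned above from Python on Windows by shelling out to PowerShell. The adapter name is an assumption, and the exact registry keywords can vary by driver, so check Get-NetAdapterAdvancedProperty first and run from an elevated prompt.

```python
# Hedged tuning sketch for the tweaks described above: jumbo frames, large
# send offload off, interrupt moderation off. Run from an elevated prompt;
# the adapter name and registry keywords are assumptions; verify yours with
# Get-NetAdapterAdvancedProperty before applying anything.
import subprocess

ADAPTER = "Ethernet 2"  # assumed adapter name; adjust to your 40GbE NIC

def ps(command):
    """Run a single PowerShell command and fail loudly if it errors."""
    subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)

# 1) Jumbo frames (MTU ~9000; Windows drivers usually take 9014 incl. headers)
ps(f"Set-NetAdapterAdvancedProperty -Name '{ADAPTER}' "
   "-RegistryKeyword '*JumboPacket' -RegistryValue 9014")

# 2) Turn off large send offload (a bottleneck on some systems, per the comment)
ps(f"Disable-NetAdapterLso -Name '{ADAPTER}'")

# 3) Turn off interrupt moderation (can help on a beefy machine)
ps(f"Set-NetAdapterAdvancedProperty -Name '{ADAPTER}' "
   "-RegistryKeyword '*InterruptModeration' -RegistryValue 0")
```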

@MrBcole8888
@MrBcole8888 - 05.03.2023 05:44

Why didn't you just pop the other card into your Windows machine to change the mode permanently?

@anwar.shamim
@anwar.shamim - 23.02.2023 18:58

It's great.

@IonSen06
@IonSen06 - 22.02.2023 09:59

Hey, quick question: Do you have to order the Mellanox QSFP+ cable or will the Cisco QSFP+ cable work?

@JavierChaparroM
@JavierChaparroM - 18.02.2023 22:31

Revisiting a video I once thought I would never be able to revisit, haha. I'm trying to set up a Proxmox cluster with network storage, and oddly enough, in 2023 the 40gbps stuff is almost as cheap as the 10gbps stuff.

@MM-vl8ic
@MM-vl8ic - 01.02.2023 13:36

I've been using these for a few years... look into running both ports on the cards, with auto-shared RDMA/SMB... VPI should let you set the cards to 56Gb/s Ethernet. As a test I set up two 100GB RAM disks and the speeds were really entertaining... benchmarking NVMe Gen3 was only a tick slower than the network.

@jackofalltrades4627
@jackofalltrades4627 - 23.01.2023 17:59

Thanks for making this video. Did your feet itch after being in that insulation?

@cyberjack
@cyberjack - 16.01.2023 11:06

network speed can be limited by drive speed
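As a rough illustration of that point, here is a quick back-of-the-envelope sketch; the drive figures are typical ballpark numbers, not measurements from the video.

```python
# Back-of-the-envelope bottleneck check: a transfer only runs as fast as the
# slowest element in the path. The drive figures are ballpark, not measured.
LINK_GBIT = 40                    # 40GbE line rate
link_gbyte = LINK_GBIT / 8        # ~5.0 GB/s of raw link bandwidth

drive_gbyte = {
    "SATA HDD":        0.20,      # ~200 MB/s sequential
    "SATA SSD":        0.55,      # ~550 MB/s, SATA III ceiling
    "NVMe SSD (Gen3)": 3.50,      # ~3.5 GB/s sequential
}

for drive, speed in drive_gbyte.items():
    effective = min(link_gbyte, speed)
    print(f"{drive:>16}: tops out around {effective:.2f} GB/s "
          f"({effective * 8:.0f} Gbit/s) over a 40G link")
```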

@UncleBoobs
@UncleBoobs - 11.01.2023 13:58

I'm doing this with the card in InfiniBand mode, using the IP over InfiniBand protocol (IPoIB) and running openSM as the subnet manager; I'm getting the full 40G speeds this way.
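For anyone trying to reproduce that setup, here is a small sanity-check sketch (mine, not the commenter's) that assumes a Linux host with infiniband-diags installed, opensm running somewhere on the fabric, and the IPoIB interface showing up as ib0; all of those names are assumptions about a typical setup.

```python
# Quick sanity check for an IPoIB setup: is a subnet manager running, is the
# port Active, and does the IPoIB interface have an address? Assumes Linux,
# infiniband-diags installed, and an interface named ib0 (all assumptions).
import subprocess

def sh(cmd):
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# openSM (the subnet manager) must be running somewhere on the fabric
print("opensm running:", bool(sh(["pgrep", "-x", "opensm"]).strip()))

# ibstat reports the port state; look for "State: Active" and the link rate
print(sh(["ibstat"]))

# the IPoIB interface needs an IP address just like any other NIC
print(sh(["ip", "-brief", "addr", "show", "ib0"]))
```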

@paulotabim1756
@paulotabim1756 - 02.12.2022 00:11

I use ConnectX-3 Pro cards between Linux machines and a Mikrotik CRS326-24S+4Q+RM.
I could achieve transfer rates of 33-37Gb/s between directly connected stations.
Did you verify the specs of the PCIe slots used? To achieve 40Gb/s they must be PCIe 3.0 x8; PCIe 2.0 x8 will limit you to about 26Gb/s.
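The PCIe point checks out with a little arithmetic: PCIe 2.0 uses 8b/10b encoding at 5 GT/s per lane, PCIe 3.0 uses 128b/130b at 8 GT/s, and protocol overhead takes a further cut. A quick sketch (the 80% overhead allowance is a rule of thumb, not a measured figure):

```python
# Why a 40GbE card wants a PCIe 3.0 x8 slot: raw bandwidth per generation.
# The 80% figure for protocol overhead (TLP headers, flow control) is a
# rough rule of thumb, not a measured value.
GENS = {
    # name: (GT/s per lane, encoding efficiency)
    "PCIe 2.0": (5.0, 8 / 10),     # 8b/10b encoding
    "PCIe 3.0": (8.0, 128 / 130),  # 128b/130b encoding
}
LANES = 8
PROTOCOL_EFFICIENCY = 0.80         # rough allowance for TLP/DLLP overhead

for gen, (gt_per_lane, encoding) in GENS.items():
    raw_gbit = gt_per_lane * encoding * LANES
    usable_gbit = raw_gbit * PROTOCOL_EFFICIENCY
    print(f"{gen} x{LANES}: {raw_gbit:.1f} Gbit/s raw, ~{usable_gbit:.0f} Gbit/s usable")

# PCIe 2.0 x8 -> 32 Gbit/s raw, ~26 Gbit/s usable: below 40GbE line rate.
# PCIe 3.0 x8 -> ~63 Gbit/s raw: comfortably above it.
```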

@K0gashuk0
@K0gashuk0 - 15.11.2022 06:17

Please don't use the word "neech"; it is like using the word "shuttered." These are not real words, and they were coined to make people sound smart during COVID, when all the actual smart people knew the entire thing was BS.

@vincewolpert6166
@vincewolpert6166 - 03.11.2022 22:45

I always buy more hardware to justify my prior purchases.

@ChristopherPuzey
@ChristopherPuzey - 20.10.2022 09:40

Why are you using public IP addresses on your LAN?

@R055LE.1
@R055LE.1 - 14.10.2022 19:20

People need to stop saying "research". They've been researching for hours. No you haven't. You've been studying. You didn't run real scientific experimentation with controls and variables, you read stuff online and flipped some switches. Most people have never conducted research in their lives. They study.

@XDarkstarXUnknownUnderverse
@XDarkstarXUnknownUnderverse - 30.09.2022 16:29

My goal is 100Gb... because why not, and it's cheap (I use Mikrotik).

@marcin_karwinski
@marcin_karwinski - 14.09.2022 08:34

Frankly, since you're not doing any switching between the devices, instead opting for direct-attached fibre, I'd say go with IB instead... IB typically gives better latencies at those higher speeds, and for direct access, as in working off the network disk in production, this might improve the feeling of speed in typical use. Of course, this might not change a lot if you're only uploading/downloading stuff to/from the server before working locally and uploading results back onto the storage server, since then burst throughput is what you need and IB might not be able to accommodate any increase given the medium/tech max speeds. On the other hand, SMB/CIFS can also be a somewhat limiting factor in your setup, as on some hardware (i.e. when CPU-bottlenecked) switching to iSCSI could benefit you more due to fewer abstraction layers between the client and the disks in the storage machine...

@prodeous
@prodeous - 31.08.2022 16:42

I'm slowly working on getting my 10gb setup... but 40 being 4x faster... hmmm... lol. Jokes aside, thanks for sharing; it seems like I'll stick to 10gb for now. Though I have cards with dual 10gb ports, so maybe I should try for a 20gb setup.

I know Unix/Linux/etc. have such a capability, but Windows 10 Pro doesn't... any recommendations on how to link the two ports together?

@Nelevita
@Nelevita - 18.08.2022 02:46

I can give you two tips for your 40Gbit network cards. 1) Use NFS for file transfer; it is easy enough to activate in Windows, only the drive mounts have to be redone at every restart, e.g. as a startup task. 2) If you really, really need SMB on your LAN, use the Pro for Workstations edition of Windows and use SMB Direct/Multichannel, with which the CPU doesn't get hit by the network traffic. There are some good tutorials out there, even for Linux.

@seanthenetworkguy8024
@seanthenetworkguy8024 - 02.08.2022 09:40

What server rack case was that? I am in the market, but I keep finding either way-too-expensive cases or ones that don't meet my needs.

@SilentDecode
@SilentDecode - 02.08.2022 00:11

Why the strange subnet of 44.0.0.x? Just why? I'm curious!

@Maine307
@Maine307 - 29.07.2022 22:34

What ISP provides that type of speed?? Here I am, just a few months into Starlink after having HughesNet for 8 years... I now reliably get downloads in the 90s Mbps and I feel like I am king! How, and who, provides that much speed?? Wow.

@thaimichaelkk
@thaimichaelkk - 28.07.2022 16:56

You may want to check your cards for heat. Many of these cards expect a rack-mount case with high airflow; I believe your card requires 200 LFM of airflow at 55°C, which a desktop case does not provide. You can strap a fan on the heat sink to provide the necessary cooling (Noctua has 40mm and 60mm fans which should do the trick nicely; I'm currently waiting for two to come in). I have a Mellanox 100Gb NIC and a couple of Chelsio 40Gb NICs (I would go with 100Gb in the future even though my current switch only supports 40Gb), but they definitely need additional airflow; after 5 minutes you could cook a steak on them. The Mikrotik CRS326-24S+2Q+RM is a pretty nice switch to pair with them for connectivity.

@RemyDMarquis
@RemyDMarquis - 27.07.2022 20:51

I was really hoping you had found a solution to my problems. Sigh.
That 10Gb cap is so damn annoying. I have been trying to find a way to get it to work, but it just doesn't work with virtio for me. If you check the connection speed in the terminal (sorry, I forgot which command), it will show that the connection is at 40Gb, but no matter what I do I can't get virtio to run at that speed.

One tip: if you want the DHCP server to give it an IP, do what I do. Bridge a regular 1Gb LAN port with a port on the card, use that bridge in the VM, and connect your workstation to the same port (see the sketch after this comment). It will hand out IPs for both machines from the DHCP server and you don't have to worry about the IP hassle. Of course you will be limited to the virtio 10Gb, but it is peace of mind I'm keeping until I can find a solution for that 40Gb virtio nonsense.

And please hear my advice and don't even bother trying InfiniBand. Yes, it is supposed to be a better implementation and runs at 56Gb, but don't believe anyone who says it is plug and play. IT IS NOT. Any tiny adjustment you make to the network and it stops working, and you have to reboot both machines. I even bought a Mellanox switch and I have to say, it is horrible.
I don't know about the modern implementations like the ConnectX-5 or ConnectX-6, but I don't believe it is as ready for the market as it is believed to be. Just stick to regular old Ethernet.
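For the bridging tip above, here is a minimal sketch of what that looks like on the host with iproute2; the interface names are placeholders, and on Proxmox the persistent equivalent is a vmbr bridge defined in /etc/network/interfaces.

```python
# Sketch of the bridge described above: put the 1GbE uplink and one 40G port
# into the same Linux bridge so the upstream DHCP server can hand addresses
# to both the VM and the directly attached workstation. Interface names are
# placeholders; run as root. On Proxmox the persistent equivalent is a vmbrX
# stanza in /etc/network/interfaces.
import subprocess

ONE_GBE = "eno1"     # placeholder: regular 1GbE port with DHCP upstream
FORTY_G = "enp1s0"   # placeholder: one port of the 40G card
BRIDGE  = "br40"

def ip(*args):
    subprocess.run(["ip", *args], check=True)

ip("link", "add", "name", BRIDGE, "type", "bridge")
ip("link", "set", ONE_GBE, "master", BRIDGE)
ip("link", "set", FORTY_G, "master", BRIDGE)
for dev in (ONE_GBE, FORTY_G, BRIDGE):
    ip("link", "set", dev, "up")
# The VM's virtio NIC then attaches to br40 instead of a dedicated bridge.
```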

@GanetUK
@GanetUK - 27.07.2022 07:35

Which edition of Windows are you using?
As I understand it, RDMA helps speeds when getting to 10G+ on Windows, and it is only available in the Enterprise edition or Pro for Workstations (that's why I upgraded to Enterprise).

@TorBruheim
@TorBruheim - 26.07.2022 10:27

My recommendation, summed up as four important things to prepare before you use 40GbE: 1) enough PCIe lanes, 2) a motherboard with a typical server chipset, 3) don't use an Apple Mac system, 4) in Windows, set high priority to background services instead of applications. Good luck!

@LampJustin
@LampJustin - 25.07.2022 21:46

V2.0 would be using SR-IOV to pass a virtual function through to the VM ;)

@Alan.livingston
@Alan.livingston - 25.07.2022 16:31

Doing shit just because you can is a perfectly valid use case. Your home lab is exactly for this kind of thought project.

@jeffnew1213
@jeffnew1213 - 24.07.2022 10:43

I've been running 10Gbit for everything that I could pop a 10G card into for a good number of years. The better part of a decade, actually. I started with a Netgear 8-port 10G switch. A few years ago I replaced that with an off-lease Arista Networks 48-port 10G switch (loud, hot, and power hungry). Last year, I replaced that with the new Ubiquiti 10G aggregate switch. That device has four 25G ports.

I have two 12th generation Dell PowerEdge servers running ESXi and two big Synology NASes, both of which are configured to, among lots of other things, house VMs. There are about 120 VMs on the newer of the two NASes, with replicas and some related stuff on the older box. Both of the PowerEdge servers and both NASes have Mellanox 25G cards in them with OM3 fibre in-between. ESXi and Synology's DiskStation Manager both recognize the Mellanox cards out of the box. So, now, I have a mix of 1G, 10G and 25G running in the old home lab. Performance is fine and things generally run coolly. Disk latency for VMs is very low.

@SimonLally1975
@SimonLally1975 - 23.07.2022 14:32

Have you looked into the Mikrotik CRS326-24S+2Q+RM? I know it is a little on the pricey side. Or, if you are going to go for it, go 100Gbps with the Mikrotik CRS504-4XQ-IN, just for sh!ts and giggles. :)

@tinkersmusings
@tinkersmusings - 23.07.2022 06:41

I run a Brocade ICX6610 as my main rack switch. I love that it supports 1Gb, 10Gb, and 40Gb all in one. I also run a Mellanox SX6036 as my 40Gb switch. It supports both Ethernet (with a license) and InfiniBand through VPI mode, and you can assign which ports are Ethernet and which are InfiniBand. Both are killer switches, and I connect the SX6036 back to the Brocade via two of the 40GbE connections. Most of my machines in the rack now support either 40Gb Ethernet or 40/56Gb InfiniBand. I have yet to run 40Gb lines throughout the house, though. However, with 36 ports available, the sky is the limit!

@npham1198
@npham1198 - 23.07.2022 03:16

I would change that 40.x.x.x network into something under the RFC 1918 private address space!
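For reference, the RFC 1918 ranges are easy to check with Python's standard library; a quick sketch showing that neither 40.x.x.x nor the 44.0.0.x mentioned in another comment is private space:

```python
# Neither 40.x.x.x nor 44.x.x.x falls inside the RFC 1918 private ranges
# (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16), so both are publicly routed
# space and best avoided on a LAN.
import ipaddress

for addr in ("40.0.0.1", "44.0.0.1", "10.40.0.1", "172.16.40.1", "192.168.40.1"):
    print(f"{addr:>14}  RFC 1918 private: {ipaddress.ip_address(addr).is_private}")
```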

@7073shea
@7073shea - 23.07.2022 01:37

Thanks owl! The “transceivers gonna make me act up” bit had me dying

@Darkk6969
@Darkk6969 - 22.07.2022 20:08

I need to point out that iperf3 is single-threaded while iperf is multi-threaded, which makes a difference in throughput. It's not by a wide margin, but I figured it's the best way to saturate that link.
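One common workaround for the single-threaded iperf3 client (in versions before multi-threading was added) is to run several client processes against several server ports and sum the results. A rough sketch, assuming iperf3 servers are already listening on ports 5201-5204 on the target and that 10.0.0.2 stands in for the peer's address:

```python
# Work around the single-threaded iperf3 client by running several clients in
# parallel and summing their throughput. Assumes iperf3 servers are already
# listening on ports 5201-5204 on the target (one "iperf3 -s -p <port>" each)
# and that 10.0.0.2 stands in for the peer's address.
import json
import subprocess

HOST = "10.0.0.2"          # placeholder address of the 40G peer
PORTS = range(5201, 5205)  # four parallel streams

procs = [
    subprocess.Popen(
        ["iperf3", "-c", HOST, "-p", str(port), "-t", "10", "-J"],
        stdout=subprocess.PIPE, text=True)
    for port in PORTS
]

total_bps = 0.0
for proc in procs:
    out, _ = proc.communicate()
    total_bps += json.loads(out)["end"]["sum_received"]["bits_per_second"]

print(f"Aggregate throughput: {total_bps / 1e9:.1f} Gbit/s")
```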

@nyanates
@nyanates - 22.07.2022 13:38

Because you can.

@dmmikerpg
@dmmikerpg - 22.07.2022 11:27

I have it in my setup; like you, it's nothing crazy, just host to host, namely from my TrueNAS system to the backup NAS.

@meteailesi
@meteailesi - 22.07.2022 07:53

The sound has some noise; you could clean up the audio.
