Quote:
Originally Posted by zx10guy
As an experiment, I installed the workhorse of business class 10Gig NICs (Intel X520) into my gaming PC running Windows 8.1 (now upgraded to Windows 10) to see if it would work. To my surprise it did without having to play driver roulette.
Some of the activities I do, more for my IT hobby and work, involve moving around large ISO and OVA/OVF files while building out new virtual machines in my vSphere cluster. Being able to move/load those files over 10Gig connectivity saves tons of time.
In addition to that, my MD3800i iSCSI array requires 10Gig connectivity. Something I haven't leveraged yet, but will once I get my 2 Cisco UCS240 servers up and running, is FCoE...more specifically the features of DCB (data center bridging).
Also, I have a few connected devices running on my PoE access switch whose traffic gets distributed to other parts of my network. If I ran that through even bonded 1Gig links, performance would be worse than over a single 10Gig link. I have 10Gig running between this switch and the top-of-rack switch in my server rack.
Truth be told, the server I run 24/7 is my physical host for about 24 VMs, of which about 8 run 24/7. Because of the traffic going in and out of these VMs, the server is 40Gig attached to the top-of-rack switch.
|
Going to run vGPU in those C240s?