I'm pondering a thought experiment for a lab environment:
I want to add a dedicated 10G link between a server and a workstation to allow high-speed transfers of VM images, while skirting round a prohibitively expensive switch upgrade from 1G. Both devices have to remain connected to the switch. The server is running ESX, so I toyed with just letting it do the switching, but it seems the ESX vSwitch won't forward frames between its physical uplinks, so L2 connectivity upstream doesn't work.
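If I did give the link its own addressing, I suppose it would want a tiny subnet of its own so the routing tables at both ends can tell the two paths apart. A quick Python sketch of what I mean, with the addresses entirely made up (192.168.1.0/24 standing in for the LAN, 10.10.10.0/30 for the point-to-point link):

```python
import ipaddress

# Made-up example addressing: the existing LAN and a tiny /30 just for
# the back-to-back 10G link (two usable host addresses, one per end).
lan = ipaddress.ip_network("192.168.1.0/24")
fast_link = ipaddress.ip_network("10.10.10.0/30")

server_fast, workstation_fast = fast_link.hosts()
print("server 10G address:     ", server_fast)
print("workstation 10G address:", workstation_fast)

# The fast subnet mustn't overlap the LAN, otherwise the routing table
# can't decide which NIC a given destination should go out of.
assert not lan.overlaps(fast_link), "fast link subnet overlaps the LAN"
```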
If I add a dedicated card to each machine and connect them directly, is there a clean way to get them to talk without kippering up standard network access? My areas of confusion are (showing how limited my networking knowledge is):
- You can't reuse the existing IP addresses on the extra NICs
- A different IP address means DNS won't hand out the "fast" NIC's address, so I'd have to override it locally
- If the new NICs sit on the same subnet as the existing ones, packets might not leave via the correct NIC (see the sketch after this list)
- If they sit on a different subnet, broadcasts won't cross between the two
- Broadcasts generally advertise the main NIC for services, so services wouldn't appear on the fast address
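The wrong-NIC worry is the one where I can half-see a workaround: connect to the address on the dedicated subnet, so the directly connected route over the 10G card gets used, and bind the local source address to the fast NIC's IP for good measure. A rough Python sketch, again with made-up addresses for the two 10G NICs:

```python
import socket

# Hypothetical placeholder addresses for the dedicated link.
LOCAL_FAST_IP = "10.10.10.2"    # this machine's 10G NIC
SERVER_FAST_IP = "10.10.10.1"   # the far end of the 10G link

# Connecting to the fast-link address uses the connected route over the
# 10G NIC; source_address pins the local end to the 10G NIC's IP too.
sock = socket.create_connection(
    (SERVER_FAST_IP, 22),
    timeout=5,
    source_address=(LOCAL_FAST_IP, 0),
)
print("connected from", sock.getsockname(), "to", sock.getpeername())
sock.close()
```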
I'm sat here scratching my head, thinking this really isn't possible. It might be doable if you're making an explicit connection to a service (e.g. an SCP transfer) where you can target the other NIC, but otherwise it wouldn't really work. Any thoughts to aid my inevitable insomnia?
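In case it helps anyone picture it, the explicit-connect version I have in mind is roughly the below: push a VM image at the server's fast address over the dedicated link. It assumes SSH/SCP is enabled on the ESX host, and every value (addresses, user, image name, datastore path) is a placeholder:

```python
import subprocess

# Placeholders for illustration: the 10G addresses, the user, the image
# file and the datastore path are all assumptions, not real values.
LOCAL_FAST_IP = "10.10.10.2"
SERVER_FAST_IP = "10.10.10.1"

subprocess.run(
    [
        "scp",
        "-o", f"BindAddress={LOCAL_FAST_IP}",   # leave via the 10G NIC
        "big-vm-image.vmdk",
        f"root@{SERVER_FAST_IP}:/vmfs/volumes/datastore1/",
    ],
    check=True,
)
```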