Hello everyone,
I hope I am not out of line in asking, but I need your wisdom. I found a second-hand Mellanox IS5022 8-port 40 Gb QSFP+ unmanaged InfiniBand switch, and I thought this might be a good opportunity to upgrade the Beowulf cluster at my university department.
At the moment we use regular Ethernet switches, which are a bit too slow for our numerical computations. I would love to upgrade to InfiniBand, but our budget is limited and state-of-the-art IB hardware is not cheap.
I am an aerospace engineer, so my knowledge of networking is a bit sparse. What I am looking for is:
- Are these switches suitable for numerical computations?
- Advice on (cheap) cables and PCIe IB adapters that would go together with these switches. (I have found some cheap QSFP+ passive cables on amazon.de and a 1-port IBM InfiniBand QDR/FDR-10 QSFP PCI-E 3.0 adapter at a local IT outlet, but I am not sure whether they are suitable for my needs.)
- Where I can find some introductory literature on IB related to these switches
- Any general advice
Thanks in advance,