Hello everyone,
I am currently doing some undergraduate network research and I am stuck on a particular problem: I cannot find a logical explanation for why TCP throughput drops when the CPU is under stress.
Basically, I've been studying containers and measuring throughput across various types of networks to see how they differ from one another at a low level. I'm using containers because they are easy to set up and manage.
Let's say there are two containers on the same network, one acting as a client and one as a server. I used the stress-ng package to load the server container's allocated CPU to 100% and checked whether TCP throughput is affected by the lack of CPU resources. It is.
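For reference, here is a minimal sketch of the setup I described, assuming iperf3 and stress-ng are available inside the containers and that the server container is named `srv` on a shared Docker network (the names, CPU limit, and durations are just illustrative, not my exact configuration):

```shell
# Inside the server container (started with a CPU limit, e.g. --cpus=1):
iperf3 -s &                                        # TCP throughput server
stress-ng --cpu 1 --cpu-load 100 --timeout 120 &   # saturate the allocated CPU

# Inside the client container (same network; 'srv' resolves to the server):
iperf3 -c srv -t 30                                # throughput while CPU is stressed
ping -c 20 srv                                     # rough latency check
```

Running the iperf3 client once without stress-ng and once with it gives the before/after comparison.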
Under normal circumstances the throughput on this particular network is about 34 Gbps. With stress-ng running on the server it drops to about 25 Gbps, but latency is not affected much (no more than 15 ms on average).
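One way to dig into where those cycles are going (a sketch, assuming the sysstat package is installed in the server container) is to watch how CPU time splits between the stress workers and the kernel's network processing while the test runs:

```shell
# Per-CPU breakdown during the iperf3 run:
#   %usr  -> mostly stress-ng workers
#   %soft -> softirq time, i.e. network packet processing
mpstat -P ALL 1

# Per-process view: iperf3 and stress-ng competing for the same CPU
pidstat -u 1
```

If %soft shrinks while stress-ng runs, that would point at the receive path being starved of CPU rather than at the network itself.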
Any hints on what I could look at in more depth to find the culprit in this particular situation?