About fluctuation of acked packets on report by ccp #2
I am running the ccp const algorithm in Emulab. The node where I am running this code has kernel 5.4.0-42, and the bandwidth of the link is set to 100 Mbps.

The figure below shows ccp const with cwnd = 100; the y-axis is the number of acked packets in each RTT interval, and the x-axis is the RTT interval index.

As the figure shows, the average number of acked packets is close to the configured cwnd, but there are spikes in some reporting intervals.

I was wondering if this behavior is normal in CCP. If not, do you know what might cause these spikes?

Comments
Hi hsapkota, what are you using to run the transfer? It's possible the spikes are coming from the application (e.g., iperf) rather than from ccp itself.
Hi Akshay, I was using iperf for the transfer, but it doesn't look like the spikes were caused by iperf: when I run the PCC Vivace kernel code for congestion control and use iperf for the transfer, the acked packets don't fluctuate. I don't think the problem is with the constant CCP algorithm itself, but I am not sure what might be causing the fluctuation. So I just wanted your opinion on whether iperf can cause spikes when running CCP transfers. Thank you for helping out.
Can you let us know exactly how you're running the const algorithm? Are you also providing a rate, or only a cwnd? Also, what are you using to measure the acked packets in the plot? Is this what's reported by ccp, or something else? I'm guessing it must be something else, since you're comparing it to pcc vivace.
Hi Frank, I am running the const algorithm with a fixed cwnd of 100 packets, without setting any sending rate. I changed the reporting of the const algorithm to report Ack.bytes_acked, and I use the MSS to calculate the number of acked packets in each interval. The graph in the first comment above is from an iperf transfer using ccp_kernel. As for pcc vivace, I ran the kernel version of the code (repo) and printed the actual cwnd set by vivace at each monitor interval (usually 1 RTT), together with the number of packets delivered after setting that cwnd. The graph for the pcc vivace kernel version in the same Emulab environment is in the figure below. In both cases, iperf was used to run the transfer.
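For context, a per-RTT report of this shape corresponds to a CCP datapath program along the lines of the sketch below. This is based on the fold-function examples in the CCP paper and the portus documentation, not necessarily the exact const-algorithm program: Report.acked accumulates Ack.bytes_acked between reports, and a report fires roughly once per RTT.

```
(def (Report (volatile acked 0)))
(when true
    (:= Report.acked (+ Report.acked Ack.bytes_acked))
    (fallthrough)
)
(when (> Micros Flow.rtt_sample_us)
    (report)
    (:= Micros 0)
)
```

Dividing Report.acked by the MSS then gives acked packets per interval. One consequence of this scheme is that a report covering slightly more or less than one RTT's worth of acks will show up as a spike or a dip in the per-interval packet count even when the window itself is pinned at 100.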
Hmm, how are you gathering these measurements? Are you using a tcp tracepoint, i.e., the same method to measure the acked packets for both the ccp-const flow and the pcc-vivace one?
No, I am not using a tcp tracepoint. I am just using the acked packets reported by both vivace and ccp const: for vivace I use the log printed in /var/log/syslog, whereas for ccp const I use the acked bytes reported by Portus.
Can you point us to the lines in the vivace code that print the statements you're using? Either way, it's not exactly an apples-to-apples comparison: the acked bytes reported by portus are printed in userspace, batched into reports, while the vivace logs are printed from within the kernel. A more even comparison would be to add a log statement in ccp_kernel similar to the vivace log (in terms of which function handler it's printed from). Can I ask more generally about the context of what you're trying to do with this experiment? It's hard to say whether this is even the right metric to look at in the first place :) My guess is that the spikes in the ccp graph are just an artifact of how they're being measured.
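A kernel-side log comparable to the vivace one could look like the sketch below. The function name and its placement in ccp_kernel's ack-handling path are assumptions for illustration; the tcp_sock fields used are standard in kernel 5.4.

```c
#include <linux/tcp.h>
#include <net/tcp.h>

/* Hypothetical helper: log cwnd and cumulative delivered packets from
 * inside the kernel, at the same point in the stack as the vivace log. */
static void ccp_const_log(struct sock *sk)
{
    const struct tcp_sock *tp = tcp_sk(sk);

    /* tp->delivered is cumulative; differencing successive log lines
     * gives per-interval acked packets. srtt_us is stored shifted by 3. */
    printk(KERN_INFO "ccp-const: cwnd=%u delivered=%u srtt_us=%u\n",
           tp->snd_cwnd, tp->delivered, tp->srtt_us >> 3);
}
```

Alternatively, the tcp:tcp_probe tracepoint (available since kernel 4.16) exposes snd_cwnd and srtt, and would measure both the ccp-const and the pcc-vivace flows at the same point in the stack.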