Why are my endpoints reporting packet loss greater than 100%?
At first glance, packet loss beyond 100% seems impossible (it is indeed very counterintuitive), and over a sustained interval it is certainly not possible to maintain an audio or video call at such loss rates. However, when we break it down to the nuts and bolts, it is actually possible to 'achieve' well beyond 100% packet loss over very short intervals.
Let's assume the following:
- The audio channel rate on a particular call is 64,000 bytes/second
- Packets are 1,500 bytes in size
- Available network bandwidth is 500 Mb/second
- Latency for audio/video is 0.1 seconds
- The endpoint has an aggressive packet loss recovery algorithm that will burst retransmissions for a very short interval, until they are acknowledged by the far end or until half the latency interval has passed.
Based on these assumptions:
- The normal packet transmission rate for this call (assuming no packet loss and a packet size of 1,500 bytes) is about 43 packets/second (64,000 ÷ 1,500).
- If ALL packets are lost for 0.05 seconds and re-sent repeatedly, and the full 500 Mb/second is consumed by these retransmissions, then roughly 2,083 packets could be lost (62,500,000 bytes/second × 0.05 seconds ÷ 1,500 bytes/packet), versus the original payload, which was only about 2 packets.
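The arithmetic above can be checked with a short sketch. All of the values below come straight from the assumptions listed; nothing here is measured data:

```python
# Illustrative values from the assumptions above, not measurements.
AUDIO_RATE_BYTES = 64_000      # audio channel rate, bytes/second
PACKET_SIZE = 1_500            # bytes per packet
BANDWIDTH_BPS = 500_000_000    # 500 Mb/second available bandwidth
BURST_SECONDS = 0.05           # half of the 0.1 second latency interval

# Normal transmission rate for the call
normal_pps = AUDIO_RATE_BYTES / PACKET_SIZE
print(round(normal_pps))            # -> 43 packets/second

# Original payload during the burst window
original_packets = normal_pps * BURST_SECONDS
print(round(original_packets))      # -> 2 packets

# Packets the recovery burst could push (and lose) in that window
burst_bytes = (BANDWIDTH_BPS / 8) * BURST_SECONDS
burst_packets = burst_bytes / PACKET_SIZE
print(round(burst_packets))         # -> 2083 packets

# Loss rate reported against the expected payload
loss_pct = burst_packets / original_packets * 100
print(f"{loss_pct:.0f}%")           # far beyond 100%
```

So even at these modest bandwidth and latency figures, a burst-retransmission endpoint can report loss on the order of tens of thousands of percent for a 50 ms window.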
Theoretical questions aside, it was noted in our data receiver logs that, during a certain time interval, over 6,500 packets were reported as 'lost' by the endpoint, versus a payload of only 500 packets during that interval.
As for what Vyopta can do, we have already requested that such reported packet loss rates be capped at 100%, so as to avoid this very confusing conversation in the future. However, these episodes of extreme packet loss are most likely still occurring even when they do not manifest to the end user consuming audio/video on the customer's network.
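A minimal sketch of the requested cap, assuming loss is reported as lost packets over expected packets (a hypothetical helper, not Vyopta's actual reporting code):

```python
def capped_loss_percent(lost_packets: int, expected_packets: int) -> float:
    """Report packet loss as a percentage, capped at 100%.

    Hypothetical helper illustrating the requested cap; it is not
    Vyopta's actual implementation.
    """
    if expected_packets <= 0:
        return 0.0
    raw = lost_packets / expected_packets * 100
    return min(raw, 100.0)

# The logged episode: 6,500 packets reported lost vs a 500-packet payload.
print(capped_loss_percent(6_500, 500))  # -> 100.0 (raw value would be 1300.0)
print(capped_loss_percent(5, 100))      # -> 5.0
```

Capping at the reporting layer keeps dashboards sensible while the underlying burst-retransmission behavior can still be seen in the raw logs.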