[This is the third part of a series of posts about bandwidth estimation for IP-based video calling.]
- Part 1: NetSense 101: Why do we need bandwidth estimation?
- Part 2: NetSense 101: Packet-loss-based Bandwidth Estimation
- Part 3: NetSense 101: Delay-based Bandwidth Estimation (this post)
- Part 4: NetSense 101: Q&A
If you just bumped into this post – I suggest you read the first two parts of this series. It all boils down to this:
- We have bandwidth
- It fluctuates in its availability dynamically, each and every second
- We need to know how much of it we have to do video calls
- The most natural way of doing it is looking at packet losses, but this isn’t the best of ways
- If we knew how much bandwidth we have / are going to have in a moment, we could reduce or increase our media’s bitrate to fit the network. This means better video quality at lower latency
How do we achieve that prescient knowledge? Being able to predict what the network is going to be like in the very near future?
We call it NetSense.
NetSense is a technique for sensing the current state of the network and estimating from it how much effective bandwidth is available to us.
To understand how it works, think about switches and routers for a moment: Network switches have their own internal queues. They receive packets, store them internally for an instant, decide where to route them next, then send them out and clear them from their internal queues. If a switch receives too many packets at a given point in time – its internal queues will fill up, and it will start dropping packets, causing packet loss (=congestion).
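The queue behavior described above can be sketched with a toy simulation (a hypothetical illustration, not a model of any real switch): when packets arrive faster than the switch can forward them, the queue fills to capacity and every further arrival is dropped.

```python
from collections import deque

QUEUE_CAPACITY = 5  # max packets the switch can buffer (assumed value)

def run_switch(arrivals_per_tick, serviced_per_tick, ticks):
    """Simulate a tail-drop switch queue; return (delivered, dropped) counts."""
    queue = deque()
    delivered = dropped = 0
    for _ in range(ticks):
        # Packets arrive; those that find the queue full are lost.
        for _ in range(arrivals_per_tick):
            if len(queue) < QUEUE_CAPACITY:
                queue.append(1)      # buffered for routing
            else:
                dropped += 1         # queue full -> packet loss (congestion)
        # The switch forwards what it can this tick.
        for _ in range(min(serviced_per_tick, len(queue))):
            queue.popleft()
            delivered += 1
    return delivered, dropped

# Arrivals below the service rate: nothing is ever dropped.
print(run_switch(arrivals_per_tick=1, serviced_per_tick=2, ticks=10))  # (10, 0)

# Arrivals above the service rate: the queue fills, then drops begin.
print(run_switch(arrivals_per_tick=3, serviced_per_tick=2, ticks=10))  # (20, 7)
```

Note that in the overloaded case the drops only start once the queue has filled up – which is exactly the window a delay-based estimator tries to exploit.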
As a rule of thumb, the more packets a switch has in its queue, the longer each packet will take to get routed to its next hop, increasing its latency.
And here lies the whole idea of NetSense: it monitors changes in the delay between received media packets, and from that tries to deduce what is happening in the switches along the media’s route. If it detects that a switch somewhere is starting to accumulate packets in its internal queues – NetSense will re-estimate the available bandwidth and act accordingly – without ever getting to the point of experiencing packet losses.
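Here is a deliberately simplified sketch of that idea (the function name, thresholds, and adjustment factors are all illustrative assumptions – NetSense’s actual algorithm is not public): compare the delay trend across consecutive packets, and if the delay keeps growing – a sign that a queue is building up – back off the bitrate estimate before any packets are lost.

```python
def update_estimate(estimate_kbps, send_ts, recv_ts,
                    overuse_threshold_ms=5.0, decrease=0.85, increase=1.05):
    """Return a new bitrate estimate from paired send/receive timestamps (ms).

    All parameters here are hypothetical, chosen for illustration only.
    """
    # Delay gradient: change in (recv - send) between consecutive packets.
    # A sustained positive gradient means a queue is filling up somewhere.
    gradient = 0.0
    for i in range(1, len(send_ts)):
        gradient += (recv_ts[i] - send_ts[i]) - (recv_ts[i - 1] - send_ts[i - 1])
    if gradient > overuse_threshold_ms:
        return estimate_kbps * decrease   # queues filling: back off early
    return estimate_kbps * increase       # delay stable: probe for more

# Packets arriving with growing delay (10 -> 13 -> 18 -> 25 ms) signal a
# queue building up, so the estimate is lowered before any loss occurs:
send = [0, 20, 40, 60]
recv = [10, 33, 58, 85]
print(update_estimate(1000, send, recv))  # 850.0
```

The key property is that the trigger is the delay *trend*, not loss: the estimator reacts while the queue is still filling, which a loss-based estimator (part 2 of this series) cannot do.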
Here’s a diagram illustrating how NetSense works – you can compare it to the blueprint provided for the packet-loss technique in my previous post.
What do you gain from NetSense?
- Better video experience, as packet losses are largely avoided
- Lower latency, as NetSense tries to reduce congestion on the network