- [Author: Yang et al.]
- [Keywords: Network measurement]
Existing measurement solutions mainly pursue a good trade-off among accuracy, speed, and memory usage. Only the state-of-the-art UnivMon also pays attention to generality (using one sketch to serve many tasks), and it indeed strikes a good trade-off among all four dimensions. However, none of these solutions addresses one fundamental need: achieving accurate network measurements no matter how traffic characteristics (available bandwidth, flow size distribution, and packet rate) vary, and in practice these characteristics can vary drastically.
It is therefore desirable to design an elastic data structure that can dynamically allocate an appropriate amount of memory to elephant flows. This requires the sketch to be elastic: adaptive to bandwidth, packet rate, and flow size distribution. Besides elasticity, the sketch must satisfy three other measurement requirements: generic, fast, and accurate.
- Adaptive to bandwidth: compress and merge sketches
- Adaptive to high packet rate: only record elephant flows
- Adaptive to number of elephant flows: dynamically increase memory size for heavy flows
- Measurement tasks: keep the necessary info for each packet while discarding the IDs of mouse flows
- Platforms: both hardware and software versions are proposed, as well as a P4 version.
Implement this on six platforms: P4, FPGA, GPU, CPU, multi-core CPU and OVS.
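The adaptivity mechanisms above rest on a two-part structure: a hash table ("heavy part") that keeps elephant flows via a voting scheme, and a small counter array ("light part") for mouse flows and evicted counts. A minimal Python sketch of this idea follows; the sizes, the vote threshold `LAMBDA`, and the single-row light part are illustrative assumptions, not the paper's actual parameters.

```python
import hashlib

LAMBDA = 8          # eviction threshold (assumed value for illustration)
HEAVY_BUCKETS = 16  # deliberately tiny sizes, for illustration only
LIGHT_COUNTERS = 64

heavy = [None] * HEAVY_BUCKETS  # each bucket: [flow_id, vote_pos, vote_neg]
light = [0] * LIGHT_COUNTERS    # one-row CM-style counter array

def _h(key, seed):
    # deterministic hash for the example
    return int(hashlib.md5(f"{seed}:{key}".encode()).hexdigest(), 16)

def light_insert(flow, count=1):
    light[_h(flow, 1) % LIGHT_COUNTERS] += count

def insert(flow):
    i = _h(flow, 0) % HEAVY_BUCKETS
    b = heavy[i]
    if b is None:                      # empty bucket: install the flow
        heavy[i] = [flow, 1, 0]
    elif b[0] == flow:                 # same flow: count it in the heavy part
        b[1] += 1
    else:                              # other flow: vote against the incumbent
        b[2] += 1
        if b[2] >= LAMBDA * b[1]:      # incumbent loses: evict it to light part
            light_insert(b[0], b[1])
            heavy[i] = [flow, 1, 0]
        else:                          # incumbent stays: new flow to light part
            light_insert(flow)

def query(flow):
    # sum the light-part counter and, if present, the heavy-part count
    est = light[_h(flow, 1) % LIGHT_COUNTERS]
    b = heavy[_h(flow, 0) % HEAVY_BUCKETS]
    if b is not None and b[0] == flow:
        est += b[1]
    return est
```

The key point the code illustrates is that only elephant flows keep their IDs (in the heavy part), while mouse flows degrade into anonymous counters in the light part.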
The CAIDA trace is used.
Evaluation metrics are:
- Average relative error (ARE) of estimated flow sizes (number of packets)
- F1 score for heavy hitter detection (flows whose sizes exceed a threshold) and heavy change detection (flows whose sizes increase or decrease beyond a threshold across two adjacent time windows)
- Weighted mean relative error (WMRE) for flow size distribution estimation
- Relative error (RE) for the accuracy of entropy (of flow sizes) and cardinality (number of flows) estimations
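The metrics above can be written down concretely. The definitions below follow their common usage in the sketch literature; the exact weighting in WMRE is an assumption on my part and may differ slightly from the paper's formula.

```python
def are(true_sizes, est_sizes):
    """Average Relative Error: mean of |est - true| / true over all flows."""
    return sum(abs(est_sizes[f] - s) / s for f, s in true_sizes.items()) / len(true_sizes)

def re(true_val, est_val):
    """Relative Error for a scalar estimate (entropy, cardinality)."""
    return abs(est_val - true_val) / true_val

def f1(true_set, est_set):
    """F1 score for heavy hitter / heavy change detection (sets of flow IDs)."""
    tp = len(true_set & est_set)
    precision = tp / len(est_set) if est_set else 0.0
    recall = tp / len(true_set) if true_set else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def wmre(true_dist, est_dist):
    """Weighted Mean Relative Error between flow size distributions,
    given as {size: number_of_flows_with_that_size} dicts."""
    sizes = set(true_dist) | set(est_dist)
    num = sum(abs(true_dist.get(i, 0) - est_dist.get(i, 0)) for i in sizes)
    den = sum((true_dist.get(i, 0) + est_dist.get(i, 0)) / 2 for i in sizes)
    return num / den
```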
Memory and Bandwidth Usage
It is observed that for flow size estimation, the bandwidth usage of Elastic is always less than its memory usage, since sketches are compressed before transmission. The gain from compression is not dramatic because Elastic itself already achieves extremely high accuracy, leaving the compression algorithm little room to improve it further.
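The bandwidth saving comes from compressing the counter array before sending it to the collector: counters sharing the same residue are grouped and only the group maximum is kept, which preserves the no-underestimate property of CM-style queries (the query index is simply taken modulo the smaller length). This toy `compress` function is an illustrative sketch of that idea, not the paper's exact algorithm.

```python
def compress(counters, k):
    """Compress a counter array by factor k (length assumed divisible by k).
    Compressed slot i holds the max of all original counters at indices
    congruent to i modulo the new length n, so for any hash value h:
        compressed[h % n] >= counters[h % (k * n)]
    i.e. queries against the compressed sketch never underestimate."""
    n = len(counters) // k
    return [max(counters[i::n]) for i in range(n)]
```

Because only `n` counters are shipped instead of `k * n`, transmitted bytes shrink by a factor of `k`, at the cost of extra (one-sided) overestimation error.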