Published May 31, 2017
Everyone has read the news. Mobile data is on the rise, driven by an ever-growing number of new apps and a video content streaming explosion. Furthermore, LTE-A and 5G promise to fuel this growth by bringing more bandwidth to the mobile device. According to the most recent Cisco Visual Networking Index (VNI) Global Mobile Data Traffic Forecast Update, mobile data traffic grew 63% in 2016, reaching 7.2 exabytes per month by year's end.
Of course, we have also heard how mobile backhaul and RAN network providers are preparing for this data onslaught by building cloud-based networks leveraging the latest developments in SDN and NFV technologies. These networks are being created to enable massive scale while, at the same time, addressing the performance demands of 5G, including gigabit to the mobile device and sub-millisecond, one-way latency. So, everything is okay, right? Or is it?
With 5G's heavy reliance on one-way, sub-millisecond latency, how does a carrier ensure this performance throughout its network, whether that network is pure SDN/NFV or a hybrid of SDN/NFV and traditional platform-based technologies? This level of performance monitoring has always required both ends of the path to have very well synchronized timestamps so that latency in each direction can be accurately measured. The problem is that most networks today do not support this ability, and adding it ubiquitously throughout the network is both costly and complex. Instead, carriers rely on two-way, round-trip measurements, divided by two, to estimate one-way latency. While this has worked well enough for older technologies and services, LTE-A and 5G demand visibility into the real one-way delay metrics to ensure performance. Mobile services tend to be asymmetrical in their bandwidth requirements, so it is likely that latency will be asymmetrical as well. Further, the signaling required to coordinate features such as multi-cell broadcast, or to support applications such as self-driving vehicles, demands very tight tolerances on one-way latency.
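The shortcoming of the divide-by-two approach is easy to see with numbers. A minimal sketch (the delay values below are illustrative, not measured data):

```python
# Illustrative figures for a path whose uplink and downlink delays differ,
# as is common when mobile traffic is asymmetrical.
uplink_ms = 0.9     # true one-way delay, device -> core
downlink_ms = 0.3   # true one-way delay, core -> device

rtt_ms = uplink_ms + downlink_ms   # what a two-way probe actually measures
estimate_ms = rtt_ms / 2           # the conventional RTT/2 estimate

# The estimate splits the difference between the two directions,
# so the real 0.9 ms uplink delay is reported as 0.6 ms.
print(round(estimate_ms, 3))                # 0.6
print(round(uplink_ms - estimate_ms, 3))    # 0.3 ms of hidden uplink delay
```

Against a sub-millisecond 5G budget, an error of this size is the difference between passing and failing the service-level objective.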
So, will carriers be forced to invest in GPS solutions or packet-based timing protocols, such as IEEE 1588v2, to support highly precise, synchronized timestamping? The answer: not necessarily. EXFO’s Universal Virtual Sync feature, available on many of the EXFO Active Verifier models, can derive sub-millisecond, one-way latency metrics from standard two-way latency protocols, such as ITU-T Y.1731 or the IETF RFC 5357 Two-Way Active Measurement Protocol (TWAMP), without the need for a synchronized timestamp at the far end. In fact, the remote site can be any industry-standard reflector for Y.1731 or TWAMP.
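To see why an unsynchronized reflector is normally a problem, consider the four timestamps a TWAMP session produces. The sketch below uses the RFC 5357 timestamp roles (the numeric values are hypothetical) and shows which quantities survive a clock offset between the two ends; it is not a description of EXFO's method:

```python
# TWAMP-style timestamps, in seconds.
# T1 = sender transmit, T4 = sender receive   (sender's clock)
# T2 = reflector receive, T3 = reflector transmit (reflector's clock)
T1, T4 = 10.0000, 10.0012   # sender clock
T2, T3 = 99.0007, 99.0009   # reflector clock, NOT synchronized to the sender

processing = T3 - T2              # reflector dwell time: offset cancels out
rtt = (T4 - T1) - processing      # round-trip path delay: also offset-free

# True one-way delay would be T2 - T1, but with unsynchronized clocks
# that difference (~89 s here) is dominated by the offset, not the path.
print(round(rtt * 1000, 3))       # round-trip delay in ms
```

Only the differences taken within a single clock are trustworthy here, which is exactly why one-way metrics have traditionally required synchronized timestamps at both ends.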
To find out more about EXFO’s Universal Virtual Sync feature, visit our website or download the following white paper: Measuring one-way delay for optimal delivery of revenue-generating services.