Measuring the pulse of your network — Part 1

Synchronization has been a fundamental requirement in digital networks since the introduction of PDH systems in the 1980s, through the rollout of SDH/SONET in the 1990s to today’s optical transport networks (OTNs). Synchronization distribution has been widely available through the physical layer of all of these technologies, and equipment vendors and network operators alike have had the luxury of utilizing this to transmit time and timing information from the core to the edge.

The increasing demands on network operators to supply richer service offerings at more competitive rates have forced a convergence of network technologies toward packet-based methods. This has caused a paradigm shift in the availability of physical-layer synchronization—moving away from the circuit-switched world to that of connectionless operation. The requirements for stable time and timing still exist for many applications and nodes within the packet-switched network, and new technologies have been devised to transport synchronization over higher layers through packet networks. Consequently, synchronization performance must now be tested and monitored not only at the physical layer but at the packet layer as well.

Even as service providers move towards an all-packet network, the footprint of time-division multiplexing (TDM) services and circuits is still a major part of the network. TDM transport continues to represent significant revenue for service providers and carriers and must still be supported during the transition phase. Therefore, synchronization must be maintained in the TDM portion of the network and essentially implemented in the packet portion of the network to ensure smooth operations and interoperability between both domains.

The growth of wireless services is driving the push for Ethernet as a replacement technology in backhaul networks. However, synchronization is a must for cellular and wireless operation: base stations must be synchronized with one another to hand off calls, minimize dropped calls and ensure proper billing—all reflected in customer satisfaction.

Network and Synchronization Evolution

Digital network (voice + data)
  • Access link: 2W loop
  • Synchronization profile: central primary reference clock (PRC); distribution over copper E1; local node clock phase-following oscillators; non-redundant; 1:N protection

SDH transport (voice + data + multimedia)
  • Access link: 2W loop / dial-up FLC
  • Synchronization profile: distributed primary reference sources (PRS); G.812 type I, II, V or VI filtering/holdover oscillators; remote management; distribution impaired by SDH payload pointers, creating timing islands

Broadband services, 2010 → (voice + data + multimedia, all-IP signaling, routing/LDP)
  • Access link: 2W loop / xDSL / cable, evolving toward all kinds of access types
  • Synchronization profile: new requirements (1+1 protection; time of day via NTP/PTP/UTI; remote management); NGN transport technologies create even more timing islands; network monitoring and management

Synchronization Basics

Synchronization can be defined as the coordinated and simultaneous relationship of time-keeping among multiple devices. For people outside of the telecom world, synchronization typically refers to time synchronization, where one or more devices have the same time as a reference clock, typically Coordinated Universal Time (UTC); when synchronized, two devices will have the proper time of day (ToD) relative to this universal time reference, regardless of their geographical location.

However, for network engineers, synchronization has a very precise and critical meaning. Telecom networks, such as SONET and SDH networks, are based on a synchronous architecture, meaning that all data signals are synchronized and clocked using virtually the same clock throughout. This ensures that all ports that carry data do so at the same frequency, or with very little offset, and network throughput is therefore deterministic and fixed for a specific transport rate.

Ethernet on the other hand is an asynchronous technology where each Ethernet port has its own independent clock circuit and oscillator. Because each port is clock independent, frequency offsets between interconnected ports can be relatively high. To solve this issue, Ethernet devices typically implement buffers that can store traffic and then mitigate the effect of offsets between two ports. Therefore, telecom networks require two other types of synchronization in addition to time synchronization, that is, frequency synchronization and phase synchronization.

Frequency synchronization is typically a physical-layer synchronization in which the output clocks of devices are synchronized. When two devices are frequency-synchronized, they generate the same number of bits over an integration period (typically 1 second). When they are not frequency-synchronized, one device will generate more bits per second than the other, which can cause buffer overflow and eventually bit errors or traffic loss.
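To put numbers on this, the short sketch below estimates how quickly a frequency offset fills a receive buffer. The bit rate, offset and buffer depth are illustrative assumptions (an E1-like link), not values taken from any standard.

```python
# Sketch: how a frequency offset between two clocks fills a buffer over time.
# All numbers below are assumptions chosen for illustration.

NOMINAL_BPS = 2_048_000        # nominal bit rate of both ports (E1-like)
OFFSET_PPM = 4.6               # frequency offset of sender vs. receiver
BUFFER_BITS = 256              # slip-buffer depth at the receiver

# Surplus bits arriving each second due to the offset
extra_bits_per_s = NOMINAL_BPS * OFFSET_PPM / 1e6

# Time until the buffer overflows and a slip (data loss) occurs
seconds_to_slip = BUFFER_BITS / extra_bits_per_s

print(f"{extra_bits_per_s:.2f} extra bits/s -> slip after {seconds_to_slip:.1f} s")
```

Even a few parts per million of offset therefore produces a slip within tens of seconds, which is why TDM interworking demands tightly bounded frequency offsets.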

Phase synchronization refers to the simultaneous variation of clocks between devices. When phase-synchronized, the two devices will shift at exactly the same time from one clock pulse to the other. A real-world example would be to compare two watches side-by-side. When synchronized, these two watches will increment at exactly the same time. When unsynchronized, one device will count faster than the other, and in the network world, these variations are the equivalent of phase offset.

Synchronization Technologies in the Network

Legacy Frequency Synchronization

Modern telecom networks, such as SONET and SDH, are synchronous networks where all transmission is based on a common clock source. These technologies implement hierarchical levels of clock accuracy where a highly precise clock feeds other clocks—each node connects and synchronizes itself to the clock with the highest accuracy.

In SDH, a highly precise, cesium-based master clock, referred to as the “primary reference clock” (PRC), is distributed throughout the network within the data signals by synchronizing the output bit clocks of each node’s SDH equipment clock (SEC). As the clock accuracy degrades with each hop, network nodes known as synchronization supply units (SSUs) are dedicated to regenerating the clock signal, thus ensuring that all nodes remain synchronized to a primary rate. SONET employs the same synchronization mechanism but uses a different terminology: from stratum 1 (highest accuracy) to stratum 4 (lowest accuracy).

Packet Synchronization

As the network moves toward Ethernet as the transport technology of choice, synchronization remains a major issue. As Ethernet and TDM technologies continue to coexist, technologies like circuit-emulation services (CES) provide capabilities to map TDM traffic on Ethernet infrastructure and vice versa, enabling a smooth changeover for network operators transitioning to an all-packet network.

To interconnect these two technologies, frequency synchronization is key, since TDM technologies have frequency-offset tolerances that are much more restrictive than those of asynchronous Ethernet. Ethernet relies on inexpensive holdover oscillators and can stop transmitting traffic or buffer data, while TDM technologies rely on the continuous presence of a synchronization reference. Synchronous Ethernet (SyncE) solves these issues by ensuring frequency synchronization at the physical level. SyncE achieves frequency synchronization by timing the output bit clocks from a highly accurate, stratum 1-traceable clock signal, in a fashion similar to traditional TDM and SONET/SDH synchronization. SyncE supports the exchange of synchronization status messages (SSM) and now includes a newly introduced Ethernet synchronization messaging channel (ESMC), which ensures that a SyncE-enabled Ethernet node always derives its timing from the most reliable source.

However, since SyncE is a layer-1 synchronization technology, it requires that all ports on the synchronized path be SyncE-enabled. Any node on the path that is not SyncE-enabled breaks the synchronization chain from that node onward. This is an issue for network providers that have a multitude of Ethernet ports between the primary synchronization unit and the edge device that needs synchronization, as all of these ports must be SyncE-enabled. Such requirements can increase the cost of deployment, as hardware and software upgrades can dramatically increase the total cost of ownership. SyncE also focuses only on frequency synchronization and does not guarantee phase synchronization—although phase requirements can be somewhat assessed via SyncE.

The next packet synchronization technology, the Precision Time Protocol (PTP), is specifically designed to provide high clock accuracy through a packet network via a continuous exchange of packets carrying timestamps. In this protocol, a highly precise clock source, referred to as the “grandmaster clock”, generates timestamp announcements and responds to timestamp requests from boundary clocks, thus ensuring that the boundary clocks and slave clocks are precisely aligned to the grandmaster. By relying on the holdover capability and precision of the integrated clocks, in combination with the continuous exchange of timestamps between PTP-enabled devices, frequency and phase accuracy can be maintained in the sub-microsecond range, thus ensuring synchronization within the network. In addition to frequency and phase synchronization, ToD synchronization can also ensure that all PTP-enabled devices are synchronized with the proper time, based on Coordinated Universal Time (UTC).
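The timestamp exchange works as follows: the master timestamps its Sync message on transmission (t1), the slave timestamps its reception (t2) and its own Delay_Req transmission (t3), and the master timestamps the Delay_Req arrival (t4). The sketch below shows the resulting offset and delay computation; the timestamp values are hypothetical, chosen only to illustrate the arithmetic.

```python
# Sketch of the two-way timestamp exchange PTP uses to estimate clock
# offset and path delay. Timestamp values (in seconds) are hypothetical.

t1 = 100.000000   # master sends Sync (master clock)
t2 = 100.000150   # slave receives Sync (slave clock)
t3 = 100.000250   # slave sends Delay_Req (slave clock)
t4 = 100.000300   # master receives Delay_Req (master clock)

# Assuming a symmetric path, the one-way delay and the slave's offset
# from the master are:
delay  = ((t2 - t1) + (t4 - t3)) / 2
offset = ((t2 - t1) - (t4 - t3)) / 2

print(f"delay = {delay * 1e6:.1f} us, offset = {offset * 1e6:.1f} us")
```

Note the "symmetric path" assumption baked into these formulas: the computation is only exact when the master-to-slave and slave-to-master delays are equal.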

The advantage of PTP is that, since it is a packet-based technology, only the boundary and slave clocks need to be aware of the nature of the packets; synchronization packets are therefore forwarded like any other data packets within the network. This flexibility reduces the cost of ownership, as the main upgrades to the network are limited to synchronization equipment, unlike the SyncE approach, which requires both synchronization equipment and an upgrade of all Ethernet ports on the link to SyncE specifications.

The major weakness of PTP is also due to its packet nature. As the synchronization packets used by PTP are forwarded through the network between the grandmaster and its hosts, they are subject to all network events, such as frame delay (latency), frame-delay variation (packet jitter) and frame loss. Even with the best practice of assigning high priority to synchronization flows, these packets will still experience congestion and possible routing and forwarding issues, such as out-of-sequence delivery and route flaps. The host clock’s holdover circuit must be stable enough to maintain synchronization when the synchronization packets experience such network events.

As for ToD synchronization, protocols such as Network Time Protocol (NTP) ensure that customers are correctly updated with the time-of-day information based on a standard universal time source. NTP, and its different versions, distribute time and day information periodically to customers, such as personal computers and network devices, while ensuring corrections for geographic locations. ToD synchronization is typically achieved via a connection to Internet time servers, over the air through radio signals or via GPS synchronization.

Purpose of Synchronization Testing/Monitoring

  • Legacy frequency (TDM): frequency synchronization
  • Synchronous Ethernet: frequency synchronization
  • IEEE 1588v2 Precision Time Protocol: frequency, phase and time-of-day synchronization
  • Network Time Protocol: time-of-day synchronization
As clock wander typically occurs over a long period of time, synchronization metrics must also be adapted to long test periods, in conjunction with a stable and highly accurate clock source as a reference. Synchronization metrics typically consist of three key measurements: time interval error (TIE), maximum time interval error (MTIE) and time deviation (TDEV).

  • TIE is a basic measurement of the phase difference between the reference clock and the clock under test, based on the time difference between significant events. This basic measurement, performed over many hours or days, provides the instantaneous offset between the clocks. Because of its instantaneous nature, this measurement is not ideal for long-term characterization, but it provides an assessment of the peak phase variations, which typically lead to failures.
  • MTIE is a measurement based on the TIE data, designed to provide the maximum peak-to-peak value of the TIE within progressively widening observation windows. Typically produced by post-processing the TIE data, MTIE provides the worst-case TIE change within different observation windows and can be used to predict the stability of the clock frequency over time.
  • TDEV is another measurement derived from the TIE data; it provides the average phase variation of the clock by expressing the root mean square (RMS) of the TIE variations over specific measurement windows. As MTIE focuses on the worst case, any peak variation will mask smaller variations. TDEV, on the other hand, averages out the worst peak variations and provides a good indication of periodicities in the TIE. TDEV provides information about the short-term stability of the clock and the random noise in the clock accuracy.
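As a concrete illustration of these definitions, here is a minimal sketch of how MTIE and TDEV can be computed from a series of TIE samples taken at a fixed interval against the reference clock. The function names and test data are assumptions for illustration; the formulas follow the standard definitions (MTIE as the largest peak-to-peak TIE over a sliding window, TDEV as the RMS of second differences of n-sample TIE averages).

```python
# Sketch: TIE-based clock-stability metrics. x[i] is the TIE in seconds
# at sample i, captured every tau0 seconds against the reference clock.

def mtie(x, n):
    """MTIE for an observation window of n intervals (n+1 samples):
    the largest peak-to-peak TIE excursion over all such windows."""
    return max(max(x[j:j + n + 1]) - min(x[j:j + n + 1])
               for j in range(len(x) - n))

def tdev(x, n):
    """TDEV for tau = n * tau0: RMS of the second differences of
    n-sample TIE averages, per the standard estimator."""
    N = len(x)
    terms = []
    for j in range(N - 3 * n + 1):
        s = sum(x[i + 2 * n] - 2 * x[i + n] + x[i] for i in range(j, j + n))
        terms.append(s * s)
    return (sum(terms) / (6 * n * n * len(terms))) ** 0.5

# A pure frequency offset (linear TIE ramp) gives a nonzero MTIE that
# grows with the window, but a TDEV of zero: the ramp has no noise.
x = [0.001 * i for i in range(100)]
print(mtie(x, 10), tdev(x, 5))
```

The example at the bottom shows why both metrics are needed: a constant frequency offset dominates MTIE yet is invisible to TDEV, which responds only to noise and periodic variations.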

Newer metrics are now being introduced to provide better visibility into clock accuracy. Metrics such as the modified Allan deviation (MDEV), although useful, are typically used in lab applications for specific frequency-stability measurements and are rarely used in field scenarios. The shift to packet-based synchronization has also led the industry to define new metrics, maximum average time interval error (MATIE) and maximum average frequency error (MAFE), to better characterize the frequency and phase errors due to packet events such as packet-delay variation. Note that these metrics are still under study by the various synchronization committees.

In addition to these metrics, international standards committees have released guidelines that describe the acceptable levels of synchronization performance in telecommunications networks. These guidelines define the acceptable performance limits of network equipment, with the aim of ensuring trouble-free synchronization once the equipment is deployed and the network is in full service. Synchronization testing against performance masks is therefore a key step in the deployment and maintenance of the network, ensuring that metrics remain within these limits for trouble-free transmission.

Additional Packet Metrics

With the introduction of PTP, network operators must now qualify new packet metrics based on the PTP architecture. In PTP, since synchronization is performed via an exchange of messages, the synchronization flow is sensitive to the presence or absence of messages due to frame-delay variation and frame loss. The PTP flow will be affected by congestion, link failures and queuing under high traffic load, just like any other service, which in turn can affect the accuracy of the synchronization between the boundary or slave clocks and higher-quality clocks.

Moreover, each message travels in one direction only, between the nodes that generate and terminate the synchronization packets. This introduces the concept of unidirectional performance, as one direction can experience more network events than the other. Asymmetrical behavior may cause synchronization packets to experience more delay, congestion and possible loss in one direction, while the other direction remains trouble-free.
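The impact of asymmetry can be quantified directly: because PTP's offset computation assumes equal delays in both directions, any delay difference appears as a time error of half that difference. The numbers below are illustrative assumptions, not measured values.

```python
# Sketch: why path asymmetry biases PTP's offset estimate. PTP assumes
# the forward and reverse one-way delays are equal; any asymmetry shows
# up directly as a time error. Delay values are illustrative.

fwd_delay = 120e-6   # master -> slave one-way delay (congested direction)
rev_delay = 80e-6    # slave -> master one-way delay

# The symmetric-path assumption attributes half of the delay difference
# to clock offset, so the slave ends up in error by:
time_error = (fwd_delay - rev_delay) / 2

print(f"asymmetry-induced time error: {time_error * 1e6:.0f} us")
```

A 40 µs delay asymmetry thus produces a 20 µs time error, regardless of how stable the clocks themselves are, which is why each direction must be characterized independently.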

For such reasons, PTP testing involves not only testing the phase and frequency characteristics of the timing, but also the performance of the synchronization packet flow under network load and network events. PTP’s key performance indicators must be assessed independently for each flow direction to produce reliable results and to understand the relationship between time synchronization and packet events.

Various standards have been defined as the network evolved from TDM to packet solutions. From the test-and-metrics perspective, the standards committees are also studying new metrics specifically for packet-based synchronization technologies. ITU-T Study Group 15, Question 13 is working on new series of standards and metrics, such as the G.826x series, which defines frequency distribution and performance, and the G.827x series, which defines phase/time distribution and performance over packet-based synchronization technologies.

Part 2 of this article examines the tools and techniques required to ensure the successful implementation and ongoing support of synchronization systems. It also introduces EXFO’s test solution for network synchronization and examines the various aspects and advantages of this solution.