The accuracy of Telecom Boundary Clocks (T-BCs) is essential to the successful roll-out of LTE-A and TDD-LTE. To meet the new G.8273.2 compliance limits, T-BCs must meet a very stringent constant Time-Error (cTE) limit of 20 or 50 nanoseconds depending on device class. This ensures that the maximum number of nodes can be deployed within the network’s Time-Error budget.
This Test Guide (below) shows how the Calnex Paragon-100G can be used to test T-BC compliance to G.8273.2, and provides procedures to measure noise generation, time noise tolerance and transfer, packet-layer transient response and holdover performance.

One of the main drivers for the whole network at the moment is 5G. Study Group 15 has been asked to look into the development of the transport network for 5G, including the backhaul and access networks as well as the synchronisation. While 5G isn’t planned to roll out until 2020 and beyond, some operators are planning field trials this year and need equipment in place ready for those trials.
The ITU’s Study Group 15 met in Geneva again this month. Study Group 15 is the group responsible for the optical transport and access networks, including synchronisation. Once again, synchronisation was the biggest single topic, with over 80 contributions from many different companies.
The accuracy of Telecom Boundary Clocks (T-BCs) is still essential to the successful roll-out of LTE-A and TDD-LTE. With that in mind, we recently revised our application notes on Boundary Clocks.
This application note describes what a boundary clock is, and explains the critical performance parameters defined in G.8273.2. Watch out for an accompanying application note - 'Measuring Time Error Transfer of G.8273.2 T-BCs' - which we will publish by the end of the month.
Time Sensitive Networking (TSN) - and its evolution - is clearly of great interest in a number of industry sectors, as demonstrated both by the papers presented at this year’s TSN/A conference in Stuttgart and by the diversity in background of the delegates.
Calnex was at the 2017 ISPCS Plugfest & Symposium this year. The event started with a Plugfest where multiple vendors plugged in and tested their Precision Time Protocol (PTP, or 1588) equipment. This is a valuable open session where vendors and researchers from different communities can perform real-world connections and tests, which contribute to their development and research processes.
For a long time now, operators have been asking about how to use PTP to transfer time across their existing networks. Vendors say it’s possible, but the standards are not there. At least, until now. The ITU-T have just taken a big step forward with the agreement of G.8271.2 at their June meeting. What this standard does is define the requirements on the network for PTP to be able to work accurately over existing networks.
Following on from the ITU meeting I recently attended, a bit more detail on the Fronthaul topic.
In 4G, the “fronthaul” concept was born, separating the baseband unit from the radio unit, and connecting them using dedicated fibre. A protocol, CPRI (Common Public Radio Interface), was invented to carry what was basically the radio signal over the fibre. The radio unit simply had to modulate that signal onto the carrier, so it could be very simple and cheap.
STAC (Securities Technology Analysis Center) arrived in London after stops in Chicago and New York, and as always with these events, there was a local focus. The impending MiFID 2 directive, in particular the trade timestamping requirements, inevitably played a prominent part in proceedings. In fact, timing, sync and latency papers dominated the first half of the event.
The Best Master Clock Algorithm (BMCA) is run by all PTP ports in a system, and is a distributed algorithm. Initially all ports send out Announce messages advertising their capabilities, and from that each port and each clock determines which source to synchronise to. For a master-capable port, that may include itself, in which case it becomes a grandmaster.
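The dataset comparison at the heart of the BMCA can be sketched in a few lines. This is a simplified illustration, not the full IEEE 1588 algorithm: the real BMCA compares the complete Announce dataset and includes topology tie-breakers, and the field subset and names here are chosen for readability.

```python
from dataclasses import dataclass

@dataclass
class AnnounceData:
    """Simplified subset of the IEEE 1588 Announce dataset (illustrative only)."""
    priority1: int
    clock_class: int
    accuracy: int
    variance: int
    priority2: int
    clock_identity: str  # final tie-breaker: lower identity wins

    def key(self):
        # The BMCA compares fields in this fixed order; lower values are "better".
        return (self.priority1, self.clock_class, self.accuracy,
                self.variance, self.priority2, self.clock_identity)

def best_master(candidates):
    """Return the best clock from a list of advertised Announce datasets."""
    return min(candidates, key=AnnounceData.key)

# A GNSS-locked clock (low clockClass) beats a free-running one:
gm = AnnounceData(128, 6, 0x21, 0x4E5D, 128, "00:1B:19:00:00:01")
bc = AnnounceData(128, 248, 0xFE, 0xFFFF, 128, "00:1B:19:00:00:02")
print(best_master([gm, bc]).clock_identity)
```

Each port runs this comparison over the Announce messages it receives (plus, for a master-capable port, its own dataset), which is how the decision stays consistent across the network without any central coordinator.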
What value could an SME get from attending a suited & booted funfair with almost 100,000 others?
Mobile World Congress, a forum of just about anything with a connection to the mobile network has seen a surge of interest, relevance and of participation in recent years.
Excessive jitter impacts the ability of clock recovery circuits to recover the clock properly, which can lead to mistiming inside transmission equipment when data is regenerated. When timing errors become large, bit errors are introduced, leading to excessive packet loss. Jitter is generally expressed in terms of Unit Intervals, where a Unit Interval (UI) equals one bit time of a digital NRZ binary signal.
For the first blog of the new year, I’m going to look back at some of the significant events in the synchronization world from 2016. The year got off to an interesting start with a reminder that we can’t always trust GPS. Back in January, the GPS system started putting out the wrong time. This wasn’t caused by jamming or spoofing, but was a configuration error caused by trying to de-commission an old satellite.
The ITU approved two new recommendations on synchronisation this weekend. G.8272.1 is the first “enhanced” clock specification aimed at meeting the requirements for 5G mobile infrastructure. The document specifies the enhanced PRTC (Primary Reference Time Clock), basically a very high accuracy GNSS timing receiver, capable of delivering time to within 30ns of UTC.
I just got back from Prague after the ITSF 2016 conference. If you weren’t there, you missed another great event in a beautiful city. You can always book for next year, when the conference will be in Warsaw, Poland, or if you can’t wait that long, there is always the WSTS conference in San Jose, California in the first week of April.
I recently had a question on “Ask Tim” about a great sync knowledge source - a website called 'Sync University'. The questioner found it extremely useful with lots of great content but was now unable to find it anywhere.
I recently had a question on “Ask Tim” about how to distinguish between a packet-based frequency signal, delivered using the PTP profile in G.8265.1, and a Synchronous Ethernet signal delivered using the Ethernet physical layer.
Virtual Birth? About 15 years ago, I remember reading a book on the future of telecoms. At the time, the 3G mobile system was just in development, and the ultimate 3G speed was projected to be 384 kbit/s (compared to the 56kbit/s I was getting to my house).
What is PTP?
PTP stands for “Precision Time Protocol”, and is described in IEEE Standard 1588. It is a protocol for distributing time across a packet network. It works by sending a message from a master clock to a slave clock, telling the slave clock what time it is at the master.
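The message from the master is only half the story: the slave also has to account for how long the message took to arrive. PTP does this with a two-way exchange of four timestamps, and the arithmetic can be sketched as below, assuming the forward and reverse path delays are equal (the classic PTP assumption):

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Compute slave clock offset and mean path delay from one PTP exchange.

    t1: master sends Sync          (master timescale)
    t2: slave receives Sync        (slave timescale)
    t3: slave sends Delay_Req      (slave timescale)
    t4: master receives Delay_Req  (master timescale)
    Assumes the forward and reverse path delays are equal.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, mean_path_delay

# Slave clock 150 ns ahead of the master, with 500 ns of one-way path delay
# (offset comes out ≈ 150 ns, delay ≈ 500 ns):
offset, delay = ptp_offset_and_delay(0.0, 650e-9, 1000e-9, 1350e-9)
print(offset, delay)
```

Any asymmetry between the two directions shows up directly as an error in the computed offset, which is why network asymmetry is such a big deal in PTP deployments.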
Jitter has been around for as long as the telecommunications industry has been trying to shift bits and bytes from A to B. Jitter is not cool; it’s unloved, nobody wants it, and just like the bore at an office party, it won’t go away. Now, just to be clear, we are talking about physical layer jitter here.
I have just finished talking with the CEO of Calnex, Tommy Cook, who had just completed a series of customer visits in Japan and China. The chat amongst the operators about 5G being just over the horizon is starting. Tommy went on to say ...
Quote of the day: “Clock sync is a pain, princess. Anyone who tells you differently is selling you something.” Neil Horlock, at the MiFID II Workshop on 26th May 2016. This was a workshop on how to meet the timestamping requirements set out in MiFID II, the latest European legislation on financial markets.
There’s a buzz around the topic of Time Sensitive Networking at present. It is being linked with the “Industrial Internet of Things” (IIoT), although it is not exclusively about industrial networks.
The concept began with audio and video distribution as the “Audio/Video Bridging” group of IEEE, and is now being extended to cover industrial applications.
Following on from my post “What is Time?”, a clock is simply a device that counts regular events from a common starting point. That applies to all clocks and calendars, with the possible exception of a sundial! The regular events might be days, months and years, or they might be pendulum swings, quartz vibrations, or atomic transitions.
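That counting definition can be stated as a one-line formula: a clock's reading is its starting point plus the number of events counted times the duration of each event. A minimal sketch (names are my own, for illustration):

```python
def clock_reading(epoch, tick_count, tick_period_s):
    """A clock is a counter: elapsed time = number of ticks × tick duration,
    measured from an agreed starting point (the epoch)."""
    return epoch + tick_count * tick_period_s

# A quartz watch ticking at 32768 Hz, one hour's worth of ticks after its epoch:
print(clock_reading(0.0, 32768 * 3600, 1.0 / 32768))  # → 3600.0 seconds
```

The same formula covers a calendar (epoch = a reference date, tick = one day) and a caesium clock (tick = one atomic transition period); only the event and the starting point change.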
ITU's Study Group 15, the body associated with transmission and networks, met in Geneva recently. One interesting statistic I heard during that meeting was that Question 13, the synchronization sub-group, receives as many contributions as some entire study groups. Synchronization is far and away the biggest “Question” within ITU.
The only thing that's standard is confusion! Why does ITU say this, and IEEE say that? What on earth is MEF doing? I see Small Cell Forum are in on the act? Why can’t 3GPP sort out what they want?
I saw a cartoon recently that explained neatly why we have so many different standards for the same thing ...
Your watch says you have a minute to go. You walk into the meeting room all set, and a sea of angry faces look up at you, saying “Where have you been? You’re late!” What went wrong?
The answer is your watch – it was 5 minutes out. This is called time error. It is the difference between the time reported by a clock (or watch), and the reference clock (for example, the clock on the meeting room wall).
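In code, time error is just a subtraction, and the average of repeated time-error measurements gives the constant (fixed-bias) part. A minimal sketch; the function names are my own:

```python
def time_error(clock_time, reference_time):
    """Time error: the clock's reported time minus the reference time."""
    return clock_time - reference_time

def constant_time_error(te_samples):
    """cTE: the average of a set of time-error samples, i.e. the fixed bias."""
    return sum(te_samples) / len(te_samples)

# The watch read 9:00 while the meeting-room clock read 9:05 (times in minutes):
print(time_error(9 * 60 + 0, 9 * 60 + 5))  # → -5, the watch is 5 minutes slow
```

The sign convention matters: a negative time error means the clock under test is behind the reference, a positive one means it is ahead.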
IEEE1588 (2008) is a huge standard, 269 pages long. It defines the Precision Time Protocol (PTP), a protocol for distributing time over a packet network. Thing is, types of packet networks are ten a penny these days.
There are industrial networks, power networks, telecom networks, audio networks, video networks, in-car networks just to name a few. All have subtly different requirements and therefore IEEE1588 contains loads of different options and features that simply aren’t appropriate for every network.
One of the biggest drivers behind the renewed interest in time and synchronization is the mobile industry. The latest generation of mobile technology requires that the basestations are not only synchronized in frequency, but in time too. This is because many of the techniques used for increasing capacity in the mobile network depend on accurate time alignment between basestations.
Traditionally, timing and synchronization has been slow moving, pedestrian, and not very exciting. Not much to write about! However, that has changed in recent years. I attended my first ITU-T meeting on synchronization back in 2004.
We had seven attendees and eleven contributions to consider. Now, we often get 35-40 attendees.
The ITU has produced over a dozen standards to do with time and synchronization over the last ten years. Why so many? Partly it’s a matter of evolution, and partly it’s a matter of purpose.
The first standard released back in 2006 was G.8261, and it covered “general aspects” of frequency distribution in packet networks. Time wasn’t a topic back then. G.8261 evolved into a set of standards for frequency distribution:
Since this blog is all about time, it might be useful to try and explain what time is. So this is me trying to do what countless philosophers, theologians and physicists have been attempting to do for centuries.
GSA confirms 393 LTE networks launched, year end forecast raised, LTE-Advanced and VoLTE deployments booming. April 9, 2015: 393 operators have commercially launched LTE in 138 countries. This is according to data released today by GSA (Global Mobile Suppliers Association) in the latest update of the Evolution to LTE report.