How RMS jitter (time domain) is quantified from the standard deviation of a Gaussian distribution

Period jitter is the short-term variation of the clock period from the average (mean) clock period. If the average, or reference, period is T0 (see Figure 1), then period-jitter samples are taken as T1 - T0, T2 - T0, T3 - T0, and so on, up to 10,000 samples (per JEDEC standard JESD65B). Plotting these jitter samples as a histogram typically yields a Normal (Gaussian) distribution (see Figure 2). The region within one standard deviation (±1σ) of the mean μ (dark blue) contains about 68% of the samples; ±2σ (medium and dark blue) contains about 95%; and ±3σ (light, medium, and dark blue) contains about 99.7%. The Normal distribution yields two common jitter specifications:
• Root Mean Square (RMS) jitter, RJ(RMS): the value of one standard deviation, σ, of the distribution. Because this value converges as the number of samples increases, it is considered the more meaningful measurement. However, it is valid only for a purely Gaussian distribution (no deterministic jitter).
• Peak-to-peak jitter: the distance from the smallest to the largest measurement on the Normal curve. In most circuits this value keeps increasing as more samples are taken (see Figure 3), so to arrive at a meaningful peak-to-peak value, a target bit error ratio (BER) must also be specified. Refer to application note AN-815 for more details. For other questions not addressed by the Knowledge Base, please submit a technical support request.
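The measurement described above can be sketched in a few lines of Python. This is an illustrative simulation, not a device measurement: the clock frequency (100 MHz) and injected jitter (σ = 2 ps) are assumed values chosen for the example. It draws 10,000 period samples with purely Gaussian jitter, forms the jitter samples Tn - T0, and reports σ as RJ(RMS), the peak-to-peak spread, and the fraction of samples inside ±1σ (which should approach about 68%).

```python
import math
import random
import statistics

# Illustrative parameters (assumed, not from a datasheet):
# a 100 MHz clock (nominal period T0 = 10 ns) with purely
# Gaussian period jitter of sigma = 2 ps.
random.seed(42)
N = 10_000          # sample count, per JESD65B
T0 = 10e-9          # reference (mean) period, seconds
SIGMA = 2e-12       # injected RMS period jitter, seconds

# Simulated period measurements and jitter samples T1-T0, T2-T0, ...
periods = [random.gauss(T0, SIGMA) for _ in range(N)]
jitter = [t - T0 for t in periods]

rj_rms = statistics.pstdev(jitter)   # one standard deviation = RJ(RMS)
pk_pk = max(jitter) - min(jitter)    # grows as more samples are taken

# Fraction of samples within ±1 sigma; theory: erf(1/sqrt(2)) ≈ 0.683
within_1sigma = sum(abs(j) < rj_rms for j in jitter) / N

print(f"RJ(RMS)      ≈ {rj_rms * 1e12:.2f} ps")
print(f"peak-to-peak ≈ {pk_pk * 1e12:.2f} ps")
print(f"±1σ fraction ≈ {within_1sigma:.3f} "
      f"(theory {math.erf(1 / math.sqrt(2)):.3f})")
```

Rerunning with a larger N leaves RJ(RMS) essentially unchanged while the peak-to-peak value keeps growing, which is why peak-to-peak jitter is only meaningful alongside a stated BER.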


Related documentation:
Application Notes & White Papers
AN-815, "Understanding Jitter Units" (Application Note, PDF, 476 KB, Apr 23, 2014)