February 2, 2011
Here are my notes and observations from day 3 of DesignCon.
This paper presented research into error-correcting codes that would work in multi-gigabit serial systems.
One point made is that DFE equalizers tend to multiply single-bit errors into burst errors. The effect of this error propagation on BER is limited, but it has a big effect on the mean time to false packet acceptance (MTTFPA) for packets protected by CRC32. The usual requirement is that the MTTFPA be longer than the age of the universe (about 1e10 years), and burst error propagation can push the MTTFPA below this threshold, even though the result is still quite large.
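To make the propagation mechanism concrete, here is a minimal sketch (my own illustration, not from the paper) of a one-tap DFE: the equalizer cancels post-cursor ISI using its own past decision, so a single noise-induced wrong decision corrupts the cancellation for the next bit and one error can snowball into a burst.

```python
import random

def dfe_burst_demo(n_bits=10_000, fb_tap=0.6, noise=0.25, seed=1):
    """Toy 1-tap DFE over a channel with one post-cursor ISI tap.

    The channel adds fb_tap * previous_bit of interference; the DFE
    subtracts fb_tap * previous *decision*.  After a wrong decision the
    cancellation has the wrong sign, doubling the interference on the
    next bit -- the mechanism that turns single errors into bursts.
    """
    rng = random.Random(seed)
    tx = [rng.choice((-1, 1)) for _ in range(n_bits)]
    decisions, errors = [tx[0]], []
    for n in range(1, n_bits):
        rx = tx[n] + fb_tap * tx[n - 1] + rng.gauss(0, noise)
        eq = rx - fb_tap * decisions[-1]   # feedback uses the past DECISION
        d = 1 if eq >= 0 else -1
        decisions.append(d)
        errors.append(d != tx[n])
    return errors

errs = dfe_burst_demo()
adjacent = sum(1 for a, b in zip(errs, errs[1:]) if a and b)
print(sum(errs), "bit errors,", adjacent, "occurred back-to-back")
```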
Many codes that have been successful in radio applications are not attractive for serial copper because they are too complex, require too much processing overhead, and have too much latency. Codes mentioned in the presentation were Reed-Solomon and cyclic codes (RS(264,260,2) in particular).
In their analysis they assumed perfect equalization of the channel, so Gaussian noise channel assumptions were used for all coding gain calculations.
One thing I learned is a method that has been used in some Ethernet standards to add coding bits without adding any additional overhead. The 64b/66b code uses a 2-bit sync header to identify each block as data or control. Because only two header values are legal, the header can be compressed to a single bit, and the freed bit can be used as a code bit. Multiple 66-bit blocks can then be concatenated together to form complete code words carrying all of the necessary code bits.
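Here is a rough sketch of the header-compression idea as I understood it (the framing below is my own illustration, with even parity standing in for whatever code bit the scheme actually carries):

```python
def compress_sync_header(block66):
    """Squeeze a 66-bit 64b/66b block's 2-bit sync header down to 1 bit.

    Only 0b01 (data) and 0b10 (control) are legal headers, so a single
    type bit carries the same information.  The freed bit position is
    then available for a code bit -- here, even parity over the payload.
    """
    header = block66 >> 64
    assert header in (0b01, 0b10), "invalid 64b/66b sync header"
    type_bit = 0 if header == 0b01 else 1    # 0 = data, 1 = control
    payload = block66 & ((1 << 64) - 1)
    parity = bin(payload).count("1") & 1     # illustrative code bit
    # New 66-bit block layout: [type_bit][parity][64-bit payload]
    return (type_bit << 65) | (parity << 64) | payload
```

Concatenating several such blocks collects enough freed bits to hold the parity of a longer code word, which is how the line rate stays at 66 bits per block.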
Interleaving was mentioned as a way to increase burst error correcting capability.
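The idea is the standard row/column trick; a minimal block interleaver (my own illustration) looks like the following, where a burst of up to `depth` consecutive channel errors lands in `depth` different code words:

```python
def interleave(symbols, depth):
    """Write symbols row-wise into a depth x width matrix, read column-wise."""
    width = len(symbols) // depth
    return [symbols[r * width + c] for c in range(width) for r in range(depth)]

def deinterleave(symbols, depth):
    """Inverse permutation: recover the original row-wise order."""
    width = len(symbols) // depth
    return [symbols[c * depth + r] for r in range(depth) for c in range(width)]

# Each row (width consecutive input symbols) is one code word.  A burst of
# `depth` consecutive errors on the wire hits each code word only once.
data = list(range(12))
assert deinterleave(interleave(data, depth=4), depth=4) == data
```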
Synchronization was also discussed: block synchronization would be done by checking the parity bits of the received data and bit-slipping if the result is not error free.
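A sketch of that hunt-for-alignment loop, assuming for illustration that each block carries even parity (the actual code bits would serve the same role):

```python
def find_block_alignment(recv_bits, block_len, n_check=16):
    """Hunt for block alignment by parity checking, slipping one bit at a time.

    Assumes each block carries even parity over its bits (an illustrative
    framing, not the paper's exact code).  Declares lock at the first
    offset where n_check consecutive blocks all pass their parity check.
    """
    for offset in range(block_len):
        locked = True
        for k in range(n_check):
            blk = recv_bits[offset + k * block_len : offset + (k + 1) * block_len]
            if len(blk) < block_len or sum(blk) % 2 != 0:
                locked = False     # parity failed: slip one bit and retry
                break
        if locked:
            return offset
    return None                    # no error-free alignment found
```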
This is a paper I would like to read in more detail.
This was my paper presentation (co-presented with Paul Schad). There were approximately 33 people present. Here are the questions we were asked afterwards:
- How did you handle independent input variables of voltage and current for control? Were they controlled independently?
- Would a HW floating point unit in the FPGA be attractive for this application?
- Why did the loop rate increase from 2 kHz to 250 kHz?
- What was the end application?
- Could an FPGA based PID controller be used in switch mode power supply design?
- What language did you implement the design and simulation in?
It is possible to use statistics to predict the number of switching events at the output based on statistics of switching events at the input. This only gives the average power noise, however, and an estimate of the peak is really needed to design the power distribution system.
To determine the peak, you need to know the statistical distribution of power per clock cycle to find the mean and standard deviation of the power. Then a reasonable estimate can be made for the peak.
The focus of this paper was to develop a computationally fast method of finding this distribution.
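My notes don't capture the authors' exact estimator, but the flavor of the approach is something like the following: given the per-cycle distribution, the expected maximum over N cycles of roughly Gaussian power draw is about mu + sigma*sqrt(2 ln N). The estimator below is my own illustration of that idea, not the paper's formula.

```python
import math

def peak_power_estimate(per_cycle_power, horizon_cycles):
    """Estimate peak power from samples of power per clock cycle.

    Uses the classic extreme-value approximation for the maximum of N
    roughly Gaussian samples: mu + sigma * sqrt(2 * ln(N)).  This is an
    illustrative estimator, not necessarily the authors' method.
    """
    n = len(per_cycle_power)
    mu = sum(per_cycle_power) / n
    sigma = math.sqrt(sum((p - mu) ** 2 for p in per_cycle_power) / n)
    return mu + sigma * math.sqrt(2.0 * math.log(horizon_cycles))
```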
Keynote – Ivo Bolsens (Xilinx)
He talked about the new crossover devices that bridge the gap between the ASSP and the FPGA. These are programmable system-on-chip devices that include processors, memories, and programmable logic. This type of device has many advantages, including tight integration between CPU and hardware or between multiple CPUs, and fast, flexible time to market. The drivers of this new crossover technology are both monolithic chips and 3D system-in-package. Overall I found his presentation compelling, but the fundamental flaw of programmable logic, its high unit cost, was not really addressed. As long as the unit cost is multiples higher than ASSP and SoC processor devices, it is hard to see a dominant place for programmable logic except in high-performance applications.
I only stayed for the very first part of this. I was really curious about the different signal modulations being used, but the author revealed early on that he only considered NRZ and PAM4. I was hoping for a wider survey, and there was another interesting presentation I wanted to see.
This was presented by someone from Altera. They first attempted to integrate a measurement circuit with their multi-gigabit transceivers in Stratix IV at 40 nm and presented it as a paper last year (Ding et al. 2010). That first generation only measured the horizontal eye opening, and I don't think it was actually made commercially available in those devices. This year they are presenting a full 2D on-die scope for Stratix V at 28 nm. It works using the BERT scan method (like a BERTScope or J-BERT): it sweeps the clock sample position to scan in the horizontal direction and uses high-speed comparators to measure amplitude in the vertical dimension. An embedded BER pattern checker evaluates whether each bit is correct at each decision threshold. Here are some other interesting points (a sketch of the scan loop follows the list):
- 32 horizontal steps, 64 vertical steps
- Works with a single fixed PRBS7 pattern
- Good for up to 12.5 Gbps
- Stratix V: will be available on every transceiver in every device
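To make the scan procedure concrete, here is a rough sketch of the loop. `sample_bits` is a hypothetical stand-in I've invented for the on-die sampler hardware; the PRBS7 generator is the standard x^7 + x^6 + 1 sequence.

```python
def prbs7(n, state=0x7F):
    """Generate n bits of the PRBS7 (x^7 + x^6 + 1) test pattern."""
    out = []
    for _ in range(n):
        bit = ((state >> 6) ^ (state >> 5)) & 1
        state = ((state << 1) | bit) & 0x7F
        out.append(bit)
    return out

def eye_scan(sample_bits, h_steps=32, v_steps=64, bits_per_point=100_000):
    """2-D BER scan: sweep sampling phase (horizontal) against slicer
    threshold (vertical), counting mismatches against the expected PRBS7
    pattern at each point.  sample_bits(phase, threshold, n) is a
    hypothetical stand-in for the transceiver's sampling hardware."""
    expected = prbs7(bits_per_point)
    eye = [[0.0] * h_steps for _ in range(v_steps)]
    for v in range(v_steps):
        for h in range(h_steps):
            received = sample_bits(h, v, bits_per_point)
            errors = sum(r != e for r, e in zip(received, expected))
            eye[v][h] = errors / bits_per_point  # BER at (phase, threshold)
    return eye
```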
This technology sounds similar to the Vitesse patented V-Scope that I wrote about in a past article. I wasn't sure how the Vitesse solution worked, so I stopped by their booth, and they explained that it uses a very similar BERT scan technique.
SI simulation was originally focused on verifying setup and hold timing as part of a worst-case design flow. The SNR in those systems was so high that BERs were too small to be worth worrying about. As data rates have increased, SI simulation has become increasingly complicated in a world where nearly everything matters: simulations must be run across manufacturing tolerance variations of dozens of parameters, and it is impractical to spend that much time in simulation.

These authors from Intel have developed a methodology that converts transmission channel S-parameters to an equivalent Gaussian-noise SNR value, also taking into account the transmitter signal strength, jitter, and noise. The concept is to estimate the SNR. In pre-layout analysis this method can be part of a budgeting process. Post-layout tools can analyze every net and produce insertion loss and crosstalk S-parameters. Even if the tools aren't perfect, the process still gives a relative quality comparison between nets. A PCB designer can intuitively learn how to fix outlying nets, and the process requires limited SI simulation. They reported that they have successfully used the methodology on real designs with DDR-style memory buses.
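I didn't capture their exact mapping, but the flavor is something like the crude figure of merit below (my own stand-in, not the Intel formula): attenuate the transmit swing by the insertion loss at Nyquist and compare it against power-summed crosstalk plus a fixed noise floor.

```python
import math

def channel_snr_db(il_db_at_nyquist, xtalk_db_list,
                   tx_swing_mv=800, noise_mv_rms=10):
    """Collapse channel S-parameters into a single SNR-like figure of merit.

    il_db_at_nyquist: victim insertion loss |S21| in dB at Nyquist (negative).
    xtalk_db_list: per-aggressor coupling in dB at Nyquist (negative).
    A crude illustration of the concept, not the authors' actual method:
    signal power after loss versus power-summed crosstalk plus noise.
    """
    sig_mv = tx_swing_mv * 10 ** (il_db_at_nyquist / 20)
    xtalk_mv_sq = sum((tx_swing_mv * 10 ** (x / 20)) ** 2 for x in xtalk_db_list)
    noise_mv_sq = xtalk_mv_sq + noise_mv_rms ** 2
    return 10 * math.log10(sig_mv ** 2 / noise_mv_sq)

# Example: -8 dB loss at Nyquist, two aggressors at -30 dB and -35 dB
print(f"{channel_snr_db(-8.0, [-30.0, -35.0]):.1f} dB")
```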
The authors' method relies on imperfect simulation tools, but instead of striving to build perfect simulations, it uses the tools at their current capabilities to get enough information for the board designer to make design trade-offs. The methodology could easily be built into EDA layout tools. It sounds like a very promising option for bringing a reasonable multi-gigabit signal integrity methodology back down to an intermediate skill level.
This panel was really focused on who the IC designer of the future is, not the system designer role I am involved in. The main point seemed to be that the RTL chip design methodology is running out of steam and that design must occur at a higher level of abstraction, by engineers having a cross-functional mix of application, software, and hardware expertise. I didn't stay for the whole thing.
This panel included a semiconductor manufacturer (NXP), a distributor (Digi-Key), a PCB fab vendor (Sunstone Circuits), an assembler (Screaming Circuits), and an EDA vendor (National Instruments). They discussed a partnership they have formed to make the process of designing, fabricating, and assembling a board much more seamless for the design community. Some examples:
- Schematic symbols and footprints available for all tools so companies don’t need to have their own library groups
- Pricing information matches data book part numbers
- Simulation models for all components
- One stop shopping for PCB fabrication and assembly
Probably one of the more interesting things I learned from this was just the existence of Screaming Circuits. They provide very fast-turn assembly of printed circuit boards.
Ding, Weichi, Mingde Pan, Tina Tran, Wilson Wong, Sergey Shumarayev, and Mike Peng Li. 2010. “An On-Die Scope Based on a 40-nm Process FPGA Transceiver.” DesignCon 2010 Conference Proceedings. Available: http://www.altera.com/literature/cp/cp-01066-on-die-scope.pdf [accessed Feb 8, 2011].