How to develop a 10 Gb connector
Wednesday, 05 April, 2006
The design process for a high-speed connector is coloured by the experiences of the individual designer and his or her exposure to the different aspects and requirements of the products he or she has developed in the past. Typically, senior project engineers will have been exposed to connector products across many markets and applications.
Marketing input created a developmental challenge for a new 10 Gb I/O connector that demanded more speed and density in an existing package configuration.
The initial design question was "What would it take to make the SFP (z-axis pluggable) style connector capable of 10 gigabit differential transmission?".
From a connector perspective, the main goal was to use a known connector technology, but greatly increase its intrinsic data transmission capability.
From an application perspective, it enabled the use of existing form factors while increasing the density of high-speed signals. Some of the other design guidelines were:
- No major changes to the basic interface: same card thickness, slot height and card tongue size;
- No major restriction on a new mounting footprint (SMT), though, as always, board space must be conserved;
- Propose alternative versions of the connector, including a vertical and a special recessed configuration.
The connector, like the SFP, was not to be subjected to direct mechanical loading in application. Stress isolation was accomplished by using a separate guide/shield assembly with a latching mechanism.
Separating the electrical and mechanical structures allowed for more flexibility in design and helped to control overall cost by simplifying the connector assembly.
This project used experience garnered on earlier high-speed design and modification projects. Once the concepts, three in all, were defined, a ranking method was created to compare basic mechanical and electrical performance with manufacturing/cost impact.
Once the basic design approach was chosen, the challenge was to resolve the mechanical and manufacturing issues without compromising the expected electrical performance.
As with many pure developmental programmes, until a real market opportunity arrives, the priority of and support for the programme are not high. A new user opportunity quickly changed the priorities. Proposals were needed for a new connector for the External PCI Express interconnect.
Cost, speed and density were all critical requirements. With the incentive of an immediate market potential, resources were assigned and product development launched.
Before things could really get under way, one potential barrier needed to be resolved. This was the question of market acceptance of a proposed new footprint configuration, which had one row of partially hidden SMT leads. The existing SFP design had all of the SMT leads readily visible and re-workable.
Users have long resisted solder joints that are difficult to inspect or repair; however, connectors with SMT attachment had become much more common, and the introduction of dense BGA attachment had driven improved inspection methods.
Would partially hidden SMT leads now be acceptable?
Review with both users and contract manufacturers, based on the pitch of the product and past SMT attachment experience, indicated that this would not be a problem. The green light was given to develop the concept fully.
The development of a high-speed connector requires that signal integrity performance be an integral and continuous component of the design process. This mandated that an engineer with a signal integrity background iteratively analyse the design, from the conceptual stages to the final production design, including the development of tooling.
The results of these analyses were continuously communicated back to the engineering team to form a closed-loop design process. Models were developed for electrical analysis. Each section was evaluated for performance and tuned, some structures in composite dielectric (mostly plastic) and others in air dielectric.
Electrical modelling showed that the length of the retention feature needed to be optimised, along with the contact lead-in, the mating PCB pad dimensions, the SMT pad and foot attachment, and the transitions from the SMT attachment to the beam retention area. The aim was to minimise or eliminate 90° turns.
At high speeds, it is convenient to see the signal paths through the connector as transmission lines. Transmission lines consist of a series of small circuit elements distributed throughout the line, as opposed to being represented by large lumped circuit elements.
The appropriate size of these elements must ultimately be determined as a function of the highest application data rate that the connector is targeted to support. The PCIe connector supported Generation 1 (2.5 Gbps) and Generation 2 (5.0 Gbps) speeds, but was analysed at 10 Gbps. This allowed higher resolution of SI features (smaller elements in the connector), as well as ultimately providing an extended application life for the connector.
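As a rough illustration of how the target data rate sets the size of those distributed elements, the sketch below estimates how finely a connector path would need to be subdivided for a 10 Gbps signal. The 35 ps rise time and the "delay under one-sixth of the rise time" threshold are common rules of thumb assumed here, not figures from this programme.

```python
# Rough estimate of how finely a connector path must be subdivided before
# lumped-element modelling stops being valid. Values are illustrative
# assumptions, not figures from the connector programme.

C0 = 3.0e8          # speed of light in vacuum, m/s
ER_EFF = 1.0        # effective dielectric constant (terminals mostly in air)
RISE_TIME = 35e-12  # assumed 20-80% rise time for a 10 Gbps NRZ signal, s

v = C0 / ER_EFF ** 0.5          # propagation velocity, m/s
spatial_extent = v * RISE_TIME  # physical length occupied by the rising edge

# Common rule of thumb: treat a feature as "electrically small" (lumped)
# only if its delay is well under the rise time, e.g. one-sixth of it.
max_lumped_length = spatial_extent / 6

print(f"Rising edge spans about {spatial_extent * 1e3:.1f} mm in air")
print(f"Model segments should be shorter than ~{max_lumped_length * 1e3:.2f} mm")
```

Against a connector roughly 60 ps long electrically, this kind of estimate explains why the path has to be treated as several cascaded regions rather than one lumped element.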
The goal of the PCIe connector design was to make the signal integrity characteristics of the connector transparent, relative to the overall channel performance.
Several physical features of the connector mitigated the effects of typical SI characteristics. The short electrical length of the PCIe connector (approximately 60 ps) aided in providing a low insertion loss and a low accumulation of far-end crosstalk. The high degree of symmetry achieved with stamped terminals inserted into moulded housings, together with the choice of a GSSG signal assignment, provided isolation through the connector, minimising crosstalk.
The eventual design decision to route the terminals predominantly in air also helped in providing minimal insertion loss. Consequently, the most critical SI performance benchmark of the connector was impedance management, or return loss.
Impedance management was analysed primarily in the time domain during development but was also continuously validated in the frequency domain. If the other SI characteristics had required the same degree of focus, they could be addressed in the same closed loop design process. Again, the design basis for the PCIe connector development was the SFP/XFP connector.
Working from a previous design as a basis allowed the signal integrity engineer to use the lessons learned to highlight areas for improvement, as well as to identify areas that are sensitive to change. Treating each of these areas as elements in a transmission line optimised the total development time of the PCIe connector.
Each area can be modelled, analysed and tuned individually to provide the best impedance management throughout the connector (see Figure 1). In addition, modelling these areas separately saved field-solver time; ultimately, several models could be solved and analysed concurrently to find the optimum balance between mechanical and signal integrity performance. From past experience with the SFP/XFP connectors, the use of stub features to retain stitched terminals in a plastic housing offered the largest opportunity for improvement from an impedance management standpoint.
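A minimal sketch of the "tune each region separately, then solve them together" idea is shown below, cascading ideal lossless transmission-line ABCD matrices for a few regions and checking the composite return loss. The section impedances and delays are invented placeholders standing in for individually tuned field-solver models, not the PCIe connector's actual geometry.

```python
import numpy as np

Z_REF = 100.0   # differential reference impedance, ohms

def line_abcd(z0, delay_s, freq_hz):
    """ABCD matrix of an ideal lossless line of characteristic impedance z0."""
    theta = 2 * np.pi * freq_hz * delay_s   # electrical length in radians
    return np.array([[np.cos(theta), 1j * z0 * np.sin(theta)],
                     [1j * np.sin(theta) / z0, np.cos(theta)]])

def s11_from_abcd(m, z_ref):
    """Input reflection coefficient of a cascaded ABCD matrix."""
    a, b, c, d = m[0, 0], m[0, 1], m[1, 0], m[1, 1]
    return (a + b / z_ref - c * z_ref - d) / (a + b / z_ref + c * z_ref + d)

# Hypothetical per-region models: (impedance in ohms, delay in seconds),
# e.g. SMT pad, retention area, air section, contact beam.
sections = [(85.0, 8e-12), (95.0, 12e-12), (102.0, 30e-12), (110.0, 10e-12)]

freqs = np.linspace(0.1e9, 10e9, 200)
return_loss_db = []
for f in freqs:
    total = np.eye(2, dtype=complex)
    for z0, delay in sections:          # cascade the individually tuned regions
        total = total @ line_abcd(z0, delay, f)
    return_loss_db.append(-20 * np.log10(abs(s11_from_abcd(total, Z_REF))))

print(f"Worst return loss up to 10 GHz: {min(return_loss_db):.1f} dB")
```

Re-running such a cascade whenever one region's model changes is the closed-loop step the article describes, without having to re-solve every region in the field solver at once.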
Stubs in a transmission path contribute excess capacitance, creating a localised impedance dip. In this design, stubs embedded in plastic retain the terminals in the housing. The stub created by the retaining barb in the upper row of terminals had already been modified to allow the SFP connector to support XFP applications. The localised impedance was taken from a ~70 ohm value at 2.5 Gbps rise-times to better than 90 ohm at 10 Gbps rise-times. The lower row of terminals was extremely space constrained, significantly complicating improvement.
As the current SFP and XFP applications did not contain high-speed signals on these terminals, the decision was made not to improve them at that time.
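As a back-of-the-envelope illustration of why a retention stub's excess capacitance matters more at faster rise times, the sketch below applies the textbook first-order TDR approximation for a small shunt capacitance on a matched line. The capacitance value and rise times are assumed placeholders, not measured figures from this connector.

```python
Z0 = 100.0          # nominal differential impedance, ohms

def apparent_min_impedance(c_excess, rise_time):
    """First-order TDR estimate of the dip caused by a small shunt capacitance."""
    rho = -Z0 * c_excess / (2 * rise_time)   # peak reflection coefficient
    return Z0 * (1 + rho) / (1 - rho)

C_STUB = 0.12e-12   # assumed excess capacitance of a retention stub, farads

for label, tr in (("2.5 Gbps-class edge (~140 ps)", 140e-12),
                  ("10 Gbps-class edge (~35 ps)", 35e-12)):
    print(f"{label}: dip to ~{apparent_min_impedance(C_STUB, tr):.0f} ohm")
```

The same parasitic that is nearly invisible at a slow edge produces a pronounced dip at a 10 Gbps-class edge, which is why the retention features dominated the impedance-management work.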
Sometimes it is necessary to alter an entire design paradigm to overcome performance issues with an over-constrained region. The PCIe connector was being designed for a new program and was not yet limited by physical specifications. Thus the program provided the opportunity and the incentive to improve the lower row of terminals.
Historically, the lower connector row was constrained by the need to retain it, and route the signal from the specified SMT pad location to the specified contact location. This was accomplished by using a terminal with a hairpin turn and an approximately 1.5 mm long retention barb.
Hairpin turns have a tendency to create a localised excess capacitance, as the tight turn acts like a stub. Historically, these two capacitance contributors led to large impedance dips that impeded the support of a channel performance of even 2.5 Gbps in the lower row of terminals. These constraints were eliminated by changing the design paradigm from having the surface-mount pads on both sides of the connector to one where they are all on the same side. This eliminated the hairpin turn and increased the connector terminal length. It allowed a greater degree of freedom in the design of the retention features and facilitated further improvement in top row impedance management.
The next area to be addressed was the impedance through the quasi-uniform connector terminal structure between the SMT pad and the contact beam. A majority of this region was designed for air dielectric, while small portions were still captured in plastic.
Simple field-solver models were created to determine the proper terminal widths for each region. Through successive use of these models, it was possible to tune the impedance/return loss performance of the terminal in both the air and plastic portions of the dielectric. The result was a very well impedance-managed structure.
Empirical data shows that the structure maintains an impedance within 100 ± 5 ohm at 10 Gbps application rise-times. The wider terminal width needed to improve impedance also helped to minimise the overall insertion loss: the terminals are two to four times wider than typical printed circuit board traces.
In addition, the use of air as the primary dielectric provided a substantially lower loss tangent than any printed circuit board material.
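To put a rough number on that advantage, the sketch below compares dielectric loss per unit length in air and in a generic FR-4-class laminate using the standard TEM approximation; the material values are generic assumptions, not measurements from this product.

```python
import math

C0 = 3.0e8   # speed of light, m/s

def dielectric_loss_db_per_m(freq_hz, er_eff, tan_delta):
    """Standard TEM dielectric-loss approximation, converted from Np/m to dB/m."""
    alpha_np = math.pi * freq_hz * math.sqrt(er_eff) * tan_delta / C0
    return 8.686 * alpha_np

FREQ = 5e9   # fundamental frequency of a 10 Gbps NRZ stream, Hz

# Generic assumed material properties, not programme data.
for name, er, tand in (("air", 1.0006, 0.0), ("FR-4-class laminate", 4.2, 0.02)):
    print(f"{name}: {dielectric_loss_db_per_m(FREQ, er, tand):.2f} dB/m at 5 GHz")
```

Over the few millimetres of a connector the absolute difference is small, but it illustrates why routing the terminals in air keeps the connector's contribution to channel loss negligible.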
The contact beam region was then addressed. This needed to be thin enough to deflect and provide the appropriate normal force to ensure a reliable electrical connection, without incurring excessive permanent set. The thinness of the contact beams tends to create excess local inductance. This could have been addressed by adding some plastic between the terminals to create additional capacitance; widening the terminal was not an option because it would have adversely affected the mechanical performance of the contact beam. Since the impedance in that region was not that high, and at lower speeds it served to offset low localised impedances in neighbouring regions, it was left alone.
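A small sketch of that trade-off is given below: for an assumed excess series inductance in the thin beam region, it estimates the TDR impedance peak and the shunt capacitance (for example, added plastic between terminals) that would restore the target impedance via Z0 = √(L/C). The inductance value is purely illustrative.

```python
Z0 = 100.0           # target differential impedance, ohms
L_EXCESS = 0.5e-9    # assumed excess series inductance of the thin beams, henries
RISE_TIME = 35e-12   # assumed 10 Gbps-class rise time, s

# First-order TDR estimate of the impedance peak from the uncompensated inductance.
rho = L_EXCESS / (2 * Z0 * RISE_TIME)
z_peak = Z0 * (1 + rho) / (1 - rho)

# Shunt capacitance (e.g. plastic added between terminals) that would bring the
# L-C section back to the target impedance: Z0 = sqrt(L/C)  =>  C = L / Z0**2.
c_comp = L_EXCESS / Z0 ** 2

print(f"Uncompensated peak: ~{z_peak:.0f} ohm")
print(f"Compensating shunt capacitance: ~{c_comp * 1e15:.0f} fF")
```

In the actual design the modest inductive peak was left in place, since it partly offsets the lower-impedance regions either side of it.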
Finally, the ends of the contact beams are usually bent at an angle to provide a functioning guide or 'lead-in' for the mating board to prevent damage from 'stubbing'. These areas were made as short as possible so that they still performed their mechanical function while optimising electrical performance. In the end, a connector was created that was essentially transparent to the transmission path. As a value-added element, the host and module boards were also evaluated. Specifically, the dimensions of the pads were optimised electrically by tightening the mechanical tolerances. Also, the location of the ground plane underneath these pads and its effect on signal performance was evaluated, and a minimum distance between the two was recommended.
Future work needs to be done in the following areas:
- Controlling signal integrity requirements for unintended modes of propagation, and identifying alternative low-loss, lower-dielectric-constant materials;
- Further reduction in mechanical tolerances to minimise pad dimensions. Reducing mechanical tolerances is critical to achieving performance at 10 Gbps and beyond, as it permits improved electrical optimisation of the geometry involved;
- Improved test fixturing and standards development. Connector specifications are often wrapped around a specific connector, which often leads to over-specification of the connector in some areas.
Once the individual elements of the transmission channel had been through the iterative design process and refined to the point of individually meeting the design requirements for 10 Gbps, cable assemblies and fixtures with prototype connectors were designed and manufactured to test the entire system.
There were two goals for system-level testing. The first was to verify that the channel as a whole met all the design requirements, from insertion and return loss to crosstalk and jitter. The second was to test the channel under stress similar to that of the intended use. Completing both would allow the test engineer to go back to the designer and end user with a high level of confidence that the connector and cable meet or exceed the performance requirements of 10 Gbps.
Before the test engineer can even begin working in the lab, he must put together fixturing and equipment with performance exceeding 10 Gbps. If the fixtures are too lossy and the equipment's bandwidth too low, the results will generally be poor. For system level testing of this channel, the fixture was designed with high performance SMA connectors and board launches, high-speed material, short trace lengths and modelled via structures to the SMT pads. This was done to ensure the edge rates of TDR pulses and data from pattern generators would be as fast as possible. The test fixture was quite capable of being used for pulse measurements such as TDR impedance and TDT crosstalk.
The rise-time degradation of the incident pulse was quite small, so there would be enough bandwidth to make 10 Gbps eye pattern measurements, but it was a good idea to verify this using a network analyser. If the bandwidth of the test fixture were less than the fundamental frequency of the data rate, the eye pattern would exhibit a significant increase in jitter and decrease in eye height. It would not have been possible to determine if the fixture had enough bandwidth by measuring the eye pattern through the 2x calibration structure.
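A quick feasibility check of the kind described above can be made with the common BW ≈ 0.35/t_r relation and root-sum-square rise-time addition, as sketched below; the pattern-generator and fixture numbers are assumptions for illustration, not the actual fixture's measured performance.

```python
import math

DATA_RATE = 10e9                   # 10 Gbps NRZ
FUNDAMENTAL_HZ = DATA_RATE / 2     # 5 GHz fundamental frequency

SOURCE_RISE = 20e-12               # assumed pattern-generator 20-80% rise time, s
FIXTURE_BW = 15e9                  # assumed -3 dB bandwidth of fixture + launches, Hz

# Common first-order relation between -3 dB bandwidth and rise time.
fixture_rise = 0.35 / FIXTURE_BW

# Rise times of cascaded stages combine approximately root-sum-square.
rise_at_dut = math.sqrt(SOURCE_RISE ** 2 + fixture_rise ** 2)

print(f"Fixture-limited rise time: {fixture_rise * 1e12:.1f} ps")
print(f"Edge arriving at the connector under test: {rise_at_dut * 1e12:.1f} ps")
print(f"Fixture bandwidth exceeds the 5 GHz fundamental: {FIXTURE_BW > FUNDAMENTAL_HZ}")
```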
Because of the extremely short rise-times associated with 10 Gbps compared with the length of the physical features of the channel, the board launch, mating interface and cable termination could no longer be treated as a single discontinuity. Each discontinuity, where present, was typically within the same order of magnitude as the others: there was no single discontinuity that dominated the TDR impedance plot. Even if the impedance discontinuity for each was within the design requirements, the return loss for the entire system might not meet the requirements as defined in an industry standard.
Another challenge of designing and testing a channel at 10 Gbps was that many of the industry standards were turning to return loss as the reflection specification of choice. Return loss is a better gauge of the channel as a whole, but difficult to use when refining a design and correlating data to the physical features of the channel.
This meant that the test engineer must be skilled at working with both TDR impedance and return loss data and be willing to test recursively, back and forth between impedance and return loss data, to fully characterise the channel. This same test methodology is also applicable to characterising crosstalk.
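Small helpers like the ones below capture the conversion the test engineer moves back and forth across: an impedance step on one side and return loss in decibels on the other. The 95 ohm and 20 dB example values are arbitrary, and a single lumped step is of course only a crude stand-in for a real distributed channel.

```python
import math

def return_loss_db(z, z_ref=100.0):
    """Return loss implied by a single impedance step from z_ref to z."""
    gamma = (z - z_ref) / (z + z_ref)
    return -20 * math.log10(abs(gamma))

def impedance_from_return_loss(rl_db, z_ref=100.0):
    """Impedance of a single step producing the given return loss
    (takes the higher-impedance of the two possible solutions)."""
    gamma = 10 ** (-rl_db / 20)
    return z_ref * (1 + gamma) / (1 - gamma)

print(f"95 ohm step against 100 ohm: {return_loss_db(95.0):.1f} dB return loss")
print(f"A 20 dB return loss corresponds to ~{impedance_from_return_loss(20.0):.0f} ohm")
```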
There are many challenges in characterising a channel for operation up to and exceeding 10 Gbps. Several recently adopted industry standards use multi-aggressor or stressed eye pattern tests to assess losses due to mismatch, high frequency attenuation and crosstalk.
The benefits of these tests include being able to measure channel losses due to mismatch, high frequency attenuation and crosstalk with one method. This reduces the time required to characterise a channel when the required and expected performances are known. The main difficulty with the method is that it requires many independent pattern generators to be used as crosstalk sources. For this particular 10 Gbps channel, five crosstalk sources were required to get accurate data, three were near-end crosstalk sources and two were far-end crosstalk sources. This can make this method prohibitively expensive for some.
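A crude worst-case bookkeeping sketch of what the multi-aggressor test probes is shown below: the peak crosstalk from each assumed aggressor is summed linearly and subtracted from the victim eye. The amplitudes are invented placeholders, and a real stressed-eye test measures this closure directly rather than estimating it this way.

```python
# Assumed peak crosstalk amplitudes coupled onto the victim, in volts.
NEAR_END = [0.012, 0.009, 0.007]   # three near-end aggressors
FAR_END = [0.006, 0.004]           # two far-end aggressors

VICTIM_EYE_HEIGHT = 0.240          # assumed eye height without crosstalk, volts

# Worst case: every aggressor peak lines up against the victim simultaneously,
# closing the eye from both rails.
total_xtalk = sum(NEAR_END) + sum(FAR_END)
stressed_eye = VICTIM_EYE_HEIGHT - 2 * total_xtalk

print(f"Aggregate peak crosstalk: {total_xtalk * 1e3:.1f} mV")
print(f"Worst-case stressed eye height: {stressed_eye * 1e3:.1f} mV")
```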
Luckily, new software is available that allows test engineers to calculate the eye pattern, with and without crosstalk sources, once the channel's S-parameters and crosstalk have been adequately characterised using network analysers or TDNA packages. There was also the potential to use evaluation boards from SERDES vendors, or to design custom boards, for this method of testing.

The design process is iterative and non-trivial, using a broad range of skills and tools to arrive at an optimised solution. Analysis of the final product configured for electrical performance creates another set of challenges. Evaluation methods that more accurately reflect true system needs (eye patterns) will better assist in overall product development. They may be more difficult and expensive to use, but will yield benefits in overall product performance and reduced time to market.