17–20 Jun 2018
Leuven, Belgium
Europe/Brussels timezone
On-site registration will be possible on Monday, June 18, 08:30 to 10:00

Re-Thinking Reliability Analysis

18 Jun 2018, 14:00
25m
Oral
Implementation of Radiation Hardening on analogue circuits at cell-, circuit-, and system design level
Evaluation and Qualification

Speaker

Mr Art Schaldenbrand (Cadence Design Systems)

Description

Device reliability analysis has evolved little since the release of BERT in 1989, yet expectations for circuit reliability have changed. While it would be easy to say that the requirements for automotive reliability are the sole factor driving this change, the expected reliability for every application has increased. The average car does not have to sustain the 24-hours-a-day, 7-days-a-week operation that communication chips in data centers must sustain, yet the ability of a chip to operate reliably for the expected lifetime of the product is now a major design consideration.

There are several challenges in performing reliability analysis. First, there is a need for predictive models: the models that predicted the behavior of micron-scale planar CMOS transistors are not sufficient to predict the degradation of nanometer FinFET transistors. Another consideration is the need to redefine what we mean by reliability analysis. Historically, BERT and its successors have focused on device degradation due to electrical stress. This simplistic approach ignores the other factors that can accelerate device degradation, including device temperature and process variation. A final consideration is how reliability simulation is performed. Historically, designers performed a reliability simulation based on their circuit verification testbench, run with worst-case supply-voltage conditions. This approach leaves many gaps in the verification; for example, does device degradation cause the device to age more quickly or more slowly over time? The current approach to analyzing device reliability has not kept pace with the challenges designers face, resulting in an increased risk of field failures, the very thing reliability analysis is intended to eliminate. In this paper we will explore each of these challenges and then approaches for overcoming them. The first challenge we identified was the need for more predictive device models.
Since the original Lucky Electron Model (LEM) [] was developed and implemented in BERT, more advanced models have been developed: single electron excitation (SEE) [] and multiple vibrational excitation (MVE) []. These models offer improvements over the LEM approach but are no longer sufficient to support the new device structures required for advanced-node designs, for example FinFET transistors. However, new aging models have been proposed (Xie et al. []) that provide better prediction of HCI-induced degradation, as well as more predictive estimation of BTI-induced degradation and recovery. This model is extensible, allowing for a unified aging model for both legacy and advanced-node reliability analysis.

The next challenge to overcome is the definition of reliability analysis. We will look at redefining reliability analysis so that all the factors that contribute to device degradation are considered together, not in isolation. This approach solves the accuracy problem but creates a new challenge: in order to improve the quality of results for reliability analysis, many more simulations will be required. For example, to estimate the effect of process variation on device aging, we need to run a Monte Carlo simulation and then perform aging analysis on the results. While this simulation is expensive, it has a couple of benefits. The first is an accurate estimation of the device degradation due to process variation. The second is that we will understand the design margin, so we can avoid overdesign, which results in uncompetitive products, or underdesign, which increases the risk of field failures. Another important point is that these improvements in the simulation methodology will impose additional requirements on model extraction. In addition to process variation, approaches for including other phenomena that accelerate device degradation in the analysis will be discussed.
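The combined Monte Carlo plus aging flow described above can be sketched in a few lines. This is an illustrative toy model only: the power-law degradation form (delta-Vth proportional to t^n), the coefficients, the Vth failure threshold, and the function names are all assumptions for the sketch, not values from any real PDK or simulator.

```python
import random

# Hypothetical power-law aging model combined with Monte Carlo process
# variation. All constants below are invented for illustration.
A_NOM, N_EXP = 2.0e-3, 0.25        # assumed aging prefactor and time exponent
VTH_NOM, VTH_SIGMA = 0.40, 0.02    # assumed nominal Vth and process sigma (V)
LIFETIME_HOURS = 10 * 365 * 24     # 10-year target lifetime
VTH_LIMIT = 0.50                   # assumed end-of-life Vth failure limit (V)

def aged_vth(vth0, hours):
    """Fresh Vth plus power-law stress degradation after `hours`."""
    return vth0 + A_NOM * hours ** N_EXP

def monte_carlo_aging(samples=10_000, seed=42):
    """Fraction of Monte Carlo process draws that violate the Vth limit
    after the full lifetime of stress."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(samples):
        vth0 = rng.gauss(VTH_NOM, VTH_SIGMA)   # one process-variation draw
        if aged_vth(vth0, LIFETIME_HOURS) > VTH_LIMIT:
            failures += 1
    return failures / samples

if __name__ == "__main__":
    print(f"estimated end-of-life failure fraction: {monte_carlo_aging():.4f}")
```

Running aging on every Monte Carlo sample, rather than only on the nominal corner, is what exposes the tail of the distribution and hence the true design margin discussed above.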
The final topic we will explore is the application of mission profiles to reliability analysis. In the context of reliability simulation, a mission profile is a reliability corner. Consider an op-amp: in normal usage the output may swing around the mid-range of the supply voltage, and increasing the power supply voltage increases the stress on the transistors but does not significantly impact lifetime. Yet if, in the application, the op-amp is switched off and its output floats to VDD, the full supply voltage would be applied to the compensation capacitor, potentially causing it to fail in a much shorter time. Using mission profiles allows us to explore how the different competing degradation mechanisms impact the design, in order to assure that the device lifetime requirements will be satisfied under all conditions.
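The idea of a mission profile as a set of weighted operating phases can be sketched as follows. The phase list, the stress-acceleration factors, and the additive power-law accumulation rule are all illustrative assumptions, not a real foundry aging model; the standby phase mirrors the switched-off op-amp example above.

```python
# Hypothetical mission-profile sketch: degradation accumulated across
# operating phases, each with its own stress acceleration and share of
# the product lifetime. Numbers and model form are invented for the sketch.
LIFETIME_HOURS = 10 * 365 * 24     # 10-year target lifetime
N_EXP = 0.25                       # assumed power-law time exponent

# (phase name, assumed stress-acceleration factor, fraction of lifetime)
MISSION_PROFILE = [
    ("active, output near mid-rail", 1.0, 0.70),
    ("switched off, output floats to VDD", 4.0, 0.30),
]

def phase_degradation(accel, lifetime_fraction):
    """Power-law degradation contributed by one mission phase."""
    return accel * (lifetime_fraction * LIFETIME_HOURS) ** N_EXP

def total_degradation(profile):
    """Sum the contributions of every phase in the mission profile."""
    return sum(phase_degradation(a, f) for _, a, f in profile)

def dominant_phase(profile):
    """Name of the phase contributing the most degradation."""
    return max(profile, key=lambda p: phase_degradation(p[1], p[2]))[0]
```

Even in this toy form, the comparison shows why the corner matters: the short switched-off phase can dominate the accumulated stress, which is exactly the failure mode a nominal-operation testbench would miss.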

Summary

In this paper, we will demonstrate how the gaps in the current reliability analysis verification methodology can be closed. The result is more accurate prediction of the effect of different stresses on device degradation, as well as a better understanding of design margin. Consequently, designers will be better able to manage the risk that device degradation poses to product lifetime.

Primary author

Mr Art Schaldenbrand (Cadence Design Systems)
