Tolerance analysis and synthesis of
assemblies subject to loading with process
integration and design optimization tools
By
Maciej Mazur
B. Eng. Mechatronic (Hons.)
A thesis submitted in fulfilment of the requirements
for the degree of Doctor of Philosophy
April 2013
School of Aerospace, Mechanical and Manufacturing Engineering
RMIT University
Melbourne, Australia
ABSTRACT
Manufacturing variation results in uncertainty in the functionality and performance of
mechanical assemblies. Management of this uncertainty is of paramount importance for
manufacturing efficiency. Methods focused on the management of uncertainty and
variation in the design of mechanical assemblies, such as tolerance analysis and synthesis,
have been subject to extensive research and development to date. However, due to the
challenges involved, limitations in the capability of these methods remain. These limitations
are associated with the following problems:
• The identification of Key Product Characteristics (KPCs) in mechanical assemblies (which are required for measuring functional performance) without imposing significant modelling demands.
• Accommodation of the high computational cost of traditional statistical tolerance analysis in early design, where analysis budgets are limited.
• Efficient identification of feasible regions and optimum performance within the large design spaces associated with early design stages.
• The ability to comprehensively accommodate tolerance analysis problems in which assembly functionality is dependent on the effects of loading (such as compliance or multi‐body dynamics). Current Computer Aided Tolerancing (CAT) tools are limited by: the ability to accommodate only specific loading effects; reliance on custom simulation codes with limited practical implementation in accessible software tools; and the need for additional expertise in formulating specific assembly tolerance models and interpreting results.
• Accommodation of the often impractically high computational cost of tolerance synthesis involving demanding assembly models (particularly assemblies under loading). The high computational cost is associated with traditional statistical tolerancing Uncertainty Quantification (UQ) methods reliant on low‐efficiency Monte Carlo (MC) sampling, as illustrated in the sketch below.
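As a concrete illustration of this cost driver, the following minimal sketch estimates assembly yield by MC sampling of a toy stack‐up response. It is not drawn from the thesis case studies; the response function, nominal dimensions, tolerances and specification limits are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def assembly_kpc(x):
    # Stand-in for an expensive CAD/CAE evaluation of a Key Product
    # Characteristic; a trivial two-dimension stack-up is used here.
    return x[..., 0] + x[..., 1]

# Illustrative nominal dimensions, with tolerances treated as +/-3 sigma
nominals = np.array([10.0, 5.0])
sigmas = np.array([0.1, 0.1]) / 3.0

N = 100_000  # every sample is one full model evaluation: the cost driver
samples = rng.normal(nominals, sigmas, size=(N, 2))
kpc = assembly_kpc(samples)

lsl, usl = 14.8, 15.2  # assumed specification limits on the KPC
yield_estimate = np.mean((kpc >= lsl) & (kpc <= usl))
print(f"mean = {kpc.mean():.4f}, std = {kpc.std(ddof=1):.4f}, "
      f"yield = {yield_estimate:.4%}")
```

Because the MC standard error decays only as 1/√N, tight yield estimates require very large N, and each sample is a full model evaluation; this is precisely the expense that motivates the more efficient UQ methods developed in this research.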
This research is focused on addressing these limitations by developing novel methods for
enhancing the engineering design of mechanical assemblies involving uncertainty or
variation in design parameters. This is achieved by utilising the emerging design analysis and
refinement capabilities of Process Integration and Design Optimization (PIDO) tools.
The main contributions of this research fall within three themes:
• Design analysis and refinement accommodating uncertainty in early design;
• Tolerancing of assemblies subject to loading; and,
• Efficient Uncertainty Quantification (UQ) in tolerance analysis and synthesis.
The research outcomes present a number of contributions within each research theme, as
outlined below.
Design analysis and refinement accommodating uncertainty in early design:
• A PIDO tool based visualization method to aid designers in identifying assembly KPCs in early design stages. The developed method integrates CAD software functionality with the process integration, UQ, data logging and statistical analysis capabilities of PIDO tools, to simulate manufacturing variation in an assembly and visualise assembly clearances, contacts or interferences. The visualization capability subsequently assists the designer in specifying critical assembly dimensions as KPCs (a minimal sketch of this variation sampling loop is given after this list).
• A computationally efficient method for manufacturing sensitivity analysis of assemblies with linear‐compliant elements. Reductions in computational cost are achieved by utilising linear‐compliant assembly stiffness measures, reuse of CAD models created in early design stages, and PIDO tool based tolerance analysis. The associated increase in computational efficiency allows an estimate of sensitivity to manufacturing variation to be made earlier in the design process with low effort.
• Refinement of concept design embodiments through PIDO based DOE analysis and optimization. PIDO tools are utilised to allow CAE tool integration, and efficient reuse of models created in early design stages, to rapidly identify feasible and optimal regions in the design space. A case study focused on the conceptual design of automotive seat kinematics is presented, in which an optimal design is identified and subsequently selected for commercialisation in the Tesla Motors Model S full‐sized electric sedan.
These contributions can be directly applied to improve the design of mechanical assemblies
involving uncertainty or variation in design parameters in the early stages of design. The use
of native CAD/E models developed as part of an established design modelling procedure
imposes low additional modelling effort.
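To make the first contribution concrete, the sketch below mimics, in drastically simplified form, the variation sampling loop referred to above: part dimensions are sampled from assumed process capability data, and a clearance measure is evaluated and inspected for interference. In the actual method the clearance evaluation is performed by a parametric CAD model orchestrated by the PIDO tool; the dimensions, standard deviations and closed‐form clearance function here are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5_000

# Hypothetical mating pair: sample part dimensions from assumed process
# capability, as the PIDO workflow does via the parametric CAD model.
bore = rng.normal(20.00, 0.02, N)    # housing bore diameter (mm), illustrative
shaft = rng.normal(19.95, 0.02, N)   # shaft outer diameter (mm), illustrative

clearance = bore - shaft             # negative values indicate interference
interference_rate = np.mean(clearance < 0)
print(f"interference occurs in {interference_rate:.2%} of sampled assemblies")

# Plotting the clearance distribution (e.g. a histogram of `clearance`)
# flags this dimension pair as a candidate KPC when the interference
# frequency is non-negligible.
```

A histogram of the sampled clearances is the kind of visualization output that directs the designer to specify the affected dimensions as KPCs.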
Tolerancing of assemblies subject to loading:
• A novel tolerance analysis platform is developed which integrates CAD/E and statistical analysis tools using PIDO tool capabilities to facilitate tolerance analysis of assemblies subject to loading. The proposed platform extends the capabilities of traditional CAT tools and methods by enabling tolerance analysis of assemblies whose functionality is dependent on the effects of loads. The ability to accommodate the effects of loading in tolerance analysis allows for an increased level of capability in estimating the effects of variation on functionality.
• The interdisciplinary integration capabilities of the PIDO based platform allow CAD/E models created as part of the standard design process to be used for tolerance analysis. The need for additional modelling tools and expertise is subsequently reduced.
• Application of the developed platform resulted in effective solutions to practical, industry based tolerance analysis problems, including: an automotive actuator mechanism assembly consisting of rigid and compliant components subject to external forces; and a rotary switch and spring loaded radial detent assembly in which functionality is defined by external forces and internal multi‐body dynamics. In both case studies the tolerance analysis platform was applied to specify nominal dimensions and required tolerances to achieve the desired assembly yield (a minimal sketch of this yield quantification is given after this list).
• The computational platform offers an accessible tolerance analysis approach for accommodating assemblies subject to loading with low implementation demands.
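The yield quantification step referred to in the case studies above can be illustrated with a minimal sketch: a KPC sample (as would be produced by repeated runs of the tolerance analysis platform) is reduced to process capability indices and a yield estimate. The sample values, specification limits and normality assumption are illustrative, not case study data.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(2)

# Illustrative KPC sample (e.g. a peak resistive torque in N.mm), standing
# in for the outputs of 1000 runs of the tolerance analysis platform.
kpc = rng.normal(300.0, 12.0, 1_000)
lsl, usl = 270.0, 340.0        # assumed specification limits on the KPC

mu, sigma = float(kpc.mean()), float(kpc.std(ddof=1))
cp = (usl - lsl) / (6.0 * sigma)               # process potential
cpk = min(usl - mu, mu - lsl) / (3.0 * sigma)  # potential including centring

# Yield estimated from a normal distribution fitted to the sample
dist = NormalDist(mu, sigma)
yield_fit = dist.cdf(usl) - dist.cdf(lsl)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}, estimated yield = {yield_fit:.4%}")
```

In the platform itself the KPC sample comes from load‐dependent CAD/E simulations rather than a closed‐form distribution, but the reduction to PCIs and yield follows the same pattern.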
Efficient Uncertainty Quantification (UQ) in tolerance analysis and synthesis:
• A novel approach is developed for addressing the high computational cost of Monte Carlo (MC) sampling in statistical tolerance analysis and synthesis, using Polynomial Chaos Expansion (PCE) uncertainty quantification. Compared to MC sampling, PCE offers significantly higher efficiency (a minimal sketch of PCE based moment estimation is given at the end of this section).
• The feasibility of PCE based UQ in tolerance synthesis is established through: theoretical analysis of the PCE method identifying working principles, implementation requirements, advantages and limitations; identification of a preferred method for determining PCE expansion coefficients in tolerance analysis; and formulation of an approach for the validation of PCE statistical moment estimates.
• PCE based UQ is subsequently implemented in a PIDO based tolerance synthesis platform for assemblies subject to loading. The resultant platform integrates: highly efficient sparse grid based PCE UQ; parametric CAD/E models accommodating the effects of loading; cost‐tolerance modelling; yield quantification with Process Capability Indices (PCIs); and optimization of tolerance cost and yield with a multi‐objective Genetic Algorithm (GA).
• To demonstrate the capabilities of the developed platform, two industry based case studies are used for validation: an automotive seat rail assembly consisting of compliant components subject to loading; and an automotive switch assembly in which functionality is defined by external forces and multi‐body dynamics. In both case studies optimal tolerances were identified which satisfied the desired yield and tolerance cost objectives. The addition of PCE to the tolerance synthesis platform resulted in large reductions in computational cost, without compromising accuracy, compared to traditional MC sampling, for which the required computational expense is impractically high.
The resulting tolerance synthesis platform can be applied to tolerance analysis and synthesis
with significantly reduced computation time while maintaining accuracy.
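The following self‐contained sketch illustrates the PCE working principle referred to above for a single standard normal variable: expansion coefficients are obtained by stochastic projection using Gauss quadrature, and the first two statistical moments follow directly from the coefficients. The response function is an illustrative stand‐in for an expensive assembly model; the thesis implementation (Chapter 5) extends this to multiple dimensions with sparse grids.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

def response(xi):
    # Response expressed in a standard normal variable xi; an illustrative
    # stand-in for an expensive, load-dependent assembly simulation.
    return np.exp(0.3 * xi)

order = 4                               # PCE truncation order
nodes, weights = hermegauss(order + 1)  # Gauss rule, weight exp(-x^2/2)
weights = weights / sqrt(2.0 * pi)      # normalise to a N(0,1) expectation

# Stochastic projection: a_i = E[g(xi) He_i(xi)] / E[He_i(xi)^2], with
# E[He_i^2] = i! for the probabilists' Hermite polynomials He_i.
coeffs = np.empty(order + 1)
for i in range(order + 1):
    basis = np.zeros(order + 1)
    basis[i] = 1.0
    num = np.sum(weights * response(nodes) * hermeval(nodes, basis))
    coeffs[i] = num / factorial(i)

mean_pce = coeffs[0]
var_pce = sum(coeffs[i] ** 2 * factorial(i) for i in range(1, order + 1))
print(f"PCE (5 model runs):      mean = {mean_pce:.5f}, "
      f"std = {sqrt(var_pce):.5f}")

xi = np.random.default_rng(3).normal(size=200_000)
mc = response(xi)
print(f"MC (200000 model runs):  mean = {mc.mean():.5f}, std = {mc.std():.5f}")
```

For this smooth response, five model evaluations reproduce the mean and standard deviation that MC sampling approaches only after hundreds of thousands of evaluations; this is the source of the efficiency gain exploited in the tolerance synthesis platform.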
DECLARATIONS
I certify that except where due acknowledgement has been made, the work is that of the
author alone; the work has not been submitted previously, in whole or in part, to qualify for
any other academic award; the content of the thesis is the result of work which has been
carried out since the official commencement date of the approved research program; any
editorial work, paid or unpaid, carried out by a third party is acknowledged; and, ethics
procedures and guidelines have been followed.
Maciej Mazur
April 2013
ACKNOWLEDGEMENTS
I would primarily like to express my gratitude to my supervisors, Dr. Martin Leary and Prof. Aleksandar Subic, for their extensive support. Their expertise, input and advice have been critically valuable throughout this research program.
Additionally, I would like to acknowledge the support of the project's industry partners, SMR Automotive Australia and Futuris Automotive Interiors. Their assistance has provided valuable, industry relevant input into this research.
I would also like to acknowledge the financial support provided by the Commonwealth of Australia,
through the Cooperative Research Centre for Advanced Automotive Technology (AutoCRC).
I am particularly thankful to my family and friends for their continued support and
encouragement.
PUBLICATIONS
The following publications are associated with this research:
Mazur, M., Leary, M. and Subic, A. (2011). Computer Aided Tolerancing (CAT) platform for the design of assemblies under external and internal forces. Computer‐Aided Design, 43(6): 707–719.
Mazur, M., Leary, M., Huang, S., Baxter, T. and Subic, A. (2011). Benchmarking study of automotive seat track sensitivity to manufacturing variation. In: Culley, S.J., Hicks, B.J. and McAloone, T.C. (eds.), Proceedings of the 18th International Conference on Engineering Design (ICED11), Copenhagen, Denmark. The Design Society, Vol. 10: 456–465.
Mazur, M., Leary, M. and Subic, A. (2010). Automated simulation of stochastic part variation to identify key performance characteristics of assemblies. In: Proceedings of the 6th Innovative Production Machines and Systems (IPROMS 2010) Conference.
Leary, M., Mazur, M. and Subic, A. (2009). An integrated case study of material selection, testing and optimization. In: Norell Bergendahl, M. and Grimheden, M. (eds.), Proceedings of the 17th International Conference on Engineering Design (ICED09), Stanford University, USA. The Design Society, Vol. 8: 345–356.
Leary, M., Mazur, M. and Subic, A. (2009). The integration of algebraic material selection and numeric optimization. Machine Design, Monograph. University of Novi Sad.
Leary, M., Mazur, M., Mild, T. and Subic, A. (2011). Optimization of automotive seat kinematics. In: Hung, S. (ed.), Sustainable Automotive Technologies 2010: Proceedings of the 2nd International Conference, Greenville, South Carolina, USA. Springer: 139–144.
Leary, M., Gruijters, J., Mazur, M. and Subic, A. (2010). Benchmarking and optimization of automotive seat structures. In: Wellnitz, J. (ed.), Sustainable Automotive Technologies 2010: Proceedings of the 2nd International Conference. Springer: 63–70.
Leary, M., Mac, J., Mazur, M., Schiavone, F. and Subic, A. (2010). Enhanced shape memory alloy actuators. In: Wellnitz, J. (ed.), Sustainable Automotive Technologies 2010: Proceedings of the 2nd International Conference. Springer‐Verlag, Berlin Heidelberg: 183–190.
Leary, M., Gruijters, J., Mazur, M., Subic, A., Burton, M. and Fuss, F. (2012). A fundamental model of quasi‐static wheelchair biomechanics. Medical Engineering & Physics, 34(9): 1278–1286.
TABLE OF CONTENTS
Abstract ............................................................................................................................... i
Declarations ....................................................................................................................... v
Acknowledgements ........................................................................................................... vi
Publications ...................................................................................................................... vii
Table of Contents ............................................................................................................ viii
List of Figures .................................................................................................................. xiii
List of Tables.................................................................................................................... xvii
Nomenclature ..................................................................................................................xix
List of Symbols ................................................................................................................. xx
1 Introduction ................................................................................................................. 1
1.1 Background and industry collaboration ......................................................................... 1
1.2 Introduction and motivation .......................................................................................... 1
1.3 Research scope and objectives ...................................................................................... 5
1.4 Research questions ...................................................................................................... 10
1.5 Methodology ................................................................................................................ 10
1.6 Key outcomes and contributions ................................................................ 12
1.7 Thesis Outline ............................................................................................................... 14
2 Literature Review ....................................................................................................... 15
2.1 Chapter summary ......................................................................................................... 15
2.2 Stochastic manufacturing systems .............................................................................. 19
2.2.1 Uncertainty ......................................................................................................... 19
2.2.2 Quality ................................................................................................................. 21
2.2.3 Quality loss and cost‐tolerance relationships .................................................... 22
2.2.3.1 Cost‐Tolerance models .................................................................................. 24
2.2.3.2 Cost of quality loss ......................................................................................... 25
2.3 Tolerance analysis ........................................................................................................ 25
2.3.1 Worst‐case and statistical tolerancing ............................................................... 27
2.3.2 Tolerancing schemes .......................................................................................... 27
2.3.3 Manufacturing variation distributions ............................................................... 28
2.3.4 Process Capability Indices (PCI) .......................................................................... 29
2.4 Tolerance modelling ..................................................................................................... 30
2.4.1 Manual tolerance charts ..................................................................................... 30
2.4.2 Parametric CAD based CAT ................................................................................. 32
2.4.3 Abstracted geometry CAT and multi‐variate regions ......................................... 34
2.5 Uncertainty Quantification (UQ) methods .................................................................. 36
2.5.1 Sampling based methods ................................................................................... 38
2.5.1.1 Monte Carlo (MC) simulation ........................................................................ 38
2.5.1.2 Latin hypercube (LHC) simulation .................................................................. 39
2.5.2 Analytical methods ‐ Elementary ....................................................................... 40
2.5.2.1 Root Sum of Squares (RSS) method ............................................................... 40
2.5.2.2 Taguchi method ............................................................................................. 41
2.5.2.3 Other elementary analytical methods ........................................................... 42
2.5.3 Analytical methods ‐ Advanced .......................................................................... 42
2.6 Tolerance synthesis ...................................................................................................... 43
2.6.1 Optimization ....................................................................................................... 45
2.6.2 Optimization algorithms ..................................................................................... 47
2.6.2.1 Genetic algorithm (GA) .................................................................................. 48
2.7 Computer Aided Tolerancing (CAT) tools .................................................................... 49
2.8 Process Integration and Design Optimization (PIDO) .................................................. 52
2.9 Tolerance analysis and synthesis of assemblies subject to loads ................................ 54
2.10 Summary of outcomes and opportunities for further work ........................................ 58
3 Development of enhanced PIDO methods for design analysis and refinement ........... 61
3.1 Chapter summary ......................................................................................................... 61
3.2 Introduction ................................................................................................................. 61
3.2.1 PIDO tools ........................................................................................................... 67
3.2.2 Accommodating manufacturing variation in conceptual and embodiment
design .................................................................................................................. 67
3.2.3 Assembly complexity .......................................................................................... 68
3.2.4 Key Product Characteristics (KPCs) ..................................................................... 68
3.2.5 Assembly response function modelling .............................................................. 70
3.2.6 CAD tools ............................................................................................................ 70
3.2.7 Uncertainty quantification strategy ................................................................... 71
3.3 Visualization method for the identification of KPCs based on sensitivity analysis ..... 72
3.3.1 Potential limitations ........................................................................................... 74
3.3.2 Case Study 3.1 – Visualization method for identification of KPCs in a conceptual
embodiment design of an automotive actuator assembly ................................ 75
3.3.2.1 Process data ................................................................................................... 75
3.3.2.2 PIDO integration ............................................................................................ 76
3.3.2.3 Results ............................................................................................................ 78
3.3.3 Discussion of results ........................................................................................... 81
3.4 Computationally efficient manufacturing sensitivity analysis for assemblies with
linear‐compliant elements ........................................................................................... 82
3.4.1 Manufacturing sensitivity analysis of automotive seat rail assemblies ............. 83
3.4.2 Variation in coefficient of rolling resistance ....................................................... 86
3.4.3 Variation in rolling element contact force ......................................................... 86
3.4.3.1 Linear‐compliant rail representation ............................................................. 87
3.4.3.2 FE contact force model .................................................................................. 88
3.4.4 Variation in rolling element clearance ............................................................... 90
3.4.5 Assumptions ....................................................................................................... 92
3.4.6 Results ................................................................................................................. 92
3.4.6.1 Rail assembly A .............................................................................................. 92
3.4.6.2 Rail assembly B ............................................................................................... 93
3.4.6.3 Rail assemblies C, D and E .............................................................................. 94
3.4.7 Benchmarking of designs .................................................................................... 95
3.4.8 Discussion of results ........................................................................................... 97
3.5 Refinement of concept design embodiments through PIDO based DOE analysis and
optimization ................................................................................................................. 99
3.5.1 Automotive seat kinematics ............................................................................... 99
3.5.2 PIDO based DOE analysis and optimization of the conceptual design of
automotive seat kinematics ............................................................................. 101
3.5.2.1 Results .......................................................................................................... 103
3.5.3 Other applications ............................................................................................ 105
3.5.4 Discussion of results ......................................................................................... 106
3.6 Summary of research outcomes ................................................................................ 108
4 Novel approach for PIDO based tolerance analysis of assemblies subject to loading 111
4.1 Chapter summary ....................................................................................................... 111
4.2 Introduction ............................................................................................................... 111
4.3 Effects of loads in tolerance analysis ......................................................................... 113
4.4 PIDO based tolerance analysis platform .................................................................... 115
4.4.1 Platform flowchart ............................................................................................ 116
4.4.2 Parametric CAD model ..................................................................................... 119
4.4.3 Physical model simulation ................................................................................ 120
4.4.4 Uncertainty quantification strategy ................................................................. 121
4.4.5 Variation database ............................................................................................ 121
4.4.6 Yield estimation ................................................................................................ 122
4.5 Case study 4.1 ‐ Assembly design subject to external forces ................................... 123
4.5.1 Problem definition ............................................................................................ 123
4.5.2 Sources of variation .......................................................................................... 124
4.5.2.1 Variation in injection moulding ................................................................... 124
4.5.2.2 Variation in spring wire ................................................................................ 125
4.5.3 Variation data used in simulation ..................................................................... 125
4.5.4 Simulation model .............................................................................................. 126
4.5.5 Simulation results ............................................................................................. 128
4.5.6 Outcomes .......................................................................................................... 131
4.5.7 Potential sources of error ................................................................................. 131
4.6 Case study 4.2 ‐ Assembly design subject to both external and internal forces ....... 133
4.6.1 Problem definition ............................................................................................ 133
4.6.2 Sources of variation .......................................................................................... 134
4.6.3 Variation data used in simulation ..................................................................... 134
4.6.4 Simulation model .............................................................................................. 135
4.6.5 Simulation results and outcomes ..................................................................... 137
4.6.5.1 Initial simulation .......................................................................................... 137
4.6.5.2 Second simulation ........................................................................................ 138
4.7 Summary of research outcomes ................................................................................ 140
5 PIDO based tolerance synthesis in assemblies subject to loading using polynomial
chaos expansion ............................................................................................................. 143
5.1 Chapter summary ....................................................................................................... 143
5.2 Introduction ............................................................................................................... 144
5.3 Tolerance synthesis .................................................................................................... 145
5.4 PIDO based tolerance synthesis ................................................................................. 146
5.5 Quantification of quality and cost.............................................................................. 148
5.5.1 Cost‐tolerance modelling ................................................................................. 149
5.5.2 Quality loss and process capability ................................................................... 150
5.6 Yield estimation by uncertainty quantification ......................................................... 151
5.6.1 Sampling based UQ methods ........................................................................... 152
5.6.2 Analytical UQ methods ..................................................................................... 152
5.7 Polynomial Chaos Expansion (PCE) ............................................................................ 153
5.7.1 Unidimensional Polynomial Chaos Expansion – Derivation of moment
expressions ....................................................................................................... 154
5.7.2 Multidimensional PCE ....................................................................................... 158
5.7.3 Higher order moments ..................................................................................... 159
5.7.4 Non‐normal distributions and correlated variables ......................................... 160
5.7.5 Methods for calculating PCE coefficients ........................................................ 161
5.7.5.1 Collocation ................................................................................................... 161
5.7.5.2 Stochastic projection ................................................................................... 163
5.7.5.3 Complete product grid quadrature.............................................................. 164
5.7.5.4 Sparse grid quadrature ................................................................................ 166
5.7.5.5 Anisotropic sparse grids and adaptive PCE .................................................. 170
5.7.6 Recommendations for calculating PCE coefficients ......................................... 171
5.7.7 PCE error estimates .......................................................................................... 172
5.8 Case study 5.1 ............................................................................................................ 173
5.8.1 Problem definition ............................................................................................ 173
5.8.2 Variation in rail geometry and tolerance costs ................................................ 176
5.8.3 Simulation models ............................................................................................ 180
5.8.4 UQ strategy ....................................................................................................... 183
5.8.5 Optimization strategy ....................................................................................... 184
5.8.6 Assumptions ..................................................................................................... 184
5.8.7 Simulation results and outcomes ..................................................................... 185
5.9 Case study 5.2 ............................................................................................................ 188
5.9.1 Problem definition ............................................................................................ 188
5.9.2 Simulation model and optimization ................................................................. 190
5.9.3 UQ strategy ....................................................................................................... 191
5.9.4 Optimization strategy ....................................................................................... 192
5.9.5 Simulation results and outcomes ..................................................................... 192
5.10 Summary of research outcomes ................................................................................ 195
6 Conclusion ................................................................................................................ 197
6.1 Chapter Summary ...................................................................................................... 197
6.2 Contributions ............................................................................................................. 198
6.2.1 Design analysis and refinement accommodating uncertainty in early design
(Chapter 3) ........................................................................................................ 198
6.2.2 Tolerancing of assemblies subject to loading (Chapter 4) ............................... 203
6.2.3 Efficient uncertainty quantification in tolerance analysis and synthesis (Chapter
5) ....................................................................................................................... 206
6.3 Future work ................................................................................................................ 210
Appendices ..................................................................................................................... 213
A. Tolerancing schemes ................................................................................................... 214
A.1 Dimensional tolerancing ......................................................................................... 214
A.2 Geometric Dimensioning and Tolerancing (GD&T) ................................................. 214
A.3 Vectorial tolerancing ............................................................................................... 218
B. Process capability ........................................................................................................ 220
B.1 Process capability index – Cp ................................................................................... 220
B.2 Process capability index – Cpk .................................................................................. 220
B.3 Process capability index – Cpm ................................................................................. 221
B.4 Process capability indices – Non‐normal distributions ........................................... 222
References...................................................................................................................... 223
LIST OF FIGURES
Figure 1.1 – Thesis map ............................................................................................................. 9
Figure 2.1 – Literature review topic outline ............................................................................ 16
Figure 2.2 ‐ Classification of costs associated with poor quality (Feigenbaum 2012) ............. 23
Figure 2.3 – Relationship between quality control and cost (Juran 1992) .............................. 24
Figure 2.4 ‐ Normal distribution and confidence intervals. ..................................................... 28
Figure 2.5 ‐ Simple mechanical assembly example with all parameters X1 to X5 subject to a
dimensional tolerance of +/‐ 0.1mm .................................................................... 31
Figure 2.6 ‐ Changing assembly contact conditions due to manufacturing variation of parts.
CAD systems can be limited in ability to automatically modify part mating
conditions to reflect certain realistic part contacts within an assembly. ............. 33
Figure 2.7 ‐ Normal distribution with LHC sampling strata of equal probability (N=8) ........... 39
Figure 3.1 ‐ Steps of planning and design process. Reproduced from Pahl and Beitz 2007
(Pahl et al. 2007). Contributions of this chapter are identified in red. ................. 63
Figure 3.2 ‐ (i) Design flexibility and knowledge versus project timeline (ii) Cost commitment
and accruement during phases of the design process, after (Ullman 2003). ....... 64
Figure 3.3 ‐ Visualization approach for the identification of KPCs within the native CAD
design environment using PIDO tools. .................................................................. 73
Figure 3.4 ‐ Histogram of measured production component used to establish PCIs .............. 76
Figure 3.5 ‐ Parametric CAD assembly model of concept actuator design ............................. 77
Figure 3.6 ‐ PIDO workflow for visualization methodology for identification of KPCs ‐
Actuator assembly. ............................................................................................... 77
Figure 3.7 ‐ Frequency of interference between assembly parts ............................................ 78
Figure 3.8 ‐ Part parameter sensitivity to interference ........................................................... 79
Figure 3.9 ‐ Assembly regions identified as being prone to unwanted part interference. KPCs
were defined to avoid the identified interference scenarios. .............................. 80
Figure 3.10 ‐ (i) Automotive seat (ii) seat rail assembly (iii) ‐ (vii) Alternative rail assembly
section views. ........................................................................................................ 85
Figure 3.11 ‐ Linear compliant rail simplification. ................................................................... 87
Figure 3.12 ‐ FE model details (rail assembly B). All dimensions in mm. ................................ 89
Figure 3.13 ‐ Rail deflection due to interference fit of rolling element (rail assembly B).
Contact area shown in detail. ............................................................................... 90
Figure 3.14 ‐ PIDO workflow associated with seat rail benchmarking study. ......................... 91
Figure 3.15 ‐ Rail assembly A rolling element clearance distribution. (i) Upper ball (ii) Lower
ball. ........................................................................................................................ 93
Figure 3.16 ‐ Rail assembly A rolling element specification limits. Shaded profile corresponds
to nominal rail dimensions. Upper Ball (UB), Lower Ball (LB)............................... 93
Figure 3.17 ‐ Rail assembly B rolling element clearance distributions. (i) Left roller (ii)
Bottom roller (iii) Right ball. .................................................................................. 94
Figure 3.18 ‐ Rail assembly B rolling element specification limits. Shaded profile corresponds
to nominal rail dimensions. Left Roller (LR), Bottom Roller (BR), Right Ball (RB). 94
Figure 3.19 ‐ Rolling element contact force versus local rail displacement. Gradient indicates
stiffness. ................................................................................................................ 95
Figure 3.20 ‐ Magnitude of variation in nominal rolling element clearance versus rail
assembly design. ................................................................................................... 96
Figure 3.21 ‐ Four‐bar linkage and associated nomenclature (Leary et al. 2011). ................ 101
Figure 3.22 ‐ PIDO workflow associated with the second phase of design refinement and
optimization of automotive seat kinematic concept designs. ............................ 102
Figure 3.23 ‐ Four‐dimensional chart indicating performance of Pareto‐optimal solutions in
the conceptual design of automotive seat kinematics. Designs referred to in the
discussion have been labelled with associated identification numbers. ........... 104
Figure 3.24 ‐ (i) Quasi‐static model of user's arm and wheelchair wheel interaction (Leary et al. 2012) ............................................................................................... 106
Figure 4.1 ‐ General tolerance analysis of a mechanical assembly. Stages are identified as per
Section 4.2. .......................................................................................................... 113
Figure 4.2 ‐ PIDO based tolerance analysis platform ............................................................. 118
Figure 4.3 ‐ Spring and spigot assembly. ............................................................................... 123
Figure 4.4 ‐ Spring spigot assembly and FE model of spring. ................................................ 127
Figure 4.5 ‐ PIDO tolerance analysis workflow for Case study 4.1. ....................................... 128
Figure 4.6 ‐ Histogram of clearance measurements for spigot outer diameter (ODmeasure).
(Note: Solid line indicates estimated population distribution based on sample
results. The initial analysis provided a yield of approximately 96.8 % for the
spring outside diameter.) .................................................................................... 129
Figure 4.7 ‐ Histogram of clearance measurements for spigot inner diameter (IDmeasure).
(Note: Solid line indicates estimated population distribution based on sample
results. The initial analysis provided a yield of approximately 97.1 % for the
spring inside diameter.) ...................................................................................... 129
Figure 4.8 ‐ Student chart of IDmeasure .................................................................................... 130
Figure 4.9 ‐ Student chart of ODmeasure .............................................................................. 130
Figure 4.10 ‐ Histogram of clearance measurement error .................................................... 132
Figure 4.11 ‐ Comparison of original and meshed spring geometry. Light shade indicates a difference of mesh geometry from the original by 50.00 × 10^−3 mm (smallest tolerance used in simulation) ............................................................. 132
Figure 4.12 ‐ Rotary switch and spring loaded radial detent assembly model used in Case
study 4.2 .............................................................................................................. 134
Figure 4.13 ‐ Transient resistive torque for 1000 assembly variants resulting from initial
simulation ............................................................................................................ 136
Figure 4.14 ‐ PIDO tolerance analysis workflow for Case study 4.2. ..................................... 137
Figure 4.15 ‐ Histogram of peak resistive torques obtained from initial simulation ............ 138
Figure 4.16 ‐ Student chart of peak resistive torque for initial simulation ........................... 138
Figure 4.17 ‐ Histogram of peak resistive torques for second simulation. ............................ 139
Figure 4.18 ‐ Student chart of peak resistive torque for second simulation ......................... 139
Figure 5.1 ‐ PIDO based tolerance synthesis platform. Extension of the PIDO based tolerance
analysis platform presented in Section 4.4. (Figure 4.2). ................................... 148
Figure 5.2 ‐ (i) Exponential cost‐tolerance relationship, (ii) Chain of cost‐tolerance curves for
multiple manufacturing processes of varying precision (V=3). .......................... 150
Figure 5.3 ‐ Multidimensional full product and sparse grid Gauss‐Hermite quadrature with level 2 for 2 dimensions and growth rule m = 2^(l+1) − 1. ................................ 169
Figure 5.4 ‐ (i) Automotive seat and rail assembly (black) (ii) seat rail assembly section view
including die‐press folding sequence for upper and lower rails......................... 174
Figure 5.5 ‐ Measured rail assembly (i) CMM mounting jig and sample rails under measurement (ii) general jig dimensions (iii) section measurement locations (iv) section view including folding sequence for upper and lower rails (v) sample upper rail variation (vi) sample lower rail variation. ........................................... 178
Figure 5.6 ‐ Influence of additional samples to the change in overall standard deviation for a
total of 24 measured rail sets. ............................................................................ 179
Figure 5.7 ‐ Cost‐tolerance curves for rail bend angles for varying levels of variation control
difficulty. The process curves are plotted only within the feasible limits of the
associated process. ............................................................................................. 179
Figure 5.8 ‐ Cost‐tolerance curves for rail radii angles for varying levels of variation control
difficulty. The process curves are plotted only within the feasible limits of the
associated processes. .......................................................................................... 180
Figure 5.9 ‐ (i) Rail section parameters (alphanumeric label designates stochastic variable – see Table 5.8) ...................................................................................... 181
Figure 5.10 ‐ PIDO tolerance synthesis workflow for Case study 5.1. ................................... 183
Figure 5.11 ‐ Objectives space of tolerance synthesis for Case study 5.1. ............................ 186
Figure 5.12 ‐ Rotary switch and spring loaded radial detent assembly model used in Case
study 5.2 .............................................................................................................. 189
Figure 5.13 ‐ Cost‐tolerance curves for part parameters of radial detent assembly ............ 190
Figure 5.14 ‐ PIDO tolerance synthesis workflow for Case study 5.2. ................................... 191
Figure 5.15 ‐ Objectives space of tolerance synthesis for Case study 5.2. ............................ 194
Figure A.1 ‐ GD&T tolerance control frame ........................................................................... 217
Figure A.2 ‐ Traditional dimensional tolerancing (i). Traditional tolerancing involves datum
ambiguity in manufactured parts (ii). GD&T tolerancing including alphabetical
datums precedence specification (iii) eliminates ambiguity. ............................ 218
Figure A.3 ‐ Process output distributions with decreasing standard deviation. Cp increasing
from left to right with decreasing standard deviation. ...................................... 220
Figure A.4 ‐ Process output distributions of equal standard deviation and increasing
centring. Cp is equal for all distributions. Cpk increases from left to right with
increasing centring. ............................................................................................. 221
Figure A.5 ‐ Process distributions with non‐symmetric specification limits. Cpm increasing
from left to right: ................................................................................................ 221
LIST OF TABLES
Table 2.1 ‐ Proposed cost‐tolerance functions (Wu et al. 1988; Dong et al. 1994). ............... 24
Table 2.2 ‐ Comparison of various Tolerance Analysis methods ............................................. 30
Table 2.3 ‐ Tolerance chart for simple assembly example in Figure 2.5. ................................ 31
Table 2.4 ‐ Classification of optimization algorithms .............................................................. 47
Table 2.5 ‐ Comparison of commercial CAT tools. Limitations in current CAT tools are
identified in bold. .................................................................................................. 51
Table 3.1 ‐ Process capability data of measured component (Figure 3.4). Results based on
combined measurements across all locations and from all moulding cavities. ... 76
Table 3.2 ‐ Rail assembly designs considered in benchmarking analysis. ............................... 86
Table 3.3 ‐ Rail section parameter variation specified by industry partner and used in
statistical tolerance analysis ................................................................................. 91
Table 3.4 ‐ Ball dimensions used for contact force simulation in rail assembly A .................. 92
Table 3.5 ‐ Ball dimensions used for contact force simulation in rail assembly B ................... 94
Table 3.6 ‐ Performance ranking of conceptual rail assembly designs. .................................. 97
Table 3.7 ‐ Classification of four‐bar mechanisms. ................................................................ 101
Table 3.8 ‐ Model input parameters. ..................................................................................... 103
Table 3.9 ‐ Model output parameters, dimension and objective. ......................................... 103
Table 3.10 ‐ Benchmarking results against other competing products ................................. 104
Table 4.1 ‐ Process capability data of measured component in Test Case 4.1. .................... 126
Table 4.2 ‐ Spigot and spring assembly parameters and associated variation. .................... 126
Table 4.3 ‐ Initial and required nominal spigot wall dimensions based on simulated clearance
measurements. ................................................................................................... 131
Table 4.4 ‐ Case study 4.2 rotary switch assembly parameters and associated variation. ... 135
Table 5.1 ‐ Generalized polynomial chaos expansion (gPCE) basis and weighting functions for
various parameter distributions (Xiu et al. 2003; Eldred et al. 2008) ................ 160
Table 5.2 ‐ Minimum number of simulations N required for point collocation based PCE with various expansion orders k and dimensionalities d. Oversampling ratio s = 2 (as recommended in (Hosder et al. 2007)). .............................................. 162
Table 5.3 ‐ Monomials for a two‐dimensional complete product grid with excess monomials highlighted ............................................................................................ 167
Table 5.4 ‐ Number of points required for isotropic sparse grids and full product grids based on Gauss‐Hermite quadrature rules with growth rule m = 2^(l+1) − 1, for multiple dimensions and grid levels. Precision indicates the maximum polynomial degree which can be exactly represented by the associated quadrature. ......................................................................................................... 170
Table 5.5 ‐ Case study 5.1 objectives and constraints. .......................................................... 176
Table 5.6 ‐ Standard deviation in measured rail folds. Classification of the level of difficulty in
controlling associated variation for both the case study rail (Figure 5.4 (ii)) and
the measured rail (Figure 5.5 (iii)) ...................................................................... 177
Table 5.7 ‐ Combined averaged standard deviation associated with low and high difficulty
folds for measured rail. ....................................................................................... 177
Table 5.8 ‐ Rail assembly parameters and associated variation for initial design and selected
optimum. ............................................................................................................. 187
Table 5.9 ‐ Case study 5.2 objectives and constraints ........................................................... 189
Table 5.10 ‐ Case study 5.2 assembly parameters, associated variation and tolerance
synthesis outcomes ............................................................................................. 193
Table A.1 ‐ GD&T variation types and standardised symbols (after ANSI Y14.5). ................. 216
NOMENCLATURE
Term Definition
BR Bottom Roller
CAD Computer Aided Design
CAE Computer Aided Engineering
CAT Computer Aided Tolerancing
CFD Computational Fluid Dynamics
CMM Coordinate measurement machine
CPU Central Processing Unit
DOE Design of experiments
DoF Degree of freedom
FE Finite Element
FEA Finite Element Analysis
FEM Finite Element Modelling
GD&T Geometric Dimensioning and Tolerancing
gPCE generalized Polynomial Chaos Expansion
KPC Key Product Characteristic. A parameter of relevance to functionality.
LB Lower Ball
LHC Latin Hypercube
LMC Least Material Condition.
LR Left Roller
LSL Lower Specification Limit. The minimum limit of a parameter or KPC.
MC Monte Carlo
MDO Multi‐disciplinary Design Optimization
MMC Maximum Material Condition.
MOGA Multi‐Objective Genetic Algorithm
Parameter Any variable of a part or an assembly
PC Point Collocation
PCE Polynomial Chaos Expansion
PCI Process Capability Index.
PIDO Process Integration and Design Optimization
QLF Quality loss function
Quality The degree to which a manufactured assembly fulfils specified KPCs
RB Right Ball
RSM Response Surface Modelling
RSS Root Sum of Squares
SFE Stochastic Finite Element
SFEM Stochastic Finite Element Modelling/Model
SG Sparse Grid
Specification limits Acceptable limits of a parameter or KPC
TTRS Technologically and Topologically Related Surfaces
UB Upper Ball
UQ Uncertainty Quantification
USL Upper Specification Limit. The maximum limit of a parameter or KPC.
Yield Percentage of assemblies which conform to the KPC specification limits
LIST OF SYMBOLS
Parameter Description
σ Standard deviation
µ Mean
γ Skewness
β Kurtosis
Difference between estimates of the mean
Difference between estimates of the standard deviation
δ Influence of additional samples to the change in sample standard deviation
N Number of terms, samples, or integration points
Linear constants
d Dimension
Dimension index 1…d
Response function
X Part or assembly parameter
Sampling or integration point
The expected value or population mean
Probability density function
〈X^n〉 The nth statistical raw moment (i.e. moment about zero)
i Summation or product index
j Summation or product index
Polynomial of degree i
Polynomial basis coefficients
k Maximum order of the polynomial
The Hermite polynomial series
Collocation point
s Point collocation oversampling factor
Number of collocation points
The total number of terms in a complete product grid
The total number of terms in a sparse grid
Δ The sum of the squares of the difference between the expansion of order k and the response function
Precision level in each dimension 1…d
Interpolatory quadrature rule for variable 1…d
Quadrature rule weight
Level of unidimensional quadrature rule, with associated points, for variable 1…d
Sparse grid level
A sparse grid of given level (≥ 0) and dimension d
Vector of unidimensional quadrature rule levels in each dimension 1…d
Product level
⨂ Tensor product
m Growth rule associated with a quadrature rule
Indicator of the smoothness of an integrand
Cp Process capability index which measures the potential of a process to produce outputs within the specification limits
Cpk Process capability index which measures the ability of a process to produce an output that is centred and within the specification limits
Cpm Process capability index which measures the ability of a process to produce an output that is at an arbitrary target and within the specification limits
Process capability index measuring the performance of processes with non‐normal distributions
Target nominal value
The median of a distribution
Number of possible interactions in an assembly
Minimum threshold tolerance cost
Quality loss function weighting constant
Minimum threshold tolerance of cost‐tolerance curve
Cost‐tolerance curve fitting parameter derived from experimental data
Cost‐tolerance curve fitting parameter derived from experimental data
Minimum economically feasible tolerance
Maximum economically feasible tolerance
Tolerance value of cost‐tolerance curve
Tolerance cost for a specific component
Total tolerance cost of an assembly
Minimum bore wall thickness
Clearance between the internal diameter of the spring and the spigot
wall
Clearance between the outer diameter of the spring and the spigot
wall
Outside diameter of pocket
Inside diameter of pocket
Diameter of spring wire
Mean diameter of spring
H Spring height
P Spring pitch
E Young’s modulus of spring
Rswitch Switch radius
Rball Ball radius
α Angle of ramp face
θ Yaw angle of ramp face
F Spring preload
K Spring rate
µswitch Switch‐detent dynamic friction coefficient
µslider Slider‐detent dynamic friction coefficient
s Shortest link in a four‐bar linkage
l Longest link in a four‐bar linkage
p, q Intermediate length links in a four‐bar linkage
1 INTRODUCTION
1.1 Background and industry collaboration
This research program was conducted in collaboration with tier‐one automotive component
manufacturers, namely SMR Automotive Australia and Futuris Automotive Interiors, with
federal government support through the Australian Cooperative Research Centre for
Advanced Automotive Technology (AutoCRC). The research program was initiated to
investigate technological solutions for maximising the functionality of automotive seating
and automotive actuator assemblies. Extensive consultation with industry and a rigorous review of the existing literature identified deficiencies in the understanding and management of the effects of manufacturing variation on the functionality of complex mechanical assemblies.
manufacturing efficiency, product reliability and quality. Specific opportunities for novel
research were subsequently identified and addressed in this dissertation.
1.2 Introduction and motivation
In the process of engineering design, the cost of implementing changes to design decisions
typically increases non‐linearly as the design progresses. This non‐linear cost increase is
attributed to progressively accumulating commitments to high‐capital resources, such as
detailed design efforts, production plans, tooling, prototyping and testing (Pahl et al. 2007).
Consequently, a high proportion of the cost of engineering and delivering a product can be
attributed to the decisions made during the early stages of design. In response to these
observed relationships, engineering design philosophy advises that available design and
analysis resources should be focused towards the early stages of the project; this maximises
knowledge of the design problem, and the expected performance of any design
embodiments, while the flexibility to enact changes is high, and accrued costs are low (Pugh
1991; Baumgartner 1995).
Design analysis and refinement techniques provide an efficient means of increasing
knowledge of the design problem, and expected performance, early in the project timeline.
Such design analysis and refinement techniques include: sensitivity analysis; Design Of
Experiments (DOE) studies; tolerance/robustness analysis; and, optimization methods.
These design analysis and refinement techniques allow for: effective selection and
refinement of design embodiments; evaluation against technical and economic criteria;
identification of errors; and, evaluation of cost effectiveness. This capability allows the
designer to make informed decisions regarding design improvement while there is sufficient
design flexibility to act without undue expense.
Despite the high benefit of analysis and refinement early in the design process, it may be difficult to achieve in practice, as many influential technical and economic properties remain
undefined or uncertain. This design uncertainty results in a broad design space in which
feasible regions and optimum performance can be difficult to identify efficiently without
imposing significant design analysis expense. Research efforts into effective ways of
addressing design uncertainty are ongoing (Smith et al. 1997; Thompson et al. 1999;
Krishnan et al. 2001; Tomiyama et al. 2009; Ebro et al. 2012).
Stochastic manufacturing variation is a particularly significant source of uncertainty as it has
a high influence on the ability of manufactured products to achieve specified performance
requirements. Stochastic manufacturing variation effects in mechanical assemblies are
managed by the specification of tolerances. Tolerances denote the permissible variation of
parameters such that required assembly functionality is achieved. Identification of the
effects of parameter variation on assembly functionality, and the specification of optimal
tolerances, are referred to as tolerance analysis and tolerance synthesis, respectively. These
are challenging problems involving the competing objectives of manufacturing cost and
product quality, as well as constraints imposed by product and manufacturing process
requirements (Shah et al. 2007).
Addressing the effects of manufacturing variation in early design can reduce the costs of
managing poor quality later in the manufacturing stage when the ability to enact change is
limited (Bergman et al. 2009; Ebro et al. 2012). In particular, design analysis and refinement
based on tolerance analysis of concept design embodiments can provide insight into the
sensitivity of alternative concepts to manufacturing variation, and facilitate concept
selection with quantitative measures of robustness. This early increase of design knowledge
allows the designer to make informed decisions addressing the effects of manufacturing
variation, while there is sufficient design flexibility to act without much expense. Estimating
the effects of variation on assembly functionality is typically achieved with tolerance
analysis based on a computational assembly tolerance model (Chase 1988). However,
traditional statistical tolerance analysis methods typically require a large number of
evaluations of the assembly tolerance model which imposes high computational costs.
These computational costs are difficult to accommodate early in design where analysis
budgets for individual concept designs are limited (Chase 1988; Soderberg et al. 1999; Pahl
et al. 2007; Shah et al. 2007; Singh et al. 2009).
Tolerance analysis requires the specification of assembly parameters which indicate
whether a given manufactured assembly will conform to the intended functional
requirements of the design. The parameters of particular relevance to functionality are
referred to as Key Product Characteristics (KPCs), and are typically geometric, such as
clearances or nominal dimensions. However, the complexity which arises in product
assemblies with many interacting parts and features can make the identification of assembly
KPCs a challenging task for the designer due to difficulty in visualizing and understanding
variation effects in complex assemblies. Currently there is a lack of an accessible and
efficient approach which can aid in the identification of KPCs within the native CAD
environment, without imposing significant additional modelling and expertise demands
(Thornton 1999; Dahl et al. 2001; Zhou et al. 2008).
The functionality of mechanical assemblies is often defined in terms of the minimum or
maximum allowable proximity between geometric features. However in many mechanical
assemblies, functionality is also dependent on loading, including: external or internal forces,
temperature changes or electromagnetic interaction. These loads influence assembly
functionality through effects such as compliance, dynamics and mechanical wear (Shigley et
al. 2004; Lovasz 2012). These effects are particularly relevant in, for example: mechanical
actuators, automotive seat positioning mechanisms, sheet metal assemblies (such as
automotive or aerospace body panels), bolted connections subject to fastener and hole
alignment tolerances, and assemblies subject to welding induced thermo‐mechanical
distortion. The ability to accommodate the effects of loading in tolerance analysis allows for
a more realistic prediction of assembly behaviour in the presence of manufacturing
variation. Analytical and numerical methods have been proposed for addressing tolerance
analysis and synthesis problems in complex mechanical assemblies. However, current
approaches are limited in their ability to comprehensively accommodate tolerance analysis
problems in which assembly functionality is dependent on the effects of loading. Limitations
include (Liu et al. 1997; Merkley 1998; Shiu et al. 2003; Lee et al. 2009; Pierre et al. 2009):
Reliance on specific, custom simulation codes with limited implementation in practical
and accessible tools, as well as the need for significant additional expertise in
formulating specific assembly tolerance models and interpreting results.
Accommodation of only specific loading scenarios (such as sheet metal compliance or
welding‐distortion).
Additionally, Computer Aided Tolerancing (CAT) software tools have been developed which
offer practical tolerance analysis and synthesis capabilities. However, current commercial
CAT tools generally lack the ability to accommodate tolerance analysis of assemblies whose
functionality is dependent on various loading effects (Salomons et al. 1998; Prisco et al.
2002; Chiesi et al. 2003; Zhengshu 2003; Shen 2005; Shah et al. 2007). As such, there is
currently a lack of an accessible approach to tolerance analysis of assemblies subject to
loading, which integrates into established CAD/E design frameworks.
Manual iteration of tolerance analysis can be ineffective at identifying tolerances which
achieve optimum manufacturing cost and yield targets. Tolerance synthesis improves the
ability to identify optimal tolerances by utilising optimization algorithms. However, the
ability of available CAT tools to effectively address sophisticated optimization problems
(such as tolerance synthesis in complex assemblies) with many competing objectives and
constraints is relatively limited compared to dedicated optimization tools. Furthermore,
tolerance synthesis requires that tolerance analysis be iterated, thereby compounding
computational costs, especially when numerical modelling of the effects of loading on
mechanical assemblies is required. The cost of such tolerance synthesis is often seen as computationally impractical and as not warranted by the associated benefits (Hong et al. 2002). A
major reason for the high computational cost is associated with the estimation of yield in
tolerance analysis based on Uncertainty Quantification (UQ) methods. The traditional
approach to UQ in statistical tolerance analysis is reliant on sampling based UQ methods
such as Monte Carlo (MC) sampling. MC sampling is typically applied due to its inherent
robustness and broad applicability. However, MC sampling has poor efficiency and requires a
large number of model evaluations for accurate results, which can impose high
computational costs with demanding models (Nigam et al. 1995).
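By way of illustration, the following minimal Python sketch estimates the yield of a hypothetical two‐part clearance by MC sampling; the dimensions, tolerances and conformance threshold are assumed for illustration only. The scatter of repeated estimates decays only as 1/√N, so each additional digit of accuracy requires roughly a hundredfold increase in (potentially expensive) model evaluations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-part clearance: KPC = A - B, with A ~ N(10.0, 0.05^2) mm
# and B ~ N(9.8, 0.05^2) mm; the assembly conforms if the clearance > 0.05 mm.
def mc_yield(n):
    a = rng.normal(10.0, 0.05, n)
    b = rng.normal(9.8, 0.05, n)
    return np.mean((a - b) > 0.05)

# Standard error of the yield estimate decays as 1/sqrt(n): quadrupling the
# number of model evaluations only halves the scatter of the estimate.
for n in (1_000, 10_000, 100_000):
    estimates = [mc_yield(n) for _ in range(20)]
    print(n, round(float(np.mean(estimates)), 4), round(float(np.std(estimates)), 5))
```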
The observations summarised above identify a number of active areas of research, and limitations in existing knowledge, which can be categorically grouped according to three
research themes. A summary of the identified limitations associated with each theme is
presented below:
Theme 1: Design analysis and refinement accommodating uncertainty in early design stages
Identifying Key Product Characteristics (KPCs) is challenging due to difficulty in
visualizing and understanding variation effects in complex assemblies without imposing
significant additional modelling and expertise demands.
Traditional statistical tolerance analysis imposes high computational costs, which are
difficult to accommodate early in design where analysis budgets are limited for
individual concept designs.
Early design stages are often associated with a vast design space in which feasible
regions and optimum performance are difficult to identify.
Theme 2: Tolerancing of assemblies subject to loading
There is a lack of an accessible approach for tolerance analysis of assemblies subject to
general loading effects, which integrates into an established CAD/E design frameworks.
Current Computer Aided Tolerancing (CAT) systems are limited in their ability to
accommodate a general class of tolerance analysis and synthesis problems requiring
simulation of the effects of loading on mechanical assemblies.
Compared to dedicated optimization software, CAT tools have a narrower ability to
address sophisticated tolerance synthesis optimization problems.
Theme 3: Efficient uncertainty quantification in tolerance analysis and synthesis
The cost of tolerance analysis, and in particular tolerance synthesis, involving demanding
assembly models (particularly assemblies under loading) can often be computationally
impractical. The high computational cost is mainly associated with traditional statistical
tolerancing Uncertainty Quantification (UQ) methods reliant on low‐efficiency Monte
Carlo (MC) sampling.
The identified limitations and gaps in domain knowledge provide the motivation for the
research undertaken in this dissertation. The associated research objectives and scope are
presented in the following section.
1.3 Research scope and objectives
The motivation discussed in the previous section identified a number of active areas of
research and limitations in domain knowledge associated with three research themes.
Specific opportunities for addressing the identified limitations define the research scope and
objectives of this dissertation and are outlined below under each associated research
theme. Additionally, a thesis map is provided in Figure 1.1 which identifies the research
scope and objectives, as well as connections between associated research topics.
Theme 1: Design analysis and refinement accommodating uncertainty in early design
To accommodate limited design analysis budgets, design analysis and refinement
techniques need to offer rapid implementation, low analysis cost, as well as reliable
outcomes. Furthermore, to accommodate multidisciplinarity in design modelling,
integration with disparate CAD/E tools is required for effective design analysis and
refinement; this is an active research focus of Multi‐disciplinary Design Optimization (MDO)
engineering methods (Sobieszczanski‐Sobieski et al. 1997). MDO has resulted in the
emergence of a range of Process Integration and Design Optimization (PIDO) software tools
capable of facilitating interdisciplinary CAD/E tool integration to enable: automated
parametric studies, DOE, statistical analysis, and, multi‐objective optimization (Kodiyalam
1998; Malone et al. 1999; Padula et al. 1999; Simpson et al. 2008; Flager et al. 2009; Adams
2011).
The emerging design analysis and refinement capabilities of PIDO tools offer novel
opportunities for addressing the identified limitations associated with design analysis and
refinement accommodating uncertainty in early design stages (Section 1.2).
This research aims to develop novel PIDO tool based methods for managing parameter
uncertainty in early stages of design, based on design analysis and refinement techniques
such as: sensitivity analysis, tolerance/robustness analysis, DOE and optimization methods.
The associated research objectives include:
Development of a visualization method for aiding designers in identifying assembly KPCs
within native CAD models. The CAD/E tool integration capabilities of PIDO tools offer an
opportunity to enable visualization of assembly behaviour under expected
manufacturing variation within native CAD models, which are often readily available at
the concept embodiment design stage. This capability may thereby aid in the
identification of KPCs with low additional modelling effort requirements.
Development of methods for computationally efficient manufacturing sensitivity analysis
in early design stages. To reduce modelling expense, the method should enable the
reuse of CAD/E models created as part of the standard design process.
Development of a method for rapidly identifying optimal regions in the early design space through PIDO based DOE analysis and optimization, CAE tool integration, and efficient reuse of design models.
The expected outcomes of these objectives are contributions which can be directly
applied to improve the design of mechanical assemblies involving uncertainty or variation in
design parameters, in the early stages of design.
Theme 2: Tolerancing of assemblies subject to loading
The ability of PIDO tools to facilitate CAD/E tool integration, automated parametric studies,
DOE and statistical analysis, allows novel opportunities for resolving tolerance analysis
problems requiring numerical modelling of the effects of loading on mechanical assemblies.
The research objective associated with this theme is the development of a computationally
efficient, PIDO based approach for tolerance analysis of assemblies subject to loading,
within the modelling environment of existing standalone CAD/E tools. Specific objectives
include:
Use of CAD/E models created as part of the standard design process for parametric
CAD/E based tolerance analysis to reduce the need for additional modelling tools and
expertise.
Modelling of the effects of a broad range of loading scenarios on mechanical assemblies
with the use of dedicated tools such as FE modellers.
Validation with practical, industry relevant tolerance analysis case studies.
The expected outcome is an approach which extends the capabilities of traditional CAT tools
by enabling tolerance analysis of assemblies subject to a general range of loading effects.
Theme 3: Efficient uncertainty quantification in tolerance analysis and synthesis
A variety of alternative analytical UQ techniques have recently been proposed which, under
certain conditions, can offer significantly higher efficiency than methods such as MC
sampling (Lee et al. 2009). These analytical techniques have the potential to significantly
improve the practical feasibility of tolerance analysis and synthesis, particularly involving
assemblies under loading. A particularly efficient and attractive analytical UQ method is
Polynomial Chaos Expansion (PCE), however, its applicability and effectiveness in tolerance
analysis and synthesis is not fully known (Xiu et al. 2003).
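As a minimal, self‐contained illustration of the PCE principle (using an assumed one‐dimensional response function rather than an assembly model from this work), the following Python sketch computes the expansion coefficients of a function of a standard‐normal variable by Gauss–Hermite quadrature and recovers the response mean and variance directly from the coefficients:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

# Stand-in response function of one standard-normal input (an assumed,
# inexpensive surrogate for what would normally be a costly assembly model).
def f(x):
    return np.exp(0.3 * x) + 0.1 * x**2

order = 6                                    # truncation order of the expansion
nodes, weights = He.hermegauss(order + 1)    # probabilists' Gauss-Hermite rule
weights = weights / sqrt(2 * pi)             # normalise to the N(0,1) density

# Spectral coefficients c_k = E[f(X) He_k(X)] / E[He_k(X)^2], with E[He_k^2] = k!
coeffs = [np.sum(weights * f(nodes) * He.hermeval(nodes, [0] * k + [1])) / factorial(k)
          for k in range(order + 1)]

# Statistical moments follow directly from the coefficients - no sampling needed.
mean = coeffs[0]
variance = sum(c**2 * factorial(k) for k, c in enumerate(coeffs) if k > 0)
print(mean, variance)

# Crude Monte Carlo check of the same moments for comparison.
x = np.random.default_rng(1).normal(size=200_000)
print(f(x).mean(), f(x).var())
```

In contrast to MC sampling, the moments here are obtained from a handful of deterministic model evaluations; Chapter 5 examines the conditions under which this efficiency carries over to tolerance analysis and synthesis.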
The objectives associated with this research theme are:
Assessment of whether alternative UQ methods such as PCE are appropriate for the
specific requirements identified in this work, in association with tolerance analysis of
assemblies subject to the effects of loading. This requires an analysis of the PCE method
identifying working principles, implementation requirements, advantages and
limitations.
Based on the outcome of the assessment, establishment of recommendations for the appropriate implementation of PCE in tolerance analysis.
Potential implementation of PCE into a PIDO based tolerance analysis and synthesis
framework.
Integration of the advanced optimization capabilities of PIDO tools into a tolerance
synthesis framework.
Evaluation of PCE on practical, industry relevant tolerance analysis and synthesis case
studies, and validation against reference results obtained using traditional methods.
In summary, this research is focused on enhancing the engineering design of mechanical
assemblies involving uncertainty or variation in design parameters, with emerging design
analysis and refinement capabilities of PIDO tools. The main research objective is the
development of a novel, computationally efficient, PIDO based approach for tolerance
analysis and synthesis of assemblies subject to loading, within the modelling environment of
existing standalone CAD/E tools. The projected outcome is the successful application of the
research outcomes to industry relevant design problems involving complex mechanical
assemblies subject to the effects of manufacturing variation.
Figure 1.1 – Thesis map
1.4 Research questions
This dissertation is guided by the following key research questions:
1. How can PIDO tools address design problems that involve uncertainty or variation in
design parameters?
2. Can the capabilities of PIDO tools be utilised to conduct effective tolerance analysis of
assemblies under loading within the modelling environment of existing CAD and CAE
software?
3. How can the high computational cost of statistical tolerance analysis and synthesis be
reduced, particularly in assemblies under loading?
4. Can analytical UQ methods such as Polynomial Chaos Expansion (PCE) be effectively
applied to tolerance analysis and synthesis to improve efficiency without compromising
accuracy?
The knowledge base developed by addressing the outlined research questions will establish
the feasibility of effective practical tolerance analysis and synthesis of assemblies subject to
loading within a PIDO framework.
1.5 Methodology
The formulated research questions provide a roadmap for successfully addressing the
objectives of this research.
This is assisted by the following methodology which identifies milestones required for
achieving the research objectives:
1. Review of the existing body of knowledge concerning uncertainty in the design of mechanical assemblies, including methods and tools addressing tolerance analysis and synthesis.
2. Investigation of the potential of design analysis and refinement capabilities of PIDO tools
to enhance the early stages of design problems involving uncertainty or variation in
design parameters.
3. Development of a PIDO based tolerance analysis approach for the general tolerance
analysis of assemblies subject to loads within the modelling environment of existing
standalone CAD/E tools.
4. Extension of the PIDO based tolerance analysis approach to allow tolerance synthesis
guided by multi‐objective optimization.
5. Evaluation of the hypothesis that high‐efficiency analytical UQ methods can be utilized
in tolerance analysis.
6. Integration of analytical UQ methods with PIDO based tolerance analysis and synthesis,
pending the successful outcomes of preceding research methodology objectives.
7. Evaluation of the effectiveness of each stage of the methodology with practical industry
based design problems.
Additionally, a summary overview of the thesis themes, objectives, contributions, and topic
interconnections is presented in the thesis map shown in Figure 1.1. The thesis map outlines
chapter topics and general methodology of this dissertation.
This dissertation is organized such that the methodology milestones are addressed
sequentially. Validating case studies are presented immediately following the discussion of
any developed technique or method in order to demonstrate and highlight any particular
associated benefits.
1.6 Key outcomes and contributions
This research program has resulted in the contributions categorically listed below in
summary form according to the relevant research theme. Significant additional detail of key
outcomes and contributions is presented in Chapter 6.
Theme 1: Design analysis and refinement accommodating uncertainty in early design stages
A Process Integration and Design Optimization (PIDO) based visualization method to aid
designers in identifying assembly KPCs at the concept embodiment design stage (Section
3.3) (Mazur et al. 2010). The method integrates the functionality of commercial CAD
software with the process integration, UQ, data logging and statistical analysis
capabilities of PIDO tools, to simulate manufacturing variation effects on the part
parameters of an assembly and visualise assembly clearances, contacts or interferences.
The proposed method has been validated using an industrial case study by enabling the
automated identification of unintended component interactions, in the concept design
embodiment of an automotive mirror actuator assembly.
Computationally efficient manufacturing sensitivity analysis for assemblies with linear‐
compliant elements (Section 3.4) (Mazur et al. 2011). An efficient method for analysing
the effects of manufacturing variation in linear‐compliant assemblies under loading was
developed. The method significantly reduces computational costs by utilising linear‐
compliant assembly stiffness measures, reuse of CAD models created in the conceptual
and design embodiment stage, and PIDO tool based statistical tolerance analysis. This
method was developed as part of a benchmarking study of alternative automotive seat
rail assembly concept embodiments to quantify their sensitivity to manufacturing
variation. The benchmarking study identified significant differences in sensitivity to
manufacturing variation between alternative designs. This outcome allowed the
designers to proceed into the detail design stage with higher certainty of performance
and with low additional analysis expense.
Refinement of concept design embodiments through PIDO based DOE analysis and
optimization (Section 3.5) (Leary, Mazur et al. 2010; Leary, Mazur et al. 2011). This
contribution highlights the benefit of exploring the conceptual and embodiment design
space through DOE analysis and optimization. This contribution was validated with a
case study addressing the conceptual design of automotive seat kinematics consisting of
a four‐bar linkage system, which, despite its apparent simplicity, is associated with a
large design space. An identified Pareto‐optimal concept was selected for detail design
and manufacture. The selected design was found to offer the best performance in
achieving a vertical seat travel objective with the least number of manual actuations
(these are actuations required to lift the seat for a given fixed lift effort). This superior
performance against competitors in seat actuation demands was a determining factor
for the selection of the design in the seat assembly of the Tesla Motors Model S full‐
sized electric sedan currently on sale in the United States.
Theme 2: Tolerancing of assemblies subject to loading
A novel tolerance analysis platform which integrates CAD/E and statistical analysis tools
using PIDO tool capabilities to facilitate tolerance analysis of assemblies subject to loading
(Chapter 4) (Mazur et al. 2011). Integration was achieved by developing script based links
between standalone CAD/E software, through commonly embedded scripting capabilities,
and the process integration facilities of PIDO tools. The platform addresses the limitations of
CAT tools and offers an accessible tolerance analysis approach with low implementation
demands due to integration with the established CAD/E modelling design framework.
The capabilities of the platform were validated with two industry relevant tolerance analysis
case studies involving assemblies subject to loading. These include: an automotive actuator
assembly consisting of a rigid spigot and compliant spring undergoing compression due to
external loading; and an automotive rotary switch in which a resistive actuation torque is
provided by a spring loaded radial detent acting on the perimeter of the switch body. In
both case studies the developed platform was successfully applied to identify the tolerances
required to achieve the required assembly yield.
Theme 3: Efficient uncertainty quantification in tolerance analysis and synthesis
A novel PIDO based tolerance synthesis platform for assemblies subject to loading, using Polynomial Chaos Expansion (PCE) (Chapter 5). Chapter 5 established that PCE based UQ is feasible in tolerance
analysis and can enable significant reduction in computational costs. The resulting
computational efficiency enabled the PIDO based tolerance analysis platform developed in
Chapter 4 to be further extended to allow multi‐objective, tolerance synthesis in assemblies
subject to loading. The resultant PIDO based tolerance synthesis platform integrates: highly
efficient sparse grid based PCE UQ; parametric CAD and FE models accommodating the
effects of loading; cost‐tolerance modelling; yield quantification with Process Capability
Indices (PCI); and, optimization of tolerance cost and yield with multi‐objective Genetic
Algorithm (GA). The tolerance synthesis platform can be applied to tolerance analysis and
synthesis with significantly reduced computation time while maintaining accuracy.
The effectiveness of PCE in tolerance analysis and synthesis was validated using two case
studies, these include: an automotive seat rail assembly subject to compliance due to
internal loading; and an automotive switch assembly subject to loading from a spring‐loaded
detent feature. In both case studies optimal tolerances were identified which satisfied yield
and tolerance cost objectives. The implementation of PCE in a PIDO based tolerance
synthesis platform resulted in large computational cost reductions without compromising
the accuracy achieved with traditional MC methods.
1.7 Thesis Outline
This dissertation is organised into 6 main Chapters. Chapter 1 introduces the research
project and summarises the objectives, methodology, and key outcomes. Chapter 2
presents a comprehensive review of literature and technology relevant to this research.
Specific opportunities for novel research outcomes are identified.
Chapter 3 identifies opportunities to enhance the conceptual and embodiment stages of
design involving uncertainty or variation in design parameters. This is achieved by
developing novel methods for the use of PIDO tools in the analysis and refinement of
concept design embodiments with sensitivity analysis, tolerance analysis, DOE methods and
optimization. Practical conceptual and embodiment design problems are considered and
effective solutions developed for a number of industry focused scenarios.
Chapter 4 addresses the limitation of CAT tools to accommodate assemblies subject to
loading by developing a novel tolerance analysis platform which integrates CAD, CAE and
statistical analysis tools using PIDO software capabilities. To demonstrate the capabilities of
the developed platform, examples of practical, industry related tolerance analysis problems
involving compliance and multi‐body dynamics are presented.
Chapter 5 investigates in detail the integration of highly efficient Polynomial Chaos
Expansion (PCE) in tolerance analysis for uncertainty quantification. The PIDO tool based
tolerance analysis platform developed in Chapter 4 is subsequently extended to allow multi‐
objective, tolerance synthesis in assemblies subject to loading, with significantly reduced
computational cost. Industry based case studies are presented to demonstrate that the
application of PCE based UQ to tolerance analysis and synthesis can significantly reduce
computation time while maintaining accuracy.
The final chapter summarizes the outcomes and conclusions of this dissertation and offers
recommendations for potential areas warranting further research and development. An
appendix containing additional detail on background concepts and methods associated with
tolerancing practice concludes the thesis.
2 LITERATURE REVIEW
2.1 Chapter summary
This chapter presents a review of literature and technology relevant to the research
outcomes of this work. Specific opportunities for novel research contributions are identified,
and further developed in subsequent chapters.
The scope of the literature review is focused on work relating to uncertainty or variation in
design parameters, in particular the analysis and synthesis of tolerances in engineering
design. The topics considered in this chapter initially address background concepts and
methods associated with tolerancing practice as these are extensively utilized in this
research. Once the background context is established, the literature review progressively
focuses on more active research areas where limitations and gaps in domain knowledge are
identified.
Outcomes of the literature review identifying gaps in domain knowledge and limitations in existing methods are presented in Section 2.10. A number of associated research
opportunities are discussed.
Figure 2.1 – Literature review topic outline
Figure 2.1 categorically outlines the topics addressed in this chapter. The identified
fundamental areas of knowledge relevant to this research scope are summarised below,
where the relevance of each topic to the research scope is established before subsequent
discussion further in the body of this chapter.
Stochastic manufacturing systems (Section 2.2):
Management of the effects of manufacturing variation is inherently associated with the
concept of uncertainty. Section 2.2.1 considers the nature of uncertainty and the associated
aspects of relevance to this research. Uncertainty is linked to the concept of quality of
manufactured goods, as notions of high quality are typically synonymous with high certainty
in achieving the specified performance requirements. Section 2.2.2 formally defines the
concept of quality as used in this dissertation. Subsequently in Section 2.2.3, methods for
formally measuring costs associated with achieving a specified level of product quality are
discussed, as they are used in this work as metrics for tolerance analysis and synthesis.
Tolerance analysis (Section 2.3):
Tolerance analysis is the study of the effect of variation of a parameter on the variability in
the functionality of a product part or assembly. The parameters of interest in this work are
typically dimensional and geometric, and are defined according to conventions of
tolerancing schemes. Both Geometric Dimensioning and Tolerancing (GD&T) and
Dimensional tolerancing schemes are applied in this research. These are discussed in Section
2.3.2 and Appendix A.
In this research, tolerance analysis is conducted according to statistical tolerancing
principles (Section 2.3.1) which require consideration of the probabilistic likelihood that a
parameter subject to stochastic manufacturing variation will take on a given value. This
likelihood is characterized by the parameter’s probability distribution. Probability
distributions associated with manufacturing variation relevant to this research are discussed
in Section 2.3.3. Quantifying the distribution of a product parameter in terms of the
required specification limits is achieved with Process Capability Indices (PCI). PCIs of
relevance to this work are defined in Section 2.3.4 and Appendix B and are adopted in this
research to provide a measure of manufacturing yield (the proportion of manufactured products which meet specification requirements).
Tolerance modelling (Section 2.4):
Tolerance modelling aims to identify the relationship between the tolerance information
associated with part features and assembly functionality. Various approaches to tolerance
modelling have been developed which offer different levels of sophistication and capability.
These methods are discussed and evaluated in Section 2.4 against the research objectives of
this work, namely: suitability for CAD/E integration; and accommodation of assemblies
which are subject to loading.
Uncertainty quantification (Section 2.5):
Statistical tolerance analysis requires quantification of the expected manufacturing yield.
This can be estimated with Uncertainty Quantification (UQ) methods which characterize the
probabilistic response of a system dependent on stochastic variables. UQ methods are
discussed in Section 2.5, where limitations of UQ methods traditionally used in statistical
tolerancing are highlighted, and recently developed alternative UQ methods with potentially
superior performance are identified. The suitability of the alternative UQ methods for
tolerance analysis is considered against the requirements imposed by the research
objectives of this work, such as: applicability to integration with existing CAD/E and PIDO
tools; high efficiency and accuracy; flexibility in accommodating various input parameter
distributions; and ability to accommodate high dimensionality problems efficiently.
Tolerance synthesis (Section 2.6):
Tolerance synthesis is the process of optimally allocating part tolerances in a product
assembly to maximize assembly yield (or product quality) and minimize tolerance cost. A
tolerance synthesis problem requires the integration of a number of analysis and simulation
techniques, which are addressed in the identified sections. These include: optimization
algorithms (Section 2.6.1); tolerance analysis (Section 2.3); associated UQ method (Section
2.5); and, yield and cost‐tolerance estimation approaches (Section 2.2.3.1). A number of
tolerance synthesis methods have been proposed with a range of different analysis
techniques; these are reviewed in Section 2.9. Limitations in existing methods are identified.
Optimization algorithms are compared to identify suitability for tolerance synthesis of
assemblies subject to loading, as per the research objectives of this work outlined in Section
1.3.
Computer Aided Tolerancing (CAT) tools (Section 2.7):
Computer Aided Tolerancing (CAT) software tools offer tolerance analysis and synthesis
capabilities either as independent software packages, or through integration with
commercial CAD systems. A review of existing CAT tool capabilities is presented in Section
2.7 and a number of limitations of relevance to this research are identified. The identified
limitations are a key motivation for this research, as discussed in Sections 1.3 and 2.10.
Process Integration and Design Optimization (PIDO) (Section 2.8):
Process Integration and Design Optimization (PIDO) tools are software frameworks for
facilitating the integration of diverse, discipline specific CAE analysis tools for process
scheduling, design of experiments, optimization and statistical analysis. A review of PIDO
tools is presented in Section 2.8. Novel PIDO tool based design analysis and refinement
opportunities are identified for enhancing the engineering design of mechanical assemblies
involving uncertainty, or variation in design parameters. These opportunities are exploited
in subsequent chapters of this research.
Tolerance analysis and synthesis of assemblies subject to loads (Section 2.9):
Accommodating the effects of loads in tolerance analysis and synthesis is an active research
field. A number of methods have been proposed which differ in: the addressed loading
effect; the modelling approach; computational expense; and any associated simplifying
approximations. Previous contributions to this field are discussed in Section 2.9. Limitations
in existing methods are identified and research opportunities formulated.
2.2 Stochastic manufacturing systems
Manufacturing processes are intrinsically subject to stochastic variation. Variation can occur
within geometric parameters such as dimensions of a part or assembly feature, within
material properties such as yield or fatigue strength, or within other product characteristics.
These variations result in uncertainty in the performance of manufactured goods and
require effective management to ensure correct performance. Strategies for management
of manufacturing variation have an extensive development history. Prior to mass
production, manufacturing was mainly carried out by craftsmen, where the management of
variation of the entire product was controlled by the workmanship of the individual (Bijker
1997). Advancements in mass production, such as increased production volumes and
efficiency, as well as increased process complexity, were achieved by skill specialisation and
automation of labour. Consequently, management of variation was no longer determined
by an individual, but was dependent on the workmanship of multiple individuals and the
precision of varied machinery. A need arose for a systematic approach to the management
of variation in mass manufactured goods, in particular standardised methods for
dimensional control and tolerancing (Mitra 1998). Consequently, an extensive body of concepts, standards and methods has been developed for effectively managing
manufacturing variation. Topics relevant to this dissertation (as outlined in Section 2.1 and
Figure 2.1) are discussed in subsequent sections.
2.2.1 Uncertainty
Manufacturing variation is inherently associated with the concept of uncertainty.
Uncertainties can be classified as either systematic or stochastic (Kiureghian et al. 2009;
Terejanu et al. 2010).
Systematic uncertainties: Result from insufficient data or knowledge about a physical system which can be known in principle, but is unknown in practice. Also referred to as epistemic uncertainties.
Stochastic uncertainties: Result from inherent randomness in the behaviour of a physical system. Also referred to as aleatory uncertainties.
Systematic uncertainties may exist due to factors such as insufficient measurement
accuracy, human errors, or modelling simplifications. They can be considered as
uncertainties associated with the assessment of a system. These uncertainties can be
reduced by collecting additional data, increased measurement precision, or increased
understanding of the problem during modelling. Conversely, stochastic uncertainties are
nondeterministic and not reduced by the possession of more data or knowledge about the
system.
Accommodating uncertainties is the focus of reliability‐based design and robust design
approaches (Taguchi 1993; Bergman et al. 2009; Stapelberg 2009; Pascoe 2011). In general,
a robust and reliable system is considered resilient in response to uncertainty. However
there is no universal agreement on a distinct definition of the two approaches in the
literature, both in terms of meaning and quantifiable criteria. General definitions proposed
and adopted in this work are:
Robustness: The ability of a system to satisfy functional requirements in the presence of
varying parameters, inputs or environmental conditions. The degree of
robustness can be measured by the range of variation under which the
system satisfies functional requirements.
Reliability: The ability of a system to perform its requested functions under stated
conditions whenever required. The degree of reliability can be measured by
the probability of the system failing to perform its function.
Tolerance analysis (introduced in Section 2.3), which is the focus of this work, can be classified as a robust design problem due to the associated intrinsic objective of meeting
product functional requirements in the presence of stochastic manufacturing variation.
Tolerance analysis involves both systematic and stochastic uncertainties. Uncertainty in a
particular part parameter, such as the stiffness of a coil spring for instance, may be either
systematic or stochastic depending on the circumstance. For example, if the desired
stiffness value is for a specific spring, then the uncertainty in its stiffness will be systematic if
the specific spring is tested to quantify its stiffness. Uncertainty in the stiffness value will be
subject to the limitation of the measurement equipment and procedures used for
assessment, which are reducible with more rigorous testing. However, if the desired
stiffness value is for a batch of springs, uncertainty in the spring parameters will be subject
to inherent stochastic variation associated with the manufacturing process and cannot be
suppressed by more accurate measurements of the springs. Estimating the influence of
uncertainty in tolerance analysis is the objective of Uncertainty Quantification (UQ)
methods; these form a critical element of this dissertation and are discussed further in
Section 2.5.
Uncertainty is linked to the concept of quality of manufactured goods, as notions of high
quality are typically synonymous with high certainty in target performance.
2.2.2 Quality
The term quality is often associated with the effect of manufacturing variation on the
performance of manufactured goods. A number of definitions of quality have been
presented in the literature:
"The ability of the product to fulfil its intended requirements for technical performance,
customer satisfaction, and manufacturing efficiency." (Juran 1992)
"Uniformity around a target value." (Taguchi 1993)
"The loss a product imposes on society after it is shipped." (Ealey 1988)
“Freedom from deficiencies.” (Juran 1992)
"Degree to which a set of inherent characteristics fulfils requirements" where
requirement are subsequently defined as “needs or expectations” (ISO 2005)
"The totality of features and characteristics of a product or service that bears its ability
to satisfy stated or implied needs." (ISO 8402‐1986)
The varied definitions are all founded on the same fundamental concepts of quality being a
measure of the performance of a product in being free from defects, deficiencies, and
significant performance variations. The definition of quality adopted in the specific context
of this dissertation is:
Quality: The degree to which a manufactured product achieves target values for
parameters of particular importance to the functionality and performance of
the product. These parameters of particular importance are referred to as Key
Product Characteristics (KPCs).
This concept of quality will be adopted in this dissertation to establish metrics for measuring
the expected performance of mechanical assemblies subject to manufacturing variation;
these are discussed in the following sections.
2.2.3 Quality loss and cost‐tolerance relationships
The costs associated with the quality of a product can be broadly attributed to quality
control (i.e. achieving defined quality targets) and quality loss (i.e. a failure to control
quality) (Phadke 1989; Taguchi 1989; Feigenbaum 2012). The costs associated with quality
control can be attributed to efforts directed at the detection of defects (appraisal costs) and
efforts directed at the prevention of defects (prevention costs). The costs associated with
quality loss can be attributed to: internal failure costs (the scrap or repair costs associated
with defects identified by the manufacturer) and external failure costs (the cost of an
unidentified product defect reaching the customer). Figure 2.2 classifies the quality control
and quality loss costs according to their associated sources (Feigenbaum 2012).
Figure 2.2 ‐ Classification of costs associated with poor quality (Feigenbaum 2012)
The costs associated with quality control and quality loss compete with each other. A
product with a high level of defects, deficiencies, and significant performance variations will
typically have a low cost of quality control and a high cost of quality loss due to broad
variability in key product characteristics. Conversely, a product with a low level of defects
will incur a high cost of quality control yet a low quality loss as higher precision typically
requires more extensive manufacturing efforts, which in turn, translates into higher
production costs. Achieving an efficient balance between quality control and quality loss
costs (Figure 2.3) is necessary for mass manufactured goods to be economically competitive
(Juran 1992).
Figure 2.3 – Relationship between quality control and cost (Juran 1992)
In the scope of this work, the costs of controlling quality are attributed to the cost of conforming to specified tolerances, and are modelled using cost‐tolerance functions (Section 2.2.3.1). Similarly, the costs of failing to control quality can be attributed to the number of manufactured assemblies not conforming to specification; these are represented by quality loss functions and process capability indices (Section 2.2.3.2).
2.2.3.1 Cost‐Tolerance models
Manufacturing with reduced variation typically requires added manufacturing effort such as:
more precise machinery, increased number of manufacturing steps, higher uniformity in
material properties and stricter process control (Feigenbaum 2012). This additional
manufacturing effort translates to increased cost. Various functions have been proposed for
representing the cost‐tolerance (or cost to accuracy) relationship for a range of
manufacturing processes. These relationships are summarised in Table 2.1.
Table 2.1 ‐ Proposed cost‐tolerance functions (Wu et al. 1988; Dong et al. 1994).
Cost‐Tolerance Function          Reference
Exponential                      (Zhang et al. 1993; Dong et al. 1994)
Exponential/Reciprocal power     (Michael 1981)
Linear                           (Edel 1964)
Piecewise Linear                 (Patel 1980)
Reciprocal                       (Chase 1988)
Reciprocal Power                 (Sutherland 1975)
Reciprocal squared               (Spotts 1973)
Although the broad applicability of cost‐tolerance functions can be limited by difficulties in
finding realistic cost models applicable to specific manufacturing scenarios (Hong et al.
2002), the cost‐tolerance functions serve as a generally reasonable representation of the
cost penalty associated with increased manufacturing precision and will be applied in this
work to model tolerance cost.
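As an illustrative sketch of how such a relationship is applied, the exponential form from Table 2.1 can be expressed as C(t) = c0 + a·exp(−b·t); the parameter values below are assumed for illustration and are not derived from any specific manufacturing process.

```python
import numpy as np

def exponential_cost(t, c0, a, b):
    """Exponential cost-tolerance model: C(t) = c0 + a * exp(-b * t).

    c0   minimum threshold cost approached at loose tolerances
    a, b curve-fitting parameters derived from experimental process data
    """
    return c0 + a * np.exp(-b * t)

# Illustrative (non-process-specific) parameter values; tolerances in mm.
tolerances = np.array([0.01, 0.05, 0.10, 0.20])
print(exponential_cost(tolerances, c0=1.5, a=8.0, b=25.0))
```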
2.2.3.2 Cost of quality loss
The cost incurred by a loss of product quality in a specific KPC can be represented by a quality loss function (QLF) (Taguchi 1989; Cho et al. 1997), as given in Equation (2.1):

L = k(σ² + (µ − τ)²)  (2.1)

where µ and σ² are the mean and variance, respectively, of a KPC with target value τ, and k is a weighting constant.
Originally developed by Taguchi (Taguchi 1989), the QLF is based on the theory that any
product manufactured outside of the nominal specification will ultimately result in a loss for
the manufacturer, customer or society in general on account of effects such as (Taguchi
1989):
premature wear
reduced reliability
increased warranty costs
reduced brand reputation
rework costs
waste due to increased scrap
The quality loss function accrues a loss (representing an associated cost) with any deviation
of the relevant KPC from its target value. Both the variance and the mean offset from the
target value contribute to a loss in quality. Consequently, the greater the deviation from the
nominal value, the higher the loss in quality.
The quality loss function offers a reasonable estimate of customer satisfaction in
circumstances where a direct empirical relationship between perceived product quality and
the cost associated with customer dissatisfaction is unknown to the manufacturer (Jeang
1999). The QLF has been demonstrated to lend itself well to addressing tolerance analysis
and synthesis problems (Jeang 1999; Cho et al. 2000; Choi et al. 2000; Feng et al. 2001).
Similar metrics to the QLF are Process Capability Indices (PCI), which measure the consistency and accuracy of manufacturing process outputs. PCIs are adopted in this work
and are discussed in further detail in Section 2.3.4.
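As a minimal computational sketch of Equation (2.1), with an assumed KPC target and weighting constant, the expected loss can be evaluated as follows:

```python
def expected_quality_loss(mu, sigma, tau, k):
    """Expected Taguchi quality loss, Equation (2.1): L = k*(sigma^2 + (mu - tau)^2).

    Both the variance and the mean offset from the target tau contribute to the loss.
    """
    return k * (sigma**2 + (mu - tau)**2)

# Illustrative KPC: a clearance targeted at 0.50 mm, with an assumed weighting k = 100.
print(expected_quality_loss(mu=0.52, sigma=0.03, tau=0.50, k=100.0))  # 0.13
```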
2.3 Tolerance analysis
Tolerance analysis is the study of the effect of variation of a parameter on the variability in
the functionality of a product part or assembly. The parameters studied are typically
dimensional and geometric, and are defined according to conventions of tolerancing
schemes (Section 2.3.2).
Parameters which are of particular relevance to functionality are referred to as Key Product
Characteristics (KPCs) (Lee et al. 1996; Zheng et al. 2008). The KPCs are defined in terms of
the parameters in an assembly by an assembly response function. An assembly response
function is defined according to methods of tolerance modelling (Section 2.4). The response
function may either be explicitly defined by an algebraic expression or may be implicitly
captured in a numeric form such as a CAD assembly or CAE model.
The Upper Specification Limit (USL) and Lower Specification Limit (LSL) are applied to
parameters and define the acceptable limits of variation. An assembly with KPCs that all lie
within defined specification limits is said to satisfy all functional requirements.
Manufacturing yield is defined as the percentage of assemblies that conform to the
specification limits of all KPCs. Yield is calculated according to worst‐case or statistical
tolerancing principles (Section 2.3.1); in the latter case, through the use of Uncertainty
Quantification (UQ) methods (Section 2.5). These topics are categorically reviewed in the
following sections. A summary of the key tolerance analysis nomenclature which will be
applied in this work is defined below.
Upper Specification Limit (USL) and Lower Specification Limit (LSL): Define the acceptable limits of variation in a parameter.
Yield: The percentage of assemblies which conform to the specification limits of all KPCs.
Uncertainty Quantification (UQ): The process of determining the probabilistic effects of input uncertainties on response metrics of interest in stochastic systems.
2.3.1 Worst‐case and statistical tolerancing
Tolerance analysis can be conducted according to two different yield requirements:
that all manufactured products must satisfy all functional requirements (worst‐case
tolerancing), or
a small percentage of manufactured products are permitted to violate functional
requirements (statistical tolerancing)
Worst‐case tolerancing accommodates the unlikely possibility that all parameters within an assembly are concurrently at the unfavourable extremes of their expected distributions. Perfect conformability (100% product yield) is guaranteed; however, the approach may require exceptionally tight tolerances, resulting in uneconomical production costs.
Statistical tolerancing permits non‐perfect conformability (less than 100% yield), allowing the associated tolerances to be relaxed to reduce manufacturing costs. Because it is more economical in terms of manufacturing costs, statistical tolerancing is typically applied in mass production (Hong et al. 2002). Statistical tolerancing is the main focus of
this research.
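The contrast between the two yield requirements can be sketched for a hypothetical three‐part linear stack (all dimensions, tolerances and distributions below are assumed for illustration): worst‐case analysis rejects a design which statistical analysis shows conforms in practically all cases.

```python
import numpy as np

# Hypothetical three-part linear stack: gap = housing - part1 - part2, with
# nominal gap 0.2 mm and a symmetric +/-0.1 mm tolerance on every dimension.
tol = 0.1

# Worst case: all dimensions simultaneously at their unfavourable extremes.
wc_min_gap = (30.0 - tol) - (14.9 + tol) - (14.9 + tol)

# Statistical: independent normal variation with the tolerance at 3 sigma.
rng = np.random.default_rng(2)
n = 100_000
gap = (rng.normal(30.0, tol / 3, n)
       - rng.normal(14.9, tol / 3, n)
       - rng.normal(14.9, tol / 3, n))

print(wc_min_gap)                  # -0.1: worst-case tolerancing rejects the design
print(100 * np.mean(gap > 0.0))    # ~99.97% yield under statistical tolerancing
```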
2.3.2 Tolerancing schemes
A tolerancing scheme is a method of defining allowable limits of variation in part or
assembly parameters. Two tolerancing schemes are in use: dimensional tolerancing, and
Geometric Dimensioning and Tolerancing (GD&T).
Dimensional tolerancing specifies the acceptable size of a part feature and any associated fit
between features of mating parts. The size of a feature is specified by a nominal (basic) size,
and an associated Lower Specification Limit (LSL) and Upper Specification Limit (USL).
Dimensional tolerancing is typically limited to linear dimensions of a part feature. This can
often restrict the ability to represent possible types of variation (Wade 1967).
2.3.3 Manufacturing variation distributions
The stochastic nature of manufacturing results in process outputs that deviate from the
intended nominal values. Statistical tolerance analysis often assumes that the variation is
distributed according to the normal (Gaussian) distribution (Figure 2.4) (Oakland 2007).
The assumption of a normal distribution can often be valid as such distributions are
commonly encountered in manufacturing processes. Furthermore, due to the central limit
theorem, normal distributions often arise in mechanical assemblies. The central limit
theorem states that when populations of means are created from sample data sets of a given size, drawn from one underlying parent population, then irrespective of the shape of the distribution of the parent population:
The mean of the population of means will equal the mean of the parent population from which the population samples were drawn.
The standard deviation of the population of means will equal the standard deviation of the parent population divided by the square root of the sample size.
The population of means will approach a normal distribution with increasing sample size.
If a measured variable is a combination of several other uncorrelated variables, each subject to variation of any distribution, then the measured variable will be subject to variation that approaches a normal distribution as the number of combined variables increases.
As such, an assembly parameter that depends on the combination of various part features
all subject to some uncorrelated variation, will tend to show a normal distribution with an
increasing number of contributing part features regardless of their distribution type
(Johnson et al. 2004; Miller et al. 2010).
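This convergence can be demonstrated numerically; in the sketch below each contributing feature is deliberately non‐normal (uniform, with an assumed ±0.1 mm range), yet the skewness and excess kurtosis of the stack approach the normal values of zero as features accumulate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Each contributing feature is uniform on +/-0.1 mm; the assembly stack is
# their sum. Skewness and excess kurtosis tend to zero (the normal values)
# as the number of contributing features grows.
for n_features in (1, 2, 5, 20):
    stack = rng.uniform(-0.1, 0.1, size=(100_000, n_features)).sum(axis=1)
    print(n_features, round(float(stats.skew(stack)), 3),
          round(float(stats.kurtosis(stack)), 3))
```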
In some applications the assumption of a normal distribution may not be valid. For instance,
processes with a hard limit will typically show a distribution skewed away from normal and
truncated near a limiting value (Wright 1995). One such example is a hole‐drilling process
where the distribution is truncated at the drill size diameter. A strategy for handling non‐
normal data with analysis tools founded on an assumption of normally distributed variables
is to transform the data with transformation techniques such as Nataf or Box‐Cox (Box et al.
1964; Armen Der Kiureghian et al. 1986; McRae et al. 1995). For instance, a variable with a
lognormal distribution can be transformed into a normal by taking its natural logarithm.
Similarly, taking the cubed root of a variable with a gamma distribution will transform it to
an approximately normal distribution. Approaches for the accommodation of non‐normal
distributions in tolerancing have been comprehensively documented in the literature
(Nigam et al. 1995; Zhang et al. 1999; Hyun Seok et al. 2002; Kharoufeh et al. 2002).
In this work, normally distributed parameters are commonly applied. Where non‐normal distributions require consideration, distribution transformation techniques are applied.
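As an illustration of such transformations (using synthetic data rather than measurements from any process in this work), the sketch below normalises a lognormally distributed output both exactly, via the logarithm, and empirically, via Box–Cox:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# A lognormally distributed output (skewed, as from a process with a hard
# lower limit) mapped to an approximately normal variable before analysis.
x = rng.lognormal(mean=0.0, sigma=0.4, size=50_000)
x_log = np.log(x)              # exact normalising transform for a lognormal
x_bc, lmbda = stats.boxcox(x)  # Box-Cox estimates the power transform from data

print(round(lmbda, 3))         # near 0, i.e. Box-Cox recovers the log transform
print(round(float(stats.skew(x)), 3), round(float(stats.skew(x_log)), 3),
      round(float(stats.skew(x_bc)), 3))
```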
2.3.4 Process Capability Indices (PCI)
A manufacturing process output distribution can be quantified by four statistical moments:
the mean (μ), standard deviation (σ), skewness (γ) and kurtosis (β). Process Capability
Indices (PCI) have been developed to quantify these statistical attributes in terms of the
required specification limits (Pearn et al. 2006). PCIs are similar metrics to the quality loss
function (Section 2.2.3.2) as they measure the consistency and accuracy of manufacturing
process outputs. PCIs compare the specification limits to the 6σ limits of the manufacturing process distribution (i.e. 99.73% of the predicted population) and nominal target values (Pearn et al. 2006). A higher index indicates a more accurate process. Process Capability
measurements are subject to three fundamental assumptions (Montgomery 2001):
The sampled data is representative of the associated population.
The process is under statistical control – i.e. there are no assignable causes of variation
such as machine adjustment or defective materials.
The process is normally distributed (some PCIs can accommodate non‐normal
distributions, for example Section B.4).
Additional detail about specific process capability indices is provided in Appendix B. PCIs are an efficient way of measuring quality loss using a dimensionless metric while giving a direct indication of the expected yield. In this dissertation PCIs are applied to quantify the expected variation in stochastic parameters.
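The two most elementary indices can be computed directly from sample moments, as in the following sketch; the formulas are the standard textbook definitions, and the process data are synthetic:

```python
import numpy as np

def process_capability(samples, lsl, usl):
    """Cp and Cpk from sample moments (standard textbook definitions).

    Cp  = (USL - LSL) / (6 sigma)              potential capability
    Cpk = min(USL - mu, mu - LSL) / (3 sigma)  capability accounting for centring
    """
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Synthetic, slightly off-centre process: mean 10.02, sigma 0.03, limits 9.9-10.1.
rng = np.random.default_rng(5)
cp, cpk = process_capability(rng.normal(10.02, 0.03, 5_000), lsl=9.9, usl=10.1)
print(round(cp, 2), round(cpk, 2))   # roughly 1.11 and 0.89
```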
2.4 Tolerance modelling
Tolerance modelling involves the modelling of the relationship between the tolerance
information associated with part features and their effect on assembly functionality. The
relationship captured by the tolerance model is referred to as the assembly response
function.
Various approaches to tolerance modelling have been developed which offer different levels
of sophistication and capability. Tolerance modelling is typically achieved using Computer
Aided Tolerancing (CAT) tools. The most common types of tolerance modelling in use are:
Manual tolerance charts (Fortini 1967; Wade 1967)
Parametric CAT (Nigam et al. 1995; Prisco et al. 2002)
Abstracted Geometry CAT (e.g. vector‐loop) (Chase et al. 1995; Gao et al. 1998)
Multi‐variate regions (Roy et al. 1998; Houten et al. 1999; Mujezinovic et al. 2004)
These methods offer varying levels of implementation of the general tolerance analysis
process. With the advancement of CAD solid modelling, efforts have been made to develop
tolerance modelling methods which incorporate tolerance information as an intrinsic part of
the CAD design process. A number of different tolerance modelling approaches have been
suggested, however there has been little consensus towards a standard approach in this
area. The research efforts in this field are continuing (Hong et al. 2002). A comparison of
tolerance modelling approaches is shown in Table 2.2.
Table 2.2 ‐ Comparison of various Tolerance Analysis methods

Method                  | Tolerancing scheme     | GD&T standard support | External CAE tool integration | Yield estimation        | Automation
Tolerance charts        | Dimensional, Geometric | Complete              | Limited                       | Worst‐case              | Manual
Parametric CAT          | Dimensional, Geometric | Partial               | Limited                       | Worst‐case, Statistical | Automated
Abstracted Geometry CAT | Dimensional, Geometric | Partial               | Limited                       | Worst‐case, Statistical | Automated
Multi‐variate regions   | Dimensional, Geometric | Complete              | Limited                       | Worst‐case, Statistical | Automated
2.4.1 Manual tolerance charts
Manual tolerance charting is the most elementary system of tolerance analysis. The method
is also referred to as linear stack‐up analysis and is a manual worst‐case procedure based on
calculating the extreme values of assembly clearances or interferences of interest in an
assembly (these are typically KPCs). A reference coordinate system is established at the
extremity of the assembly and the analysed clearance or interference dimension (KPC) is
determined by an arithmetic sum of individual part feature sizes which constitute the
assembly. The tolerance chart is a table in which the arithmetic sum is carried out. As an
example, Figure 2.5 shows a simple assembly with two clearances as KPCs. The
corresponding tolerance chart is shown in Table 2.3. The chart contains the name of the
parameter; the corresponding upper or lower specification limits; the difference between
upper and lower specification limits; and the sign by which the parameters contribute to the
assembly KPC. The resultant value of an assembly KPC is shown in the bottom row, where a
(+) indicates that the value results in a clearance and a (‐) indicates an interference.
Figure 2.5 ‐ Simple mechanical assembly example with all parameters X1 to X5 subject to a dimensional
tolerance of +/‐ 0.1mm
Table 2.3 ‐ Tolerance chart for simple assembly example in Figure 2.5.
All parameters X1 to X5 subject to a dimensional tolerance of +/‐ 0.1mm

KPC1:
Contributor | Max USL/LSL | Sign | Min USL/LSL | Sign | Difference
X1          | 40.1        | +    | 39.9        | +    | 0.2
X2          | 36.9        | ‐    | 37.1        | ‐    | 0.2
SUM         | 3.2         | +    | 2.8         | +    | 0.4

KPC2:
Contributor | Max USL/LSL | Sign | Min USL/LSL | Sign | Difference
X1          | 40.1        | +    | 39.9        | +    | 0.2
X3          | 12.9        | ‐    | 13.1        | ‐    | 0.2
X4          | 13.9        | ‐    | 14.1        | ‐    | 0.2
X5          | 10.9        | ‐    | 11.1        | ‐    | 0.2
SUM         | 2.4         | +    | 1.6         | +    | 0.8
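The chart arithmetic is readily mechanised; the following sketch (Python) reproduces the worst-case sums of Table 2.3 from (nominal, tolerance, sign) triples:

def worst_case(contributors):
    # contributors: (nominal, tolerance, sign) triples; returns the maximum
    # and minimum of the resulting clearance (+) or interference (-)
    hi = sum(s * n + t for n, t, s in contributors)
    lo = sum(s * n - t for n, t, s in contributors)
    return round(hi, 4), round(lo, 4)

# Assembly of Figure 2.5, all dimensions toleranced at +/- 0.1 mm
kpc1 = [(40.0, 0.1, +1), (37.0, 0.1, -1)]
kpc2 = [(40.0, 0.1, +1), (13.0, 0.1, -1), (14.0, 0.1, -1), (11.0, 0.1, -1)]
print("KPC1 (max, min):", worst_case(kpc1))   # (3.2, 2.8) as in Table 2.3
print("KPC2 (max, min):", worst_case(kpc2))   # (2.4, 1.6) as in Table 2.3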
2.4.2 Parametric CAD based CAT
In parametric CAD based Computer Aided Tolerancing (CAT), the constraint equations
inherent to CAD part and assembly models are used for tolerance modelling. The modelling
approach in typical CAD software is history based. Three‐dimensional solid geometric
models are created from two‐dimensional sketches which are subject to three‐dimensional
operations such as extrusions, sweeps and lofts. If any part dimensions need to be altered,
the model is reverted to the relevant point of change (such as a sketch), the relationships or
dimensions are updated, and subsequent operations reapplied in series (Shah et al. 1995;
Rao 2004).
CAD assemblies are defined by multiple CAD parts whose interaction is constrained to
restrict the degrees of freedom between parts. If any part geometry is modified, the
assembly constraints are re‐evaluated to rebuild the assembly model. Dimensions,
relationships and operations can be defined parametrically, providing a means of
implementing tolerance analysis by varying individual dimensions, either by the worst‐case
or statistical approaches (Section 2.5).
Parametric CAD based modelling can be applied to tolerance modelling and analysis with
the following methodology (Prisco et al. 2002):
1. Creating the nominal model topology of parts with two‐dimensional sketches and three‐
dimensional construction operations. The model topology needs to include all desired
geometric elements and their possible variants due to any associated tolerances.
2. Formulating parametric relationships which link the geometric elements of the part
model to parametric numerical variables.
3. Defining an assembly from individual part models with parametric interaction
relationships between part features. The interaction relationships constrain the degrees
of freedom between parts with contact, offset or alignment constraints. Any assembly
tolerances need to be accommodated with parametric interaction relationships which
allow for possible variants in the assembly configuration due to any associated
tolerances.
4. Applying nominal values to all parametric variables.
5. Instructing the CAD software modelling system to apply a general solution procedure to
the parametric part and assembly model equations resulting in an evaluated model in
which the defined relationships are satisfied.
6. Creating variants of the part and assembly models by changing values of the parametric
variables to reflect variation associated with applied tolerances and subsequently re‐
executing the model solution procedure.
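As an illustration of steps 5 and 6 only, a single model variant might be generated as follows; the set_parameter, rebuild and measure calls are hypothetical placeholders for whatever scripting interface a particular CAD tool exposes, and do not correspond to any specific product:

import random

def evaluate_variant(cad_model, tolerances, kpc_name):
    # Perturb each parametric variable within its tolerance band (step 6),
    # re-execute the model solution procedure and measure a KPC
    for name, (nominal, tol) in tolerances.items():
        cad_model.set_parameter(name, random.uniform(nominal - tol, nominal + tol))
    cad_model.rebuild()                 # re-run the constraint solution (step 5)
    return cad_model.measure(kpc_name)  # e.g. an assembly clearance dimension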
The idealised constraint capabilities of CAD modelling provide an acceptable representation
of a product assembly in many cases. However, parameter combinations can exist that do
not readily represent realistic mating conditions for a given set of assembly constraints
(Shah et al. 2007). For example, features which nominally result in a clearance may interfere
due to geometric variation, as shown in Figure 2.6.
Figure 2.6 ‐ Changing assembly contact conditions due to manufacturing variation of parts:
(i) realistic part interaction; (ii) part interaction as represented by CAD assembly constraints
(initially defined CAD assembly contact constraint). CAD systems can be limited in their ability
to automatically modify part mating conditions to reflect certain realistic part contacts within
an assembly.
To correctly model such a scenario in an automated manner, it is necessary to
accommodate intermittent part contact. Such capabilities may not be readily available
within current CAD software. Alternative assembly constraints that
represent different contact scenarios may need to be defined and selectively enabled
through user input, depending on the KPC under analysis (Eiteljorg et al. 2003; Rao 2004;
Stroud et al. 2011). With such user input it is possible to avoid part interference and
unrealistic representation of the physical interaction such as that shown in Figure 2.6.
Additionally, the parametric CAD approach to tolerance modelling is subject to the following
limitations:
Statistical tolerance analysis may require the consideration of a large number of model
permutations. Due to their history based nature, parametric CAD models may be
computationally expensive for statistical tolerance analysis.
CAD assembly constraint systems are typically point‐to‐point constraints. This makes
them inconsistent with current tolerance standards which are based on tolerance zones,
not point‐to‐point variation.
Some forms of geometric variation defined in GD&T standards are not readily
accommodated using parametric CAD models and are still under development (Shen
2005).
Despite the identified limitations, parametric CAD tolerance modelling has advantages
which make it an attractive form of tolerance modelling, especially when considering
assemblies which are subject to loading. The advantages are discussed in Chapter 4.
Examples of parametric CAD tolerance modelling are provided in the case studies presented
in this work in Chapters 3, 4 and 5.
2.4.3 Abstracted geometry CAT and multi‐variate regions
In abstracted geometry CAT systems, an independent geometric model is developed to
mitigate the limitations of the parametric CAD approach; in particular the computational
expense of history‐based CAD model updates. The approach involves the user importing
the CAD model into the CAT system and interactively creating a geometry model
superimposed on the original CAD data. This abstracted model describes the possible part
variation, part mating relationships and resultant assembly response functions without the
model rebuild penalty associated with CAD model construction history (see, for example,
Prisco et al. 2002; Chiesi et al. 2003; Shen 2005).
One example of an abstracted geometry tolerance modelling method is referred to as
Technologically and Topologically Related Surfaces (TTRS) (Clément et al. 1991). TTRS is
based on the concept of pairs of surfaces, associated with a common solid, which are
functionally related in an assembly. Interactions between the surfaces of different solids are
used to model various tolerance types. The approach classifies the different possible
interactions with several distinct surface types and TTRS associations (relative position
possibilities of the surfaces). The tolerance zones created by the associated surfaces are
represented using torsors or matrices (Clement et al. 1993; Desrochers et al. 1994). A CAT
tool based on the TTRS method has been developed; however, its applicability is limited to
specific tolerance modelling scenarios (Section 2.7) (Salomons et al. 1995).
Vector‐loop tolerancing is another example of an abstracted geometry tolerance modelling
method (Chase et al. 1995; Gao et al. 1998). The approach represents part dimensions and
clearances between part features within an assembly with vector notation.
Tolerances are represented by variation in the vector magnitude. Mating relationships
between parts are defined using relationships based on three‐dimensional kinematic joints.
These joints define the degrees of freedom associated with part interaction. Assembly
response functions are analytically described by considering the accumulation of kinematic
joint relationships between vectors ‐ these response functions are referred to as vector‐
loops. The advantages of this approach when compared with the parametric CAT tolerance
modelling are the reduced computation cost associated with: updating model geometry;
and, evaluation of the assembly response functions. The CAT software tool CETOL (Sigmetrix
2012) was originally based on the vector‐loop tolerancing model. However, the vector‐loop
method only permits one assembly clearance per vector‐loop and has difficulty
accommodating over‐constrained assemblies. These limitations restrict the applicability of
the method in more complex tolerance modelling scenarios.
Other abstracted geometry based methods have been developed for CAT analysis with the
common objectives of reducing computational cost and extending support for GD&T
tolerancing schemes (Prisco et al. 2002). One approach presented in the literature is the
GapSpace tolerance modelling method (Zou et al. 2004). The method considers the
kinematics of assembly contacts in order to determine if parts will assemble without
interference. The claimed advantage, especially in comparison to parametric CAD based
tolerance modelling (Section 2.4.2), is the ability to accurately model intermittent part
interaction and to offer a realistic representation of the physical interaction between assembled
parts (Morse et al. 2005). However, efforts to implement the method into a software tool
which offers comprehensive compatibility with existing CAD modelling software are still
ongoing (You 2008).
Another alternative tolerance modelling approach which has been developed is the
Attribute Graph Model. The method is a graph‐based abstracted CAT approach based on a
separate consideration of linear and angular variation and classification of possible variation
with abstracted degrees of freedom between points, lines and planes (Shah et al. 1992). The
model has not seen much practical implementation and the authors have since focused on
the development of an alternative model based on hypothetical variation volumes.
The abstracted geometry methods mentioned so far are mostly point‐based, in that the
variation in part geometry is accommodated only at specifically defined point locations. Due
to this point‐based nature, it is difficult to accommodate all possible variation aspects
defined in GD&T standards (Shah et al. 2007).
To address limitations of point‐based tolerance modelling, research has focused on
developing a comprehensive mathematical scheme of representing all possible forms of
variation defined in GD&T standards such as (ASME 2009) without the creation of a closed‐
form solution. The methods in this field are generally classified as Multi‐Variate region
tolerance modelling. The most notable of these approaches is the T‐Maps method, which
utilizes hypothetical variation volumes of all possible locations and variations which can
arise in a part subjected to GD&T tolerance types (Roy et al. 1998; Davidson et al. 2002; Shah
et al. 2007). Overlaying the T‐Maps of individual parts allows for the assembly response
function to be defined. However, the method has currently not been implemented in a
practical, readily available CAT software tool.
Despite the many abstracted tolerance modelling methods which have been developed,
they share the following limitations:
Additional expertise, tools and time are required to create the abstracted geometry
model and interpret the results of analysis.
Current abstracted tolerance modelling methods are unable to integrate with external
CAE tools for tolerance modelling of assemblies subject to loading.
The CAE integration limitation is especially relevant to the objective of this research of
developing tolerance analysis and synthesis methods based within the modelling
environment of existing standalone CAD/E tools. This issue is discussed in further detail in
Chapter 4.
2.5 Uncertainty Quantification (UQ) methods
The two approaches to yield estimation in tolerance analysis are worst‐case and statistical
tolerancing. Statistical tolerancing is the approach typically applied for mass production as it
results in a more economical choice of tolerances (Section 2.3.1).
The key objective of statistical tolerance analysis is to estimate the assembly yield by
quantifying the expected number of manufactured assemblies that will satisfy all KPC
requirements for a given set of part and assembly tolerances. An estimate of yield requires
that the statistical moments of the distributions of the assembly KPCs be known. These can
be estimated by Uncertainty Quantification (UQ) methods.
Uncertainty quantification is the process of determining the probabilistic effects of input
uncertainties on response metrics of interest in stochastic systems. The objective of all UQ
methods is to characterize the probabilistic response of a system whose behaviour is
dependent on stochastic variables. The probabilistic systems response is characterized by its
probability density function (distribution) defined by the associated statistical moments: the
mean (μ), standard deviation (σ), skewness (γ) and kurtosis (β).
A number of UQ methods have been demonstrated in the literature and can be classified as
either sampling‐based or analytical (Nigam et al. 1995; Lee et al. 2009), as shown below:
Sampling‐based methods:
Monte Carlo simulation (Hammersley 1975; Rubinstein 1981; Nigam et al. 1995;
Skowronski et al. 1997; Cvetko et al. 1998)
Quasi‐Monte Carlo (Niederreiter 1992)
Latin Hypercube simulation (McKay et al. 1979; Keramat et al. 1997)
Analytical methods:
Root Sum of Squares (RSS) (Mansoor 1963)
Taguchi method (Taguchi 1978; D'Errico et al. 1988; Nigam et al. 1995)
Hasofer‐Lind index (Parkinson 1982; Lehtihet et al. 1991)
Extended Taylor series approximation (Evans 1975)
Most probable point (Fiessler et al. 1979)
First and second order reliability (Hohenbichler et al. 1987)
Full Factorial Numerical Integration (Hyun Seok et al. 2002)
Taylor series or perturbation (Ghanem et al. 2003)
Univariate dimension reduction (Rahman et al. 2004; Rahman et al. 2006)
Polynomial Chaos Expansion (PCE) (Schoutens 2000; Xiu 2003; Lovett et al. 2006)
The application of specific UQ methods depends on the intent and constraints of the
problem under consideration, including the:
Complexity of implementation (simulation‐based methods, for instance, are typically
trivial to implement, in contrast to complex analytical methods such as PCE).
Computational cost and available computational budget.
Statistical moments of importance (typically in robust design problems such as tolerance
analysis, mean and standard deviation are of greater interest than higher order
moments).
Required statistical moment estimation accuracy (for low precision tolerances, higher
moment estimation error may be acceptable).
Input variable distribution (some UQ methods may not be compatible with non‐normal
input parameter distributions).
The model type to be accommodated; where the system response function of the model
is either available analytically (explicit) or defined in a numerical model (implicit).
The above issues are considered in the following sections.
2.5.1 Sampling based methods
Sampling based methods are broadly applicable, robust and easily implemented.
Furthermore, their convergence (the number of system evaluations required to achieve a given
error in estimating a desired system response) is typically independent of the dimensionality
of the problem and the smoothness of the system response function (Gerstner et al. 1998).
However, sampling methods typically show slow convergence and can be prohibitively
computationally expensive when each model evaluation involves lengthy numerical
simulations.
One method to address the computational expense associated with simulation methods is
by complementing the simulation with Response Surface Modelling (RSM). The real,
computationally expensive system model is evaluated a select number of times to map the
system response function. A computationally inexpensive surrogate model (meta‐model) is
subsequently fitted to the evaluated points using RSM techniques (Jeang 1999). A sampling‐
based UQ method is then applied to the meta‐model to obtain approximated estimates of
the statistical moments of the system response at a reduced computational cost. This
approach, however, is accompanied by the difficult problem of how to effectively include
the meta‐model fitting approximations in the statistical moment estimates derived from
sampling‐based UQ of the meta‐model. The result can be highly application dependent and
unpredictable, requiring an empirical analysis of fitting errors; this limitation narrows the
applicability of RSM techniques.
2.5.1.1 Monte Carlo (MC) simulation
Monte Carlo simulation generates a probabilistic estimate of the system response by
aggregating the system outputs for a set of input variables randomly selected from their
associated probability distributions. The method is simple to implement, and robust against
different input parameter distributions. MC simulation statistical moment estimates
converge to the exact result at a rate proportional to $1/\sqrt{N}$, where $N$ is the number
of simulations, and convergence is independent of the problem dimensionality and
smoothness of the response function (Frances et al. 2005). For example, to reduce the error
in the moment estimate by one order of magnitude requires 100 times more data. If greater
accuracy is required with
additional samples, already evaluated samples can be reused. MC simulation typically
provides a performance baseline for assessment of other UQ methods.
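A minimal MC tolerance analysis sketch is given below (Python with NumPy), applied to KPC1 = X1 - X2 of the assembly in Figure 2.5 under the assumption that the +/- 0.1 mm tolerance bands correspond to +/- 3σ of normally distributed dimensions:

import numpy as np

def monte_carlo_uq(response, samplers, n_samples=100_000, seed=0):
    # Draw each stochastic parameter from its distribution and aggregate
    # the black-box assembly response into moment estimates
    rng = np.random.default_rng(seed)
    x = np.column_stack([draw(rng, n_samples) for draw in samplers])
    kpc = response(x)
    return kpc.mean(), kpc.std(ddof=1)

# KPC1 = X1 - X2 of Figure 2.5; +/- 0.1 mm tolerances treated as +/- 3-sigma
samplers = [lambda rng, n: rng.normal(40.0, 0.1 / 3, n),
            lambda rng, n: rng.normal(37.0, 0.1 / 3, n)]
mu, sigma = monte_carlo_uq(lambda x: x[:, 0] - x[:, 1], samplers)
print(f"mean = {mu:.4f}, std = {sigma:.5f}")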
In MC based sampling, the unconstrained random selection of variables can often lead to
poor coverage of the distribution of stochastic variables, especially at the tails of the
distribution, or when the number of samples is small. For example, achieving a 99%
probability of 1% error in sample variance from the desired value, 133,000 random samples
may be required (Huntington et al. 1998). Distribution tail statistics are especially important
for statistical tolerance analysis where interest often lies in estimating yield values near
100%.
A derivative technique is Quasi‐Monte Carlo sampling which aims to improve the
convergence of the MC method by selecting sampling points with more regular spacing
across the parameter space than that achieved with random sampling (Niederreiter 1992).
However the rate of improvement over MC can be small in practice (Morokoff et al. 1994).
2.5.1.2 Latin hypercube (LHC) simulation
Latin Hypercube (LHC) is a constrained sampling based UQ method in which the probability
distribution for stochastic variables is sampled in stratified sections of equal probability. The
equal probability strata are defined by the nature of the probability distribution; e.g. for a
normal distribution the strata width increases away from the mean due to the associated
reduction in probability density (Figure 2.7). For N desired samples, the distribution is
sectioned into N strata of equal probability, and a sample is randomly placed in each
individual stratum. The resulting set of samples avoids clustering (undesirably close proximity
of sample points) and ensures a relatively uniform distribution over the probability
density function range (McKay et al. 1979; Keramat et al. 1997).
Figure 2.7 ‐ Normal distribution with LHC sampling strata of equal probability (N=8)
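A minimal sketch of this stratified sampling scheme for a single normal variable is given below (Python with NumPy and SciPy); one random sample is placed in each of the N equal-probability strata via the inverse CDF:

import numpy as np
from scipy.stats import norm

def lhc_normal(mu, sigma, n, rng):
    # One random point inside each of n equal-probability strata,
    # mapped through the inverse CDF of the normal distribution
    u = (np.arange(n) + rng.uniform(size=n)) / n
    rng.shuffle(u)                      # randomise the stratum ordering
    return norm.ppf(u, loc=mu, scale=sigma)

rng = np.random.default_rng(seed=4)
print(np.sort(lhc_normal(mu=0.0, sigma=1.0, n=8, rng=rng)))  # cf. Figure 2.7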
First and second order LHC statistical moment estimates converge to the ideal result at a
rate proportional to $1/N$, where $N$ is the number of simulations. For example, to
reduce the error in the moment estimate by one order of magnitude requires ten times
more data. However, LHC sampling can result in poor estimates of higher order moments, as
the number of independent (uncorrelated) variables increases. Due to the LHC constrained
sampling scheme, correlations among variables can occur and high order moments and
output probability distribution estimates may not be accurate (Keramat et al. 1997;
Huntington et al. 1998). Due to these potential sources of error, the broad applicability of
LHC sampling can be limited.
2.5.2 Analytical methods ‐ Elementary
As sampling‐based methods such as MC are typically not founded on simplifying
assumptions associated with the UQ problem, they have the advantage of being able to
accommodate a broad range of UQ problems with low implementation demands. However,
this may also be a disadvantage when valid simplifications of the UQ problem are possible due
to, for instance, input parameters with a single distribution type, or smooth response
functions. Analytical UQ methods exploit the extra information available in such UQ
problems to significantly improve convergence efficiency.
The analytical UQ methods considered in this work have been categorised as either
elementary or advanced based on their associated efficiency, applicability to different
problems, simplifying assumptions and complexity of implementation. Elementary methods
typically have a comparatively low efficiency, are only compatible with normal parameter
distributions and may require the assembly response function to be analytically defined.
Advanced methods can accommodate any parameter distribution type and can be applied
to implicitly defined assembly response functions while offering higher efficiency.
Some elementary analytical methods widely associated with the field of tolerance analysis
include the Root Sum of Squares and Taguchi Methods.
2.5.2.1 Root Sum of Squares (RSS) method
The Root Sum of Squares (RSS) method allows for the analytical computation of the
statistical moments if the response function is analytically definable and linearly dependent
on the associated parameters i.e. it can be represented in the form of Equation (2.2).
$f(X_1, X_2, \ldots, X_n) = a_1 X_1 + a_2 X_2 + \cdots + a_n X_n$    (2.2)

where the $X_i$ are parameters and the $a_i$ are constants, for $n$ parameters.

The RSS method is based on the assumption that the $X_i$ are statistically independent and
normally distributed (Mansoor 1963). Under these assumptions it is possible to determine
the first two statistical moments of the system response from equations (2.3) and (2.4),
respectively:

$\mu_f = a_1 \mu_1 + a_2 \mu_2 + \cdots + a_n \mu_n$    (2.3)

$\sigma_f = \sqrt{a_1^2 \sigma_1^2 + a_2^2 \sigma_2^2 + \cdots + a_n^2 \sigma_n^2}$    (2.4)
To apply the RSS method to non‐linear response functions, a Taylor’s series expansion with
truncated high order terms may be applied to linearize the function (Evans 1975). However,
the method can be computationally expensive and requires that the response function be
explicitly known and differentiable. As the required assembly response functions in
tolerance analysis are often difficult or impractical to define analytically, the applicability of
the RSS method can be limited in CAD and CAT modelling environments involving complex
assemblies.
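For response functions that are already linear, equations (2.3) and (2.4) evaluate directly; the sketch below (Python with NumPy) applies them to KPC1 = X1 - X2 of Figure 2.5, again treating the +/- 0.1 mm tolerances as +/- 3σ bands:

import numpy as np

def rss_moments(coeffs, means, stds):
    # Equations (2.3) and (2.4): moments of f = a1*X1 + ... + an*Xn for
    # statistically independent, normally distributed X_i
    a = np.asarray(coeffs, dtype=float)
    mu = float(np.dot(a, means))
    sigma = float(np.sqrt(np.sum((a * np.asarray(stds)) ** 2)))
    return mu, sigma

# KPC1 = X1 - X2 from Figure 2.5, treating +/- 0.1 mm as a +/- 3-sigma band
mu, sigma = rss_moments([1.0, -1.0], [40.0, 37.0], [0.1 / 3, 0.1 / 3])
print(f"mean = {mu:.4f}, std = {sigma:.5f}")   # 3.0000, 0.04714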
2.5.2.2 Taguchi method
The Taguchi method of UQ in tolerance analysis is based on Taguchi’s Design Of Experiments
(DOE) methodology (Taguchi 1978). The assembly response function is evaluated (whether
it be analytically defined or implicitly captured with a numerical model such as a CAD
assembly) with a full factorial, three‐level experimental design. The Taguchi method is based
on the assumption that the assembly response function parameters are statistically
independent and normally distributed. The method achieves a computational saving by
effectively approximating the mean and standard deviation of the continuous normal
distribution of the part parameters with a discrete distribution defined by 3 points (Nigam et
al. 1995). The three levels for each part parameter of the assembly response function
correspond to: the parameter mean $\mu$; a high level of $\mu + \sigma\sqrt{3/2}$; and a low
level of $\mu - \sigma\sqrt{3/2}$. The response function is evaluated at all $3^n$ combinations
of the levels of the $n$ parameters. The result is a population of KPC values from which
statistical moments can be calculated using standard statistical formulae. The method
provides a good estimate of the statistical moments and can also be modified to
accommodate non‐normal distributions (D'Errico et al. 1988). However, the number of
sample points required grows exponentially with the number of part parameters and can
quickly become impractical. In such cases,
MC methods are a preferable choice as they are not founded on any simplifying
assumptions and may provide superior performance (D'Errico et al. 1988).
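A minimal sketch of the three-level full factorial procedure is given below (Python with NumPy); for a linear response the estimated moments coincide with those of the RSS method in Section 2.5.2.1:

import itertools
import numpy as np

def taguchi_uq(response, means, stds):
    # Each normal parameter is replaced by an equally weighted discrete
    # 3-point distribution at mu and mu +/- sigma*sqrt(3/2), preserving the
    # mean and variance; the response is evaluated at all 3**n combinations
    levels = [(m - s * np.sqrt(1.5), m, m + s * np.sqrt(1.5))
              for m, s in zip(means, stds)]
    kpc = np.array([response(c) for c in itertools.product(*levels)])
    return kpc.mean(), kpc.std()

# Linear example: KPC1 = X1 - X2 (Figure 2.5, +/- 0.1 mm taken as 3-sigma)
mu, sigma = taguchi_uq(lambda x: x[0] - x[1], [40.0, 37.0], [0.1 / 3, 0.1 / 3])
print(f"mean = {mu:.4f}, std = {sigma:.5f}")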
2.5.2.3 Other elementary analytical methods
Some additional UQ methods which can be classified as elementary include:
Hasofer‐Lind index method (Parkinson 1982; Lehtihet et al. 1991),
Extended Taylor series approximation (Evans 1975),
Most probable point method (Fiessler et al. 1979),
First and second order reliability (Hohenbichler et al. 1987).
A number of review articles focused on these methods have been presented (Evans 1975;
Parkinson 1982; Lehtihet et al. 1991; Lee et al. 2009). These elementary analytical UQ
methods will not be considered further in this dissertation as their applicability is limited
due to their simplifying assumptions (see Section 2.5.2) and as more contemporary
advanced analytical methods offer superior performance.
2.5.3 Analytical methods ‐ Advanced
Comparisons of various contemporary analytical UQ methods have been presented in the
literature (Haldar et al. 2000; Wojtkiewicz et al. 2001; Eldred et al. 2008; Eldred et al. 2009;
Lee et al. 2009). Based on the outcomes of these comparative studies, Polynomial Chaos
Expansion (PCE) offers the most potential for application in tolerance analysis and synthesis
problems due to:
Non‐intrusive¹ nature applicable to integration with existing CAD, CAE and PIDO tools
High efficiency and accuracy
Flexibility in accommodating various input parameter distributions
Ability to accommodate high dimensionality problems
Current high interest in the research community resulting in continual performance
improvements of PCE methods
¹ Non‐intrusive denotes the ability of the associated UQ method to estimate statistical moments without
requiring knowledge of the system function, only the system response for a set of known inputs. In the context
of tolerance analysis, a non‐intrusive UQ method can be used to estimate the stochastic distribution of
assembly Key Product Characteristics (KPCs) from a set of associated stochastic part or assembly parameter
values. The assembly response function does not need to be explicitly known in the form of an algebraic
expression, but can instead be simply captured as a “black‐box” form in a numeric CAD or CAE model. This
significantly reduces modelling complexity as in non‐trivial assemblies the assembly response function may be
analytically intractable.
The PCE method is based on a representation of the response function of a stochastic
system as a multi‐dimensional, orthogonal polynomial expansion in stochastic variables.
The PCE method offers the potential to be significantly more efficient than sampling based
UQ methods such as MC simulation by showing exponential convergence in the
estimation error of the mean and standard deviation. Furthermore, the method can be
applied in a non‐intrusive manner to problems where the system response function is
implicitly defined.
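As a minimal, one-dimensional illustration of the non-intrusive approach (Python with NumPy; the quadratic response function is arbitrary), the PCE coefficients in probabilists' Hermite polynomials can be projected with Gauss-Hermite quadrature and the moments read off from the coefficients:

import numpy as np
from numpy.polynomial import hermite_e as He

def pce_moments(f, mu, sigma, order=4):
    # Non-intrusive 1-D PCE for Y = f(X), X ~ N(mu, sigma): project the
    # black-box response onto probabilists' Hermite polynomials He_k
    xi, w = He.hermegauss(order + 1)      # Gauss-Hermite nodes and weights
    w = w / np.sqrt(2 * np.pi)            # normalise to a probability measure
    y = f(mu + sigma * xi)                # evaluate the black-box response
    fact, var, mean = 1.0, 0.0, 0.0
    for k in range(order + 1):
        ek = np.zeros(k + 1); ek[k] = 1.0
        ck = np.sum(w * y * He.hermeval(xi, ek)) / fact   # <y,He_k>/<He_k^2>
        if k == 0:
            mean = ck
        else:
            var += ck**2 * fact           # Var = sum_k c_k^2 * k!
        fact *= (k + 1)
    return mean, np.sqrt(var)

mean, std = pce_moments(lambda x: x**2 + 0.5 * x, mu=1.0, sigma=0.1)
print(f"mean = {mean:.4f}, std = {std:.4f}")   # exact: 1.5100, ~0.2504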
The PCE method will be considered in further detail in Chapter 5 of this dissertation.
2.6 Tolerance synthesis
There is some ambiguity associated with the usage of the term tolerance synthesis in the
literature. Tolerance synthesis in its most fundamental form may refer to the process of
allocating part tolerances that satisfy the functional requirements of an assembly, without
consideration of optimality. The solution procedure can be carried out by manual iteration.
A more sophisticated approach is to consider tolerance synthesis as a single‐objective
optimization problem with the objective of maximizing yield, where the minimum yield
requirements acts as a constraint. Alternatively, the single‐objective may be to minimise the
total cost of allocated tolerances.
Tolerance synthesis may also refer to a more challenging multi‐objective optimization
problem that is subject to the competing objectives of minimising tolerance cost and
maximising yield (or product quality) within the constraints imposed by the product design
requirements and manufacturing process characteristics (Hong et al. 2002). The definition
of tolerance synthesis adopted in this dissertation is:
Tolerance synthesis: The process of optimally allocating part tolerances in a product
assembly to maximize assembly yield (or product quality) and minimize tolerance cost,
within the constraints imposed by the product design requirements and manufacturing
process characteristics.
In such a tolerance synthesis problem, a tentative set of tolerances is analysed to determine
if the assembly yield requirements are met. If yield targets are not achieved, a more precise
set of tolerances needs to be selected. If yield targets are exceeded, a less precise (and
therefore less costly) set of tolerances may be selected. An optimization algorithm guides
the search for a set of tolerances which offer balanced performance in achieving both the
objectives of maximising yield and minimising tolerance cost.
A tolerance synthesis problem requires the application of a number of analysis and
simulation techniques, including: optimization algorithms; tolerance analysis and associated
UQ method; and, yield and cost‐tolerance estimation approaches. A number of tolerance
synthesis methods have been proposed with a range of different analysis techniques.
Early tolerance synthesis approaches were based on deterministic optimization algorithms
(such as non‐linear programming) involving a single cost‐tolerance function (Speckhart 1972;
Spotts 1973; Wilde et al. 1975; Ostwald et al. 1977; Parkinson 1982; Parkinson 1985). A
comprehensive survey of deterministic tolerance synthesis can be found in (Feng et al.
1997). Research has also focused on addressing a discrete tolerance synthesis problem
involving the most economical selection of manufacturing process from alternative cost‐
tolerance functions (Dong et al. 1989; Chase et al. 1990; Dong et al. 1990; Dong et al. 1991;
Zhang et al. 1993; Dong et al. 1994; Roy et al. 1997; Zhang et al. 1997).
Another area of research involves the application of concepts associated with Taguchi’s
quality loss functions (Section 2.2.3) to tolerance synthesis (Askin et al. 1988; Jeang 1994;
Anwarul et al. 1995; Jeang 1999; Cho et al. 2000; Choi et al. 2000; Feng et al. 2001).
Extensions to the concept involve incorporation of customer objectives into the tolerance
synthesis problem in addition to manufacturing quality concerns (Soderberg 1993;
Soderberg 1994).
To address the limitations associated with deterministic optimization algorithms, such as
difficulty in accommodating discrete variables and multiple objectives, as well as poor global
optima search performance, there is high interest in the application of metaheuristic
optimization algorithms to tolerance synthesis. Metaheuristic algorithms offer attractive
performance as they are typically not limited by assumptions about the problem being
optimized (such as smooth objective functions or continuous variables) and can search large
spaces of discrete candidate solutions efficiently (Deb 2004) (Section 2.6.1). A number of
applications of metaheuristic algorithms have been reported in the tolerance synthesis
research community:
Simulated annealing (Zhang et al. 1993; Dupinet et al. 1996)
Particle swarm optimization (Steiner et al. 2003; Zhou et al. 2006)
Evolutionary algorithms (Carpinetti et al. 1995; Iannuzzi et al. 1995; Kanai et al. 1995; Ji
et al. 2000; Forouraghi 2002; Singh et al. 2004; Kumar et al. 2007; Mazur et al. 2011)
Genetic algorithms have been widely adopted due to their robust performance attributes
and will be applied in this dissertation for tolerance synthesis (Holland 1992; Bäck 1996; Gen
et al. 2000; Haupt et al. 2004).
The major challenge in realizing effective tolerance synthesis is the impractically large
computational cost required as the level of complexity of the mechanical assembly under
analysis increases (Hong et al. 2002; Wenzhen et al. 2009; Wu et al. 2009; Rao et al. 2011).
Attempts to decrease this computational cost have focused mainly on reducing the solution
time of the associated tolerance model with simplifying assumptions, or surrogate
approximate models. Introducing such simplifications can, however, limit modelling
accuracy with excessively conservative solutions, or limit the ability to address more general
problems not compatible with the associated approximations (such as linearized assembly
response functions, parameter distributions with only a single distribution type or
symmetric tolerance specification limits) (Singh et al. 2009; Wenzhen et al. 2009; Wu et al.
2009). However, the overall computational cost of tolerance synthesis can be lowered not
only by reducing the solution time of the tolerance model, but also by reducing the number
of iterative simulations required as part of the adopted UQ method. There is a potential to
exploit this opportunity with the application of more efficient analytical UQ methods which
have recently been developed (for example Section 2.5.3). This opportunity is explored in
Chapter 5 of this dissertation.
2.6.1 Optimization
The aim of optimization is to select a best solution to a given problem objective from a set
of candidate alternatives within any associated constraints. The solution procedure typically
involves minimizing or maximizing an objective function (which defines the problem under
consideration) by systematically evaluating alternative candidates and using the objective
function response values to guide a search for a superior solution. A feasible solution that
minimizes (or maximizes, if that is the goal) the objective function is referred to as an
optimum solution.
An optimization problem is characterized by the following attributes:
1. Design parameters
A design parameter is a controllable input variable which influences the response of the
objective function. In tolerance synthesis the design parameters are part feature tolerances.
A design parameter may be continuous (such as a nominal part dimension) or discrete (such as
the cost associated with several alternative cost‐tolerance curves for a given tolerance).
Problems may involve a mixture of continuous and discrete design parameters. Continuous‐
variable optimization problems are typically less difficult to address (Deb 2004).
2. Constraints
A constraint is a condition that must be satisfied in order for a design to be feasible. An
optimization problem may have a closed domain which limits the possible candidates to be
evaluated (referred to as a constrained optimization problem) or an open domain where the
choice of candidates is unrestricted (unconstrained optimization problem). An example of a
tolerance synthesis constraint is a minimum allowable yield requirement for manufactured
product assemblies. Constraints can be accommodated explicitly by an optimization
algorithm or may be integrated into the objective function using the method of Lagrange
multipliers (Avriel 2003).
3. Objectives
An objective is a parameter that is to be maximized or minimized. The objective function
describes the relationship between the input and output parameters of the problem under
consideration. Depending on the field of application, the objective function may also be
referred to as a cost function or a utility function. In the tolerance synthesis field the
objective function is analogous to an assembly response function (Section 2.3). An
optimization problem may involve a single objective (such as maximising assembly yield) or
multiple objectives (such as maximising assembly yield and minimising tolerance cost). A
multi‐objective problem typically does not have a single optimum solution due to competing
objectives. Instead there exists a best set of solutions, known as Pareto‐optimal solutions,
which offer equivalently optimal performance, as superior performance for one objective
results in a compromise for another objective (Deb 2004). Furthermore, an optimization
problem may involve both local and global optimum solutions. A local optimum is a high‐
performing candidate design within a particular region of the design space; however, other
regions may exist which offer superior overall performance. A solution which maximizes
performance over the entire design space is known as a global optimum. Designers are
ideally interested in finding global optimum solutions (Nocedal et al. 1999).
4. Models
The objective function is typically characterized by an explicit or implicit design model. For
example, in tolerance synthesis the objective function (assembly response function) is
typically characterized by a CAT model of the product assembly (Section 2.7). The
comprehensiveness of the model is typically subject to the competing requirements of
model fidelity and analysis time.
5. Optimization algorithm
An optimization algorithm aims to intelligently search the feasible design space to identify
the best solution from the possible candidates (Section 2.6.2).
A review of optimization algorithms is presented in the following section. The review
focuses on the ability to address the thesis objective of tolerance synthesis of assemblies
subject to loading, within the modelling environment of existing standalone CAD/E tools.
2.6.2 Optimization algorithms
Deterministic algorithms are generally based on identifying local gradients of the objective
function (gradient‐based methods) or curvatures (direct search methods) which may lead to
a maximum or minimum (Avriel 2003; Audet et al. 2006). Deterministic optimization
methods are well suited to objective functions which are continuous and smooth (for
example, gradient‐based methods require the derivative of the objective function to be
known). An advantage of deterministic algorithms is efficient local convergence when the
search is near a globally optimum point. However, deterministic algorithms may dwell near
an identified local optimum and have difficulty finding a global optimum (Nocedal et al.
1999).
Metaheuristic methods are generally based on a stochastic approach which introduces
randomness into the optimization search process in order to explore the design space more
comprehensively by preventing dwelling at local optima. A significant number of
metaheuristic methods have been inspired by probabilistic physical phenomena which
exhibit optimization properties. For example, some of the most utilised algorithms are
based on swarm intelligence of ants (Ant colony optimization (Dorigo et al. 2006)), annealing
of metal (Simulated annealing (Kirkpatrick et al. 1983)), and biological evolution
(Evolutionary algorithms (Holland 1992)). Of the metaheuristic methods, evolutionary
algorithms consistently perform well in many optimization problems and have been
successfully applied in many fields of engineering (Fonseca et al. 1995; Bäck 1996; Zitzler et al.
1999). Among the evolutionary algorithm techniques, Genetic Algorithms (GA) show good
performance in tolerance synthesis problems as discussed in the following section.
2.6.2.1 Genetic algorithm (GA)
Genetic Algorithms (GA) are popular optimization algorithms inspired by the processes of
evolutionary biology. In general, GAs work in the following fashion (Holland 1992;
Bäck 1996; Gen et al. 2000; Haupt et al. 2004):
1. A random population of design parameters is initialised and coded into binary string
structures referred to as chromosomes.
2. The objective function is subsequently evaluated for the random population of design
parameters. The fitness of the design parameters (which is a measure of how well
objectives are satisfied) is evaluated.
3. The population of individuals is then improved through manipulations that are
analogous to the mechanics of natural selection. Commonly three operators are used:
selection, crossover and mutation. The selection operator chooses the best candidates
from the previous generation based on their fitness. This selection leads to the
formation of an intermediate candidate population, from which a subsequent
population (also referred to as a generation) is created. This population (or next
generation) is created by applying the crossover operator to the intermediate
candidates which combines the binary string structures of two or more candidates
(referred to as parents) to form a new candidate (offspring). The crossover process is
analogous to reproduction and biological crossover and is based on the notion that the
fitness of the offspring candidate has a chance of exceeding that of its
parents. To ensure comprehensive exploration of the design space by avoiding dwelling
at local optima, the last mutation operator aims to slightly perturb the evolution of
generations to avoid premature convergence. The mutation is achieved by creating
random bit variation in some binary string structures of the coded design parameters.
4. The outcome of the manipulation process is a new population of design parameters,
with a commonly increased fitness (although not guaranteed) due to the filtered
selection of best performers in the previous generation. The new population is once
again evaluated for fitness and subsequently re‐manipulated.
Multiple iterations, or generations, of the process are carried out until a termination
requirement is met, such as a limit on the number of generations, or when improvement in
the objective space becomes rare over subsequent iterations.
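A minimal single-objective GA sketch following steps 1 to 4 is given below (Python with NumPy). The fitness function is an illustrative stand-in trading precision cost against scrap cost for a single tolerance value; in an actual tolerance synthesis problem it would invoke a tolerance analysis of the assembly model:

import numpy as np

rng = np.random.default_rng(seed=5)
N_BITS, POP, GENS, P_MUT = 16, 40, 60, 0.02

def decode(bits, lo=0.01, hi=0.20):
    # Map a binary chromosome to a tolerance value in [lo, hi] mm
    return lo + int("".join(map(str, bits)), 2) / (2**N_BITS - 1) * (hi - lo)

def fitness(bits):
    # Hypothetical cost model: tight tolerances cost more to make,
    # loose tolerances cost more in scrap; maximise the negated total
    t = decode(bits)
    return -(0.5 / t + 80.0 * t)

pop = rng.integers(0, 2, size=(POP, N_BITS))     # step 1: random population
for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])   # step 2: evaluate
    # Step 3a, selection: binary tournament
    idx = rng.integers(0, POP, size=(POP, 2))
    parents = pop[np.where(scores[idx[:, 0]] > scores[idx[:, 1]],
                           idx[:, 0], idx[:, 1])]
    # Step 3b, crossover: single-point swap of binary string tails
    cut = rng.integers(1, N_BITS, size=POP // 2)
    children = parents.copy()
    for i, c in enumerate(cut):
        children[2*i, c:], children[2*i+1, c:] = \
            parents[2*i+1, c:].copy(), parents[2*i, c:].copy()
    # Step 3c, mutation: random bit flips to avoid premature convergence
    flip = rng.random(children.shape) < P_MUT
    pop = np.where(flip, 1 - children, children)       # step 4: new population

best = max(pop, key=fitness)
print(f"best tolerance = {decode(best):.4f} mm, fitness = {fitness(best):.2f}")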
Genetic Algorithms are well suited to tolerance synthesis problems due to (Iannuzzi et al.
1995; Forouraghi 2002; Singh et al. 2004; Kumar et al. 2007):
Ability to accommodate both continuous and discrete variables (useful when multiple
discrete cost‐tolerance curves are used).
Good global optimum search performance in large objective spaces.
Multi‐objective optimization capability with large numbers of variables.
Ability to accommodate implicit objective functions or models (e.g. CAD/CAT assembly
models).
Robustness against discontinuities, fluctuations or noise in objective functions or models.
Due to these desirable characteristics, GAs will be applied in this work for tolerance
synthesis (Chapter 5).
2.7 Computer Aided Tolerancing (CAT) tools
A number of commercial Computer Aided Tolerancing (CAT) tools are available which offer
tolerance analysis and synthesis capabilities either as independent software packages, or
more commonly through integration with popular commercial CAD systems (Makelainen et
al. 2001). Some of the available tools include:
CETOL (Sigmetrix)
eM‐TolMate (Tecnomatix)
VisVSA (UGS)
3DCS (Dimensional control systems)
Mechanical Advantage (Cognition software)
TolAnalyst (Solidworks ‐ Dassault Systemes)
CATIA.3D FDT (CATIA ‐ Dassault Systemes)
A few of the tools are based on the tolerance modelling methods discussed in Section 2.4.
For example CETOL was originally based on the Vector‐Loop method, whereas CATIA.3D FDT
is based on the TTRS model (Section 2.4.3); however it is difficult to discern in all cases
which methods are applied due to the proprietary nature of these commercial tools.
Nevertheless, most CAT tools utilize an independent model abstracted from an underlying
CAD model to represent tolerance variation. A typical CAT software tolerance modelling,
analysis and synthesis approach involves (Prisco et al. 2002):
1. Definition of CAD models for each part of the product assembly.
2. Importing of the CAD model into the CAT system and interactively creating tolerance
modelling geometry superimposed on the original CAD data.
3. Specification of tolerance types for features of interest on each part in the assembly.
4. Definition of part relationships that constitute the assembly such as assembly sequence
and mating conditions.
5. Specification of KPCs (e.g. assembly clearances) which must be satisfied in order to fulfil
design requirements.
6. Simulation of the effect of part tolerances on KPCs using a stochastic or worst‐case
tolerance analysis approach.
7. Recording outcomes such as yield and associated tolerance cost.
8. Possible sensitivity analysis to determine most influential part tolerances contributing to
variation in assembly KPC.
9. Subsequently, based on analysis outcomes and CAT tool capabilities, re‐allocating part
feature tolerances to target the total allowable variation in KPCs. The allocation may be
achieved manually, or automated through tolerance synthesis aimed at maximising yield
and/or minimising tolerance cost. The manufacturing cost associated with a particular
tolerance is typically predicted using cost‐tolerance functions (Section 2.2.3.1).
A number of comparative review and survey works of CAT systems have been presented in
the literature. Initial works focused on the limitations of the two‐dimensional geometry
capabilities of the then contemporary systems (Turner et al. 1991). Other review works
focused on CAT tools developed from a research perspective (Chase et al. 1991). A number
of recent reviews offer a more detailed overview of the nature of contemporary commercial
CAT tools (Salomons et al. 1998; Prisco et al. 2002; Chiesi et al. 2003; Zhengshu 2003; Shen
et al. 2005; Shah et al. 2007). A particularly comprehensive review of some of the most
popular CAT tools has noted a number of common capabilities as well as shortcomings
(Prisco et al. 2002). Additional investigation carried out in association with this dissertation
has identified changes in the CAT tool capabilities since the previously published reviews.
Furthermore, current limitations associated with uncertainty quantification method
capabilities and accommodations of assemblies under loading have been identified. Table
2.5 summarises the results.
Table 2.5 ‐ Comparison of commercial CAT tools.
Limitations in current CAT tools are identified in bold.

Feature                                         | CETOL             | eM‐TolMate               | VisVSA                     | 3DCS
Tolerancing schemes
  Dimensional                                   | yes               | yes                      | yes                        | yes
  GD&T                                          | yes               | yes                      | yes                        | yes
  Automatic utilisation of CAD model            | no                | no                       | no                         | no
  defined GD&T data
Tolerance analysis
  Worst‐case                                    | yes               | yes                      | yes                        | yes
  Statistical                                   | yes               | yes                      | yes                        | yes
  Sensitivity analysis                          | yes               | yes                      | yes                        | yes
Uncertainty Quantification methods
  MC                                            | yes               | yes                      | yes                        | yes
  Analytical (Advanced)                         | no                | no                       | no                         | no
Tolerance synthesis capability
  Synthesis capability                          | yes               | yes                      | yes                        | yes
  Multi‐objective optimization                  | limited           | limited                  | limited                    | limited
Simplifying assumptions
  Rigid body                                    | yes               | yes                      | yes                        | partial
  Limit on variation size                       | yes               | no                       | no                         | no
Integration
  Compatible CAD tools                          | SolidWorks, Pro/E | CATIA, NX, I‐deas, Pro/E | Operates outside CAD       | CATIA, Unigraphics,
                                                |                   |                          | environment on translated  | STEP, IGES
                                                |                   |                          | models (STEP, IGES)        |
  Distributed/parallel computing                | no                | no                       | no                         | no
  Integration with external CAE modelling tools | no                | no                       | no                         | no
  Accommodation of assembly loads               | no                | no                       | no                         | limited (compliant sheet
                                                |                   |                          |                            | metal assemblies)
Despite the extensive capabilities of commercial CAT systems, some notable limitations
remain (Table 2.5). For example:
GD&T data defined in the CAD model cannot be automatically imported into the
CAT system due to limitations with CAD geometry translator standards such as STEP or
IGES (ISO 2002).
None of the currently available tools offer distributed/parallel computing capabilities
which can offer reduced analysis times by distributing simulations over multiple
computers.
Lack of ability to accommodate general tolerance analysis and synthesis problems
involving assemblies under loading. CAT tools have been identified that accommodate a
limited subset of physical phenomena such as deformation of sheet assemblies (3DCS
from Dimensional Control Systems). However, in general the abstracted geometric
models employed in current CAT systems become incompatible when dealing with
tolerance analysis and synthesis involving a general class of problems requiring the
numeric simulation of assemblies under loading conducted on CAE models (such as FEA,
CFD, or multi‐body dynamics simulations).
Chapters 4 and 5 of this work address some of the identified limitations through the
application of Multi‐disciplinary Design Optimization (MDO) principles and Process
Integration and Design Optimization (PIDO) software tools (Section 2.8) to tolerance analysis
and synthesis.
2.8 Process Integration and Design Optimization (PIDO)
Multi‐disciplinary Design Optimization (MDO) is an active research field concerning product
design problems involving disparate engineering disciplines. In general MDO can be defined
as a formal methodology that facilitates the integration of interdisciplinary knowledge and
tools to achieve better engineering of overall system design. MDO consists of a number of
conceptual elements including: system modelling, design‐oriented analysis, approximation
concepts, optimization procedures, system sensitivity analysis, and human interface
(Sobieszczanski‐Sobieski et al. 1997).
To aid in MDO, a range of Process Integration and Design Optimization (PIDO) software tools
have been developed. PIDO tools are software frameworks for facilitating the integration of
diverse, discipline specific CAE analysis tools (e.g. FEA or CFD software) for process
scheduling, design of experiments, optimization and statistical simulation analysis.
Interaction between standalone CAE software tools is achieved with parallel or serial
connections and conditional switches, through commonly embedded scripting capabilities
(based on scripting languages such as JavaScript, Visual Basic, Python or DOS script) (Flager
et al. 2009). The scripting capabilities of CAE tools allow for autonomous (as sketched after this list):
modification of model parameters
initialisation of simulations
recording of the obtained simulation results
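This automation pattern can be sketched as follows (Python); the file names and the cae_solver batch command are placeholders standing in for the scripting interface of a particular CAE tool, not references to any actual product:

import csv
import subprocess

def run_design_point(params, results_log="results.csv"):
    # 1. Modify model parameters via a simple key=value input file
    with open("model_input.txt", "w") as f:
        f.writelines(f"{k}={v}\n" for k, v in params.items())
    # 2. Initialise the simulation in batch mode ('cae_solver' is a placeholder)
    subprocess.run(["cae_solver", "--batch", "model_input.txt"], check=True)
    # 3. Record the obtained simulation results
    with open("model_output.txt") as f:
        kpc = float(f.read())
    with open(results_log, "a", newline="") as f:
        csv.writer(f).writerow([*params.values(), kpc])
    return kpc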
The available PIDO tools include both commercial as well as research focused software
packages, some of these are listed below:
modeFRONTIER (ESTECO)
ModelCenter (Phoenix Integration)
iSIGHT (Engineous Software)
Epogy (Synaps)
OPTIMUS (LMS)
FIDO (developed at NASA)
MINOS (Stanford Business Software)
Infospheres Infrastructure (California Institute of Technology)
DAKOTA (Sandia National Laboratories)
The various tools differ in:
Degree of interface ability with external CAE software tools and associated models
Number of possible concurrent interfaces
Design Of Experiments (DOE) implementations
Optimization capabilities (single or multi‐objective, algorithm implementation)
Statistical simulation and analysis capabilities
Data management, exchange, recording, monitoring, visualization and interpretation
capability
Meta‐modelling ability (see Section 2.5.1 for additional details about meta‐modelling)
Distributed/parallel computing capabilities
Level of automation
Computational overheads
Accessibility of user interface and learning curve
Collaborative design support
User support
Extensive evaluations and comparisons of some of the available tools have been reported in
the literature (Sobieszczanski‐Sobieski et al. 1997; Kodiyalam 1998; Malone et al. 1999;
Padula et al. 1999; Kodiyalam et al. 2000; Malone 2001; Piperni et al. 2004; Parashar et al.
2007; Simpson et al. 2008; D. et al. 2009; Flager et al. 2009; Adams 2011). The published
comparisons, in addition to investigations carried out as part of this dissertation, reveal that
significant advances have been made by the MDO research community and PIDO tool
developers in broadening integration, optimization and statistical analysis capabilities.
The emerging abilities of contemporary PIDO tools to integrate standalone CAD and CAE
models with automated parametric control, statistical analysis and multi‐objective
optimization offer attractive capabilities for design analysis and refinement. These
capabilities may offer novel opportunities to enhance the design process, in particular
aspects of tolerance analysis and synthesis problems. These opportunities are discussed in
further detail in the following chapters.
2.9 Tolerance analysis and synthesis of assemblies subject to loads
The functionality of many real‐world assemblies is often defined by how the assembly
behaves in response to some applied action, such as a force, temperature change or
electromagnetic interaction. These actions are generally referred to as loads. Predominant
loads in the design of mechanical assemblies are internally or externally applied forces.
External forces are independent of part mass and are applied to the part boundary; for
example friction and contact forces. Internal forces occur due to inertial effects and are
applied through the centre of mass; for example gravitational and dynamic forces. Internal
and external forces typically act to influence assembly dimensions dependent on part
compliance or assembly functions dependent on friction and dynamic effects (Bedford et al.
2008).
Accommodating the effects of loads in tolerance analysis and synthesis is an active research
field. The main focus of research has been on tolerance analysis of compliant assemblies
with sheet metal assembly applications in mind. The focus corresponds to the high interest
in the quality control of fit and finish of sheet metal body panel assemblies in automotive,
aerospace and domestic appliance applications. The need for a different tolerance analysis
approach for compliant assemblies was highlighted by researchers attempting to model
tolerance stack‐up in automotive body panel assembly lines using production data. Common
assumptions of part rigidity and linear addition of variance in tolerance analysis were
found to be inaccurate, and a more realistic modelling approach was required (Takezawa
1980).
The proposed approaches for accommodating forces in tolerance analysis have often
adopted Finite Element (FE) models for modelling effects associated with forces such as
compliance (Zienkiewicz et al. 2005). Pioneering work in tolerance analysis of compliant
assemblies focused on the assembly of sheet metal with rigid frames (Liu et al. 1995; Liu et
al. 1996; Liu et al. 1997). The approach consisted of modelling the compliance using an
approximated FE model of the sheet and frame assembly constructed with the application
of the method of influence coefficients (Levy 1953). Uncertainty quantification was carried
out using the root sum of squares method (Section 2.5.2.1), applied to an approximating
linear response function. The method was later expanded to consider variation in
production scenarios involving multi‐station assembly systems (Camelio et al. 2003).
Another approach involved investigating the use of surrogate polynomial functions for
representing surface variation in sheet metal to simplify the modelling of the assembly
response function (Merkley et al. 1996). The use of surrogate polynomial functions results in
an explicit assembly response function avoiding the need for FE modelling. The approach
was later generalised to broader applications such as tolerance analysis of compliant spring,
beam and flanged plate assemblies (Merkley 1998). A contribution to this method was also
developed for accommodating surface waviness tolerances of sheet parts using spectral
analysis techniques (Bihlmaier 1999). This approach reduces the need for computationally
expensive FE simulations; however, it is founded on a number of specific assumptions, namely that small variations in geometry have no effect on part stiffness, that all part compliance is linear, and that all tolerances are uniform. These assumptions can be limiting in
broader applications.
A number of additional contributions have been made to tolerance analysis of assemblies
under loading for broader application scenarios. One contribution focused on examining the
variation in stress and compliance arising in bolted connections subject to fastener and hole
alignment tolerances (Gordis et al. 1994). The associated solution procedure utilized
mapping of assembly variation into the frequency domain, with a reduced representation of
the assembled components, to solve a FE frequency domain structural analysis problem. In
other work, the variation in static loading behaviour of composite materials subject to
manufacturing variation was investigated with composite FE models and Monte Carlo based
UQ (Vinckenroy et al. 1995). A similar approach was used in investigations of structural
behaviour subject to large material property variations (Elishakoff et al. 1999).
Consideration of tolerances in free form compliant composite surfaces has also been
investigated (Polini 2011). More recent contributions include a method of simplifying the
geometry of FE models used in tolerance analysis with surrogate models constructed from
simplified compliant beam structures (Shiu et al. 2003) as well as a linearization FE
simplification for tolerance analysis of compliant kinematic linkage mechanisms (Imani et al.
2009).
Some additional contributions to the tolerance analysis of assemblies under loading are
summarised here:
The integration of tooling variation into a tolerance analysis problem involving compliant
assemblies (Hu et al. 2001).
Consideration of the effect of thermo‐mechanical dimensional variation (Pierre et al.
2009). The procedure assumes rigid parts with two possible geometrical states – one
nominal and one under thermal strain. The thermal strain model is resolved using an FE
model. The two different possible states are used in an analytical tolerance analysis
procedure.
Simulation of the effects of welding distortion in tolerance analysis (Lee et al. 2009). The
approach includes the initial creation of a database of expected variation in welding
distortion for different weld parameters using FE welding simulations. The database is
utilised during the UQ tolerance analysis process without the concurrent integration of
FE model simulations.
Adaptation of mesh morphing techniques for FE models of compliant parts in tolerance
analysis (Franciosa et al. 2011). Mesh morphing achieves part geometry changes by
reshaping FE mesh elements at the nodal co‐ordinate level without requiring underlying
CAD model updates. This ability reduces the computational cost associated with parts
CAD model rebuilds and re‐meshing. Limitations of this approach include restrictions on
the allowable magnitude of geometric changes and the continued propagation of any
inherent problems or limitations which exist in the original source mesh (Owen et al.
2010).
The main challenge in the above approaches to accommodating the effects of loading in
tolerance analysis has been the development of a realistic and computationally efficient
tolerance model. Research efforts have focused on reducing the computational cost of the
tolerance model with simplifying assumptions, surrogate FE models of approximated
geometry, or with approximating analytical assembly response functions. However, as
statistical tolerance analysis involves uncertainty quantification techniques requiring
iterative evaluations of the tolerance model (Section 2.5), the overall computational cost
can be lowered not only by reducing the solution time of the tolerance model, but also by
reducing the number of iterative simulations required as part of the adopted UQ method.
The UQ methods applied in the above approaches have mainly included either linearized UQ
techniques (such as root sum of squares (RSS), Section 2.5.2.1) or sampling based methods
(such as Monte Carlo simulation, Section 2.5.1.1). The effectiveness of these methods at lowering computational cost is, however, limited either by poor efficiency (MC ‐ Section 2.5.1.1) or by poor accuracy (RSS ‐ Section 2.5.2.1). There has been a limited emphasis on
investigating the potential for achieving overall computational cost reductions in tolerance
analysis with the application of more efficient analytical UQ methods which have recently
been developed (for example Section 2.5.3).
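To make the contrast concrete: for an assembly response y = f(x1, …, xn), the RSS approach propagates variance through a first‐order Taylor approximation,

σy ≈ [ Σi (∂f/∂xi)² σxi² ]^(1/2)

where σxi is the standard deviation of the i‐th contributing parameter. This requires only a single gradient evaluation, in contrast to the thousands of model evaluations typically demanded by MC sampling; its accuracy, however, degrades for strongly non‐linear response functions.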
Moreover, the tolerance analysis approaches presented in literature for modelling the
effects of loading, predominantly focus on a specific problem scenario such as sheet metal
compliance or welding distortion. Additionally, they require the use of customised
simulation codes and have seen limited implementation in practical tools available for
industry (such as in CAD and FE modelling software). The methods which have been
implemented in available tools (such as tolerance analysis of certain sheet metal assembly
types in the CAT software 3DCS; Table 2.5) are limited to specific scenarios and cannot
accommodate a general class of problem involving variation in assemblies under loading (for
example general structural compliance, multi‐body dynamics, or kinematics).
As such, notable limitations remain in existing methods for accommodating the effects of loads in the tolerance analysis of assemblies.
2.10 Summary of outcomes and opportunities for further work
This chapter has identified the current state of understanding of the knowledge domains
associated with the research scope of this work. Using the expanded understanding that
resulted from this literature review, a number of limitations in existing methods, as well as
gaps in domain knowledge have been identified. The identified limitations offer a number of
research opportunities; these are identified by category below.
Process integration and design optimization:
A review of available PIDO tools (Section 2.8) reveals that significant advances have been
made by the MDO research community and PIDO tool developers in broadening
multidisciplinary integration, optimization and statistical analysis capabilities.
These emerging capabilities of PIDO tools may offer novel opportunities to enhance the
engineering design of mechanical assemblies involving uncertainty or variation in design
parameters, in particular, aspects of tolerance analysis and synthesis problems.
This opportunity will be investigated further in Chapters 3, 4 and 5.
Effects of loads in tolerance analysis:
A number of analytical and numerical methods have been proposed for addressing
tolerance analysis problems in complex product assemblies subject to loads. However, a
review of these methods has identified a number of limitations, including (Section 2.9):
Ability to accommodate only single, specific applications (such as sheet metal
compliance or welding‐distortion).
Reliance on specific, custom simulation codes for tolerance modelling with limited
implementation in practical and accessible software tools.
Need for significant additional expertise in formulating specific tolerance models and
interpreting results.
There remains a lack of a more accessible approach to tolerance analysis of assemblies
subject to loads, which can integrate into the established CAD and CAE modelling design
framework with lower implementation demands.
Computer aided tolerancing tools:
Numerous Computer Aided Tolerancing (CAT) software tools have been proposed for
addressing tolerance analysis and synthesis problems in complex mechanical assemblies
(Section 2.7). Despite the extensive capabilities of existing CAT systems, some notable
limitations remain. For example:
Lack of ability to accommodate general tolerance analysis and synthesis problems
involving assemblies subject to loading. CAT tools have been identified that
accommodate a limited subset of loading effects such as deformation of sheet
assemblies. However, the abstracted geometric models employed in current CAT systems are in general incompatible with tolerance analysis and synthesis involving the broader class of problems requiring numerical simulation of assemblies under loading.
The ability of CAT tools to effectively address sophisticated optimization problems (such
as tolerance synthesis in complex assemblies) with many competing objectives and
constraints is relatively limited compared to dedicated optimization tools.
GD&T data defined in the CAD model cannot be automatically imported into the
CAT system. Consequently, significant additional expertise and effort is required for
formulating CAT specific tolerance models and interpreting simulation results.
None of the currently available tools provide distributed/parallel computing capabilities, which can reduce analysis times by distributing simulations over multiple computers.
Chapters 4 and 5 of this work focus on addressing these limitations.
Tolerance synthesis and uncertainty quantification:
Tolerance synthesis requires iterations of tolerance analysis which significantly compounds
computational costs, particularly if numerical modelling of the effects of loading on
mechanical assemblies is required. A main contributor to high computational cost has been
the traditional approach to Uncertainty Quantification (UQ) in statistical tolerance analysis
which is reliant on robust yet inefficient sampling based UQ methods such as Monte Carlo
(MC) sampling. Attempts to decrease this computational cost have focused mainly on
reducing the solution time of the associated tolerance model with simplifying assumptions,
surrogate FE models of approximated geometry, or with approximating analytical assembly
response functions (Sections 2.6 and 2.9). Introducing such simplifications can however limit
modelling accuracy and versatility of the method.
However, alternative UQ methods with significantly higher computational efficiency than
sampling based methods have recently seen extensive development (Section 2.5.3). A
broadly applicable method is Polynomial Chaos Expansion (PCE). Investigation into the utility
of these methods in tolerance analysis and synthesis has been limited. There is a potential
to exploit this opportunity to significantly reduce the cost of UQ in tolerance analysis, and
thereby increase the practical feasibility of tolerance synthesis in complex mechanical
assemblies. This opportunity is explored in Chapter 5 of this dissertation.
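To illustrate the non‐intrusive, regression based form of PCE, the following Python sketch fits a probabilists' Hermite expansion to a hypothetical scalar response of a single standard normal input and recovers the response mean and variance directly from the expansion coefficients. The response function, sample count and expansion order are illustrative assumptions only, not elements of the methods reviewed above:

import numpy as np
from numpy.polynomial.hermite_e import hermevander
from math import factorial

def f(x):
    # Hypothetical smooth response standing in for an expensive assembly model
    return np.exp(0.1 * x) + 0.05 * x**2

order = 4
rng = np.random.default_rng(0)
x = rng.standard_normal(200)                    # training samples of the input
Psi = hermevander(x, order)                     # basis matrix: He_0(x) ... He_4(x)
c, *_ = np.linalg.lstsq(Psi, f(x), rcond=None)  # least squares PCE coefficients

# For X ~ N(0,1), E[He_i(X) He_j(X)] = i! if i == j, else 0, so the moments
# follow directly from the coefficients without further sampling:
mean = c[0]
variance = sum(c[i] ** 2 * factorial(i) for i in range(1, order + 1))
print(mean, variance)

Once fitted, the expansion replaces the expensive model in subsequent UQ iterations, which is the source of the efficiency gain over direct MC sampling.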
3 DEVELOPMENT OF ENHANCED PIDO
METHODS FOR DESIGN ANALYSIS AND
REFINEMENT
3.1 Chapter summary
The early stages of product development typically involve a lack of knowledge or certainty,
as influential properties may remain uncertain or poorly defined. This chapter identifies
opportunities to enhance the conceptual and embodiment stages of design involving
uncertainty or variation in design parameters. This is achieved by developing novel methods
for the use of PIDO tools in the analysis and refinement of concept design embodiments
with sensitivity analysis, tolerance analysis, DOE methods and optimization. Practical
conceptual and embodiment design problems are considered and effective solutions
developed for a number of industry focused scenarios. The outcomes allow designers to
make informed decisions which positively influence the design early in the design process
while cost commitments are low. The additional design knowledge acquired in one industry
focused design problem influenced design decisions which resulted in class‐leading
performance of a subsequently commercialised product.
3.2 Introduction
Engineering design is a creative and iterative process focused on the development and
implementation of structures, machines or systems. The design process can be represented
as a series of interconnected and interdependent stages (Pahl et al. 2007) (Figure 3.1),
including:
Planning and task clarification:
Definition of technical criteria such as functional requirements, design objectives and
design constraints
Definition of economic criteria
Conceptual design:
Establishment of functional structures
Exploration of feasible solution principles
Combination of solution principles into concept variants
Feasibility assessment against technical and economic criteria
Embodiment design:
Development of concept variants into more comprehensive concept design
embodiments (layouts)
Exploration of the design parameter space for design refinement
Benchmarking of concept designs
Selection of best performing concepts
Detail design:
Detailed specification
Detail models and drawings
Specification of production
The first stage of the design process is aimed at clarifying and comprehensively formalising
the problem under consideration. A subsequent search for solution principles is carried out
and concept variants are established (these are combinations of working principles and
structures which may offer a solution to the problem). The feasibility of concept variants is
assessed against technical and economic criteria. Feasible concepts are developed further
into more comprehensively detailed concept design embodiments (also referred to as
layouts) by exploring the design parameter space to identify: regions of desirable
performance, design weaknesses or errors, sensitivity to disturbing influences and design
refinement opportunities. The refined concept design embodiments can then be subject to
further comparative assessment based on performance benchmarking; leading to a final
concept being selected for detail design specification and implementation.
Figure 3.1 ‐ Steps of planning and design process. Reproduced from Pahl and Beitz 2007 (Pahl et al. 2007).
Contributions of this chapter are identified in red: visualization method for the identification of KPCs based on sensitivity analysis (Section 3.3); computationally efficient manufacturing sensitivity analysis for assemblies with linear‐compliant elements (Section 3.4); refinement of concept design embodiments through PIDO based DOE analysis and optimisation (Section 3.5).
Despite the seemingly serial nature of the design process, it is often iterative in practice
(Pugh 1991; Pahl et al. 2007). The conceptual and embodiment stages of product
development are typically associated with a lack of knowledge or certainty of concept
performance against technical and economic criteria, as many influential properties remain
undefined or uncertain (Suh 1990). As the design process progresses and knowledge of
concept performance is gathered through design analysis and refinement, the designer may
need to reconsider earlier decisions and iterate preceding design process stages.
However, the cost of implementing changes to design decisions increases non‐linearly with
the design process stages and life of the project (Ullman 2003). This non‐linear cost increase
is due to progressively accumulating commitments to high‐capital resources, such as
production plans, tooling, prototyping and testing (Figure 3.2 (ii)). Concurrently, as the
commitment to a specific design increases, the flexibility to enact changes to the design
diminishes (Figure 3.2 (i)). Consequently, a high proportion of the cost of delivering a
product can be attributed to the design decisions made during the conceptual and
embodiment design phases. In response to these observed relationships (Figure 3.2),
engineering design philosophy advises that available design development and analysis
resources should be focused towards the early stages of the design process such as
conceptual and embodiment design (Ullman 2003). This emphasis aims to maximise
knowledge of the design problem and concept performance when flexibility to enact
changes is high, and accrued costs are low. This approach has been referred to as Front End
Loading (FEL) (Artto et al. 2001; Connor et al. 2003; Twigge‐Molecey 2003) in reference to
the emphasised commitment of design and analysis resources early in the project timeline.
FEL is cited as allowing significant improvements in the ability of a project to meet
performance, quality and cost targets (Batavia 2001; Connor et al. 2003).
Figure 3.2 ‐ (i) Design flexibility and knowledge versus project timeline; (ii) cost commitment and accruement during phases of the design process, after (Ullman 2003).
Design analysis and refinement techniques provide an opportunity to increase knowledge of
the design problem early in the project timeline (Figure 3.2 (i)); allowing the designer to
make informed decisions while there is sufficient design flexibility to act without undue
expense (when accrued costs are low i.e. Figure 3.2 (ii)).
Design analysis and refinement techniques include:
sensitivity analysis;
tolerance/robustness analysis;
design of experiments (DOE);
optimization methods.
Despite the high benefit associated with comprehensive analysis and refinement early in
the design process, in practice it may be seen as difficult, time consuming or unreliable to
consider at a time where many influential properties remain uncertain (Smith et al. 1997;
Ebro et al. 2012).
This chapter provides contributions to overcome some of the difficulties associated with
implementing design analysis and refinement techniques in designs involving uncertainty or
variation in design parameters. In particular, the design refinement techniques for the
analysis of the effects of manufacturing variation at the conceptual and embodiment design
stage, may reduce the undesirable cost of managing poor quality later in the design cycle
following the commissioning of manufacturing (Soderberg 1994; Soderberg et al. 1999;
Taguchi et al. 1999). With this objective in mind, tolerance analysis at the conceptual design
stage provides insight into the sensitivity of alternative concept designs to manufacturing
variation and facilitates concept selection with quantitative measures of robustness (for
example Section 3.4). Similarly, sensitivity analysis may be applied to aid in the
identification and prioritization of key product characteristics (KPCs) (for example, Section
3.3) which measure the functional performance of the design, and are used for quality
control during manufacture (Section 3.2.4).
The benefits associated with the exploration of the parameter space of a design through the
utilization of DOE and optimization methods at the conceptual and embodiment design
stages are also notable (Askin et al. 1988; Sobieszczanski‐Sobieski et al. 1997; Simpson et al.
2008). For instance, DOE based parameter studies may reduce the potentially vast design
space of possible conceptual design permutations which can arise from a broad range of
design parameters whose feasible limits are yet to be fully defined. Similarly, optimization
has the potential to identify desirable regions of local and global optimum performance in
the presence of complex constraints and competing design objectives (for example Section
3.5).
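A minimal sketch of such a DOE based parameter sweep, of the kind a PIDO tool automates, is shown below; the parameter names, levels and response function are hypothetical placeholders:

from itertools import product

# Hypothetical design parameters and candidate levels
levels = {
    "rail_thickness_mm": [1.2, 1.5, 1.8],
    "ball_diameter_mm": [5.9, 6.0, 6.1],
}

def evaluate(design):
    # Placeholder for a CAD/CAE evaluation of a single design point
    return design["rail_thickness_mm"] * design["ball_diameter_mm"]

# Full-factorial sweep over all level combinations
results = []
for values in product(*levels.values()):
    design = dict(zip(levels.keys(), values))
    results.append((design, evaluate(design)))

best = min(results, key=lambda r: r[1])  # e.g. select the minimising design
print(best)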
Despite the potential benefits of design analysis and refinement techniques, the exploratory
nature of the conceptual design stage is often associated with limited analysis time budgets
for individual concepts, as time constraints often reserve the expense of comprehensive
analysis efforts for the detailed design stages (Darlington et al. 2002; Pahl et al. 2007). As
such, a justifiable use of design refinement techniques at the conceptual stage needs to
offer reasonably rapid implementation, low analysis cost, as well as accurate and reliable
outcomes. Furthermore, due to the multidisciplinary nature of the CAE modelling tools which may be
utilised in conceptual and embodiment design (such as alternative CAD packages or
simulation codes), the utilization of design refinement techniques needs to accommodate
integration with disparate CAE tools, without excessively burdening the designer with the
distraction of software integration.
The outlined requirements for the utilization of design analysis and refinement techniques
at the conceptual and embodiment design stages align themselves well with the emerging
capabilities of Process Integration and Design Optimization (PIDO) software tools (Section
2.8).
This chapter identifies opportunities to enhance the conceptual and embodiment stages of
design involving uncertainty or variation in design parameters. This is achieved by
developing novel methods for the use of PIDO tools in the selection and refinement of
concept design embodiments with sensitivity analysis, tolerance analysis, DOE methods and
optimization.
The focus of contributions in the context of the design process is identified in Figure 3.1 and
described below:
Selection and refinement of concept design embodiments:
Comprehensive exploration of the parameter space is carried out with DOE analysis and
optimization to identify feasible and optimal regions of performance. These methods have
been developed to identify optimal designs for automotive seat kinematics (Section 3.5).
Evaluate against technical and economic criteria:
The effect of manufacturing variation on the performance of concept design embodiments is assessed with efficient numerical models, tolerance analysis and sensitivity analysis.
These methods enable the benchmarking of the relative performance of alternative
conceptual automotive seat‐rail designs (Section 3.4).
Check for errors and cost effectiveness:
Design weaknesses, sensitivity to disturbing influences and design refinement opportunities
are investigated with sensitivity analysis through the identification of the key product
characteristics in a complex actuator assembly (Section 3.3).
The outcomes allow designers to make informed decisions to positively influence the design
early in the design process while cost commitments are low. Limitations identified in these
methods are the basis of the contributions of the following chapters.
3.2.1 PIDO tools
A range of PIDO software tools have been developed which integrate independent CAD and
CAE software through commonly embedded scripting capabilities. PIDO tools allow for autonomous modification of CAD model parameters, initialisation of CAE simulations, and recording of simulation outputs, facilitating automated parametric studies, statistical analysis and multi‐objective optimization (Sobieszczanski‐Sobieski et al. 1997; Kodiyalam 1998; Kodiyalam et al. 2000; Simpson et al. 2008). These capabilities are described in detail in Section 2.8 and are applied to enhance design analysis and refinement of concept design
embodiments.
3.2.2 Accommodating manufacturing variation in conceptual and embodiment design
Addressing the effects of manufacturing variation early in the conceptual and embodiment
stages of design can help identify designs that are less sensitive to manufacturing variation
and consequently reduce quality costs at the manufacturing stage (Soderberg et al. 1999).
The effects of manufacturing variation on product functionality can be investigated with
tolerance analysis using virtual models (analytical or numerical) of the product assembly
(Section 2.5). However, a tolerance analysis problem requires the definition of assembly
parameters which indicate whether a given manufactured assembly will conform to the
intended functional requirements of the design. The parameters of particular relevance to
functionality are referred to as Key Product Characteristics (KPCs), and are typically
geometric, such as clearances or nominal dimensions² (Section 3.2.4). However, the
complexity which arises in product assemblies with many interacting parts and features can
make the identification of assembly KPCs a challenging task for the designer.
² The concept of KPCs can be extended to include other parameters such as internal or external loads, material characteristics, or mechanical properties (Chapter 4 offers further detail).
3.2.3 Assembly complexity
The complexity of mechanical assemblies can be defined by the number of constitutive
components, the number of component interactions and the intricacy of component
features. These three aspects correspond to three fundamental elements defining
complexity in mechanical engineering design problems (Summers et al. 2010):
1. Complexity as size
Complexity may be defined as the size of the problem. For example, in the commonly cited
Kolmogorov definition, complexity is defined as the size of the shortest algorithm that can
unambiguously define a problem (Stonier et al. 1994; Li et al. 2008).
2. Complexity as interaction
Complexity may be defined as the degree of interaction within the elements of the problem.
This definition of complexity is a function of the interconnectedness between elements (see, for example, Earl et al. 2001; Felgen et al. 2005).
3. Complexity as solvability
Complexity may be defined as the number of operations required to identify a solution that
satisfies the constraints and objectives associated with the problem. The solvability attribute
can be denoted as computational complexity (Wang et al. 2004).
Attempts at the integration of these inherent aspects of complexity for greater
understanding of mechanical assemblies are ongoing (Lesser 2000; Wang et al. 2004;
Summers et al. 2010).
The problem of predicting assembly behaviour subject to stochastic variation is linked to
complexity of interaction through the concept of diseconomy of scale; whereby a system
becomes unwieldy due to the burden of an increasing number of interrelationships (McAfee
et al. 1995; Lin et al. 2005). As the number of interacting parts increases, uncertainty in the
nominal values of assembly parameters can drastically raise the quantity of possible
distinguishable states of the assembly.
3.2.4 Key Product Characteristics (KPCs)
The large number of parameters and interacting components in complex product
assemblies makes it difficult to know which assembly parameters are indicative of
compromised functionality, and how deviations of parameters from the nominal intended
values affect the performance of the assembly. As it can be uneconomical to monitor and
control all product parameters at the manufacturing stage, key product characteristics are
commonly used to specify parameters especially critical to assembly functionality
(Darlington et al. 2002). There is no universally standardized definition of a KPC (Zheng et al.
2008), and various nomenclature is used to describe nuances of the concept within industry
(Lee et al. 1996). For instance, Key Characteristic (KC) is a term also commonly used to refer
to the concept. Despite the lack of a broadly established consensus, the concept of a KPC is
generally understood to be a product parameter whose deviation from a target value is
associated with comparatively high loss in product quality. Under common definitions, a
KPC is also associated with a requirement for quality monitoring and control of that
parameter during manufacturing (Singh 2003). An example of a broad definition of a KPC as
used in industry is one applied by General Motors Corporation; “a product characteristic for
which reasonably anticipated variation could significantly affect the product’s safety or
compliance with governmental standards or regulations, or is likely to significantly affect
customer satisfaction with a product” (Lee et al. 1996). In this work a KPC is defined as: “A
part or assembly parameter especially critical to product functionality and whose deviation
from a target value has a comparatively high loss in quality”.
A generalized implementation of KPCs in industry can be classified into two stages:
(i) identification and (ii) control. The identification of KPCs occurs at the design stage of the
product and is typically conducted using a top‐down approach of product analysis and
decomposition (Ertan 1998). The top‐down approach entails analysis of product functional
requirements and systematically decomposing the product architecture into individual
contributing features or parameters. Product parameters with high influence on assembly
functionality (parameters with a steep quality‐loss function) are classified as KPCs. A quality
control plan must subsequently be established for monitoring, verification, and variation
reduction of KPCs as the product is transitioned from design to manufacturing.
The identification stage of KPC implementation is particularly challenging. Quantitatively
identifying and prioritizing KPCs requires an accessible assembly tolerance model which can
be used to establish the range of variation in part parameters for which assembly
functionality is not compromised. However, an accurate numerical tolerance model may not
be readily available and can be considered excessively difficult to develop, particularly at the
conceptual and embodiment design stages (Thornton 1999). An analysis of physical
prototypes based on a DOE study can be considered as the only viable alternative (Phadke
1989). This approach however is expensive, slow and impractical, especially when a large
number of product parameters are to be considered.
The lack of an accurate assembly tolerance model may lead the designer to a conservative
approach to KPC specification. However, over‐specifying the number of KPCs may result in
uneconomical quality control demands during production. Quality control can become too
cumbersome, potentially requiring production personnel to assess an inordinate number of
parameters (Lee et al. 1996). Conversely, under‐specifying the number of KPCs can lead to
quality loss (Taguchi 1989).
3.2.5 Assembly response function modelling
Modelling the behaviour of a product assembly requires the definition of an assembly
response function which defines the relationship between part and assembly parameters.
The assembly response function may be explicitly defined through an algebraic expression; however, with complex assemblies this may be difficult due to the large number of parts and their complex interactions. The assembly response function may alternatively be defined implicitly through a numerical model such as a CAD assembly or CAE model. CAD based design
is well established, and as such, an implicitly defined assembly response function is usually
available to the designer through CAD models constructed in the conceptual, embodiment
and detail design stage. Inspection of the model using embedded dimension analysis tools
allows the designer to identify any geometric assembly parameters of interest.
3.2.6 CAD tools
CAD modelling software is inherently parametric. Operations such as extrusions, sweeps and
lofts applied to parametric two‐dimensional sketches are used to create geometric three‐
dimensional solid part models. CAD assemblies are defined by multiple CAD parts with
constrained interaction relationships that are re‐evaluated if any part parameters are
modified.
Commercial CAD tools also feature the ability to execute scripted instructions (i.e. macros)
without user input. The CAD software CATIA for instance is able to execute macro
instructions programmed in Visual Basic script; SolidWorks can execute Python script
instructions. The parametric design and scripting capabilities provide a means of
implementing autonomous analysis of CAD model parts and assemblies.
CAD tools commonly provide assembly clash detection capabilities which report the location
and magnitude of unintended part contact or interference. This interference detection
capability is applied in this work to identify assembly regions where unforeseen part feature
interference may violate functional requirements.
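As an illustrative sketch only, the fragment below outlines how such scripted interference detection might be driven; the module cad_api and all of its calls are hypothetical placeholders for a commercial CAD macro interface, not an actual API:

import cad_api  # hypothetical CAD automation interface (placeholder)

# Open the assembly, perturb a dimension and rebuild the constrained assembly
assembly = cad_api.open_assembly("actuator_concept.asm")
assembly.set_parameter("collar_inner_diameter", 16.05)
assembly.rebuild()

# Report each unintended part contact or interference detected by the CAD tool
for clash in assembly.detect_interferences():
    print(clash.part_a, clash.part_b, clash.volume)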
3.2.7 Uncertainty quantification strategy
To identify the effects of part parameter variation on the assembly in a virtual model
environment, a method of propagating the expected part parameter distributions through
the assembly response function is required. A variety of Uncertainty Quantification (UQ)
methods have been developed (Section 2.5). Monte Carlo simulation was applied in this
work as it is robust and readily implemented. The Monte Carlo method consists of the
generation of a feasible set of part parameter values from a random sample of their
respective probability distributions, and the evaluation of the resulting assembly response.
As the number of evaluations increases, the resulting sample distribution of assembly
responses approaches that of the population. Approximately 1000 samples are required to
provide acceptable accuracy in an assembly tolerance analysis problem (Gao et al. 1995).
Monte Carlo simulations of 3000 evaluations were applied in the test case used in this
research (Section 3.3.2) to ensure reliable outcomes.
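A minimal sketch of this Monte Carlo strategy is given below, with a simple explicit clearance function standing in for the CAD assembly response; the distribution parameters are illustrative only:

import numpy as np

rng = np.random.default_rng()
n_samples = 3000  # sample size used in the test case (Section 3.3.2)

# Hypothetical mating part dimensions sampled from their process distributions
shaft = rng.normal(16.00, 0.046, n_samples)
bore = rng.normal(16.10, 0.046, n_samples)

clearance = bore - shaft  # explicit assembly response function
print("mean clearance :", clearance.mean())
print("std deviation  :", clearance.std(ddof=1))
print("% interference :", 100 * np.mean(clearance < 0))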
3.3 Visualization method for the identification of KPCs based on sensitivity analysis
Identifying assembly parameters, whose deviation from a nominal value significantly affects
assembly functionality, may not be obvious if a large number of parameters and interacting
components are present. The ability to identify such key product characteristics (KPCs) early
in conceptual and embodiment design increases knowledge about how assembly
functionality may be monitored and managed in the presence of manufacturing variation.
Design changes which improve the ability to manage variation can then be accommodated
at a project stage where design flexibility remains high and accrued costs are low.
Identifying KPCs can however be challenging due to the difficulty in predicting behaviour in
complex assemblies without an accessible assembly tolerance model (Section 3.2.4) and without the ability to visualize model behaviour under variation (Dahl et al. 2001). Although computer
aided tolerancing tools (CAT) are available for modelling variation in assemblies and aiding
in the identification of KPCs, they require additional tools, modelling and expertise (Section
2.7). The additional effort is associated with the requirement to create an abstracted
geometry model separate to any existing CAD model of the concept design before any
analysis can be carried out (Prisco et al. 2002).
To aid designers in identifying assembly KPCs at the concept stage with low additional
modelling effort, a PIDO tool based visualization method has been developed which involves
identifying and visualizing unforeseen and unintended part interactions within the native
CAD product design environment. The method utilises PIDO tools and functionality
commonly available within commercial CAD software to simulate manufacturing variation
effects on the part parameters of an assembly and monitor assembly clearances, contacts or
interferences. KPCs are identified through design analysis and refinement based on sensitivity analysis.
Since it is common practice for designers to carry out design modelling with the use of CAD
tools, a numerical CAD model of the design assembly is often readily available at the
conceptual and embodiment design stages. If the CAD model has been constructed to be
robustly parametric, and if assembly constraints have been defined which ensure the
required mating relationships between part features, then the CAD model may be adapted
as an assembly tolerance model for aiding in KPC identification with the utilization of PIDO
tools.
A method for adapting a CAD model as an assembly tolerance model for aiding in KPC
identification with the utilization of PIDO tools is proposed in Figure 3.3. It integrates the
concepts of PIDO tools (Section 3.2.1), assembly response function modelling (Section 3.2.5)
CAD tools (Section 3.2.6) and uncertainty quantification (Section 3.2.7).
Figure 3.3 ‐ Visualization approach for the identification of KPCs within the native CAD design environment
using PIDO tools.
Initially, the CAD part and assembly model parameters are integrated with a PIDO tool
workflow. Automated monitoring of assembly parameters potentially relevant to the
functionality of the assembly is established with the definition of measurements of
assembly dimensions such as clearances (this is facilitated by measurement tools embedded
in most CAD software). Model parameters are subsequently subjected to expected
manufacturing variation using UQ techniques such as Monte Carlo sampling. For each
assembly instance, assembly clash and interference analysis capabilities common in CAD
software are executed using a user script to automatically identify any unexpected part
interferences. The variation in model parameters results in variation envelopes of part
geometry which represents the potential range of part dimensions and positions within the
assembly. The simulated assembly models are automatically stored for reference. A
Student's t‐test based effect size measure (Jackson 2011) is subsequently used to quantify the sensitivity of the monitored clearances and unexpected interferences in the assembly to variation in each part parameter. The frequency of interference between assembly
parts is calculated. The stored assembly models and sensitivity analysis results are
subsequently reviewed by the designer to visualize the effects of the variation envelopes
and address any identified problems. The identified problems may be corrected by adjusting
model geometry to allow space for variation, and monitored by designating the associated
assembly parameters as KPCs.
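The final sensitivity step can be sketched as follows, using Cohen's d as a representative t‐test based effect size (the measure applied in this work follows Jackson 2011); the sampled data and interference outcomes here are synthetic:

import numpy as np
from scipy import stats

rng = np.random.default_rng()
param = rng.normal(16.00, 0.05, 3000)  # Monte Carlo samples of one parameter
# Synthetic interference outcome for illustration: large values tend to clash
interferes = param + rng.normal(0.0, 0.02, 3000) > 16.06

a, b = param[interferes], param[~interferes]
t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2.0)
effect_size = (a.mean() - b.mean()) / pooled_sd  # positive: larger values
print(t_stat, p_value, effect_size)              # drive interference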
The method improves the general assembly robustness of concept design embodiments by
increasing the ability of the designer to monitor and suppress the effects of manufacturing
variation from the early concept stage.
3.3.1 Potential limitations
Although the method enables insight into unforeseen and undesirable part interactions, it
may suffer from limitations associated with applying parametric CAD for simulating the
effect of manufacturing variation. Limitation include such issues as adherence to geometric
and dimensional tolerancing standards and problems in simulating intermittent part contact
(Section 2.4.2) (Mazur et al. 2011).
Commercial Computer Aided Tolerancing (CAT) tools have been developed that can
overcome some of these identified issues, however this approach requires the development
of a specialised CAT model of the parts and assembly (Makelainen et al. 2001). The
proposed methodology is applied directly to the CAD models developed for concept design
and development, and can be directly integrated with the design process without the need
for additional modelling or separate CAT software.
3.3.2 Case Study 3.1 – Visualization method for identification of KPCs in a concept
design of an automotive actuator assembly
The visualization method presented in Section 3.3 was applied to a conceptual embodiment
of an actuator assembly to aid in the task of identifying unintended part interactions and the
associated assembly KPCs. The assembly is an actuator used for automated folding of
automotive vehicle side view mirrors. Due to the commercially sensitive nature of the
design, some specific details concerning the working of the actuator are not disclosed.
3.3.2.1 Process data
To estimate expected dimensional variations, the existing moulding process used for the
manufacture of the test case assembly components was analysed. The manufacturer
specifies assembly component dimensional tolerances on injection moulded components
according to standard recommended specification limits (DIN 1982).
The manufacturer’s injection moulding process was analysed by metrological inspection of
1600 production part samples from a commercial component with a comparable material
and manufacturing process to the product under analysis. Component samples from four
different moulding cavities were measured at five critical locations of equal target
dimensions. The results were combined together and process capability indices (Section
2.3.4) were determined for the overall moulding process performance across all
components and all cavities (Figure 3.4 and Table 3.1). The measured PCIs were used as
input for the test case analysis.
Figure 3.4 ‐ Histogram of measured production component used to establish PCIs
Table 3.1 ‐ Process capability data of measured component (Figure 3.4).
Results based on combined measurements across all locations and from all moulding cavities.
Parameter Target Achieved
Mean (mm) 16 15.986
σ (mm) ‐ 0.0457
LSL (mm) 15.850 ‐
USL (mm) 16.150 ‐
Cp 1.00 1.10
Cpk 1.00 1.01
Cpm 1.00 1.05
% < LSL 0.15 % 0.15 %
% > USL 0.15 % 0.02 %
% Total out of spec. 0.30 % 0.17 %
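For reference, the indices in Table 3.1 follow from the standard process capability definitions; the sketch below reproduces them approximately from the tabulated mean and standard deviation (small discrepancies are attributable to rounding of the reported inputs):

import math

mu, sigma = 15.986, 0.0457          # measured mean and standard deviation (mm)
target, lsl, usl = 16.000, 15.850, 16.150

cp = (usl - lsl) / (6 * sigma)                                    # ~1.09
cpk = min(usl - mu, mu - lsl) / (3 * sigma)                       # ~0.99
cpm = (usl - lsl) / (6 * math.sqrt(sigma**2 + (mu - target)**2))  # ~1.05
print(cp, cpk, cpm)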
3.3.2.2 PIDO integration
The functional requirements of the actuator result in a design embodiment involving a
sophisticated kinematic interaction of a number of components with functional ramps, each
with a number of different relative arrangement scenarios. As such, the conceptual design
phase of the actuator required a comprehensive CAD model as a proof of concept of the
expected functionality. The parametric CAD assembly model of the concept actuator
assembly developed by the designers (Figure 3.5) was interfaced with a PIDO workflow
(Figure 3.6) in accordance with the proposed visualization method (Section 3.3). A feasible
set of part parameter values was initialised from a random sample of their respective
probability distributions and subjected to a Monte Carlo simulation (3,000 samples were
used).
Figure 3.5 ‐ Parametric CAD assembly model of concept actuator design
Figure 3.6 ‐ PIDO workflow for visualization methodology for identification of KPCs ‐ Actuator assembly.
3.3.2.3 Results
A number of undesirable part clash and contact instances were identified as part of the
simulation. The interference data recorded for each identified scenario detailed the nature
of part interactions (whether a contact or clash). Images were recorded for each interaction
showing the relevant region of the assembly. The frequency of interference between part
pairs was determined (Figure 3.7). A Student's t‐test was conducted to quantify the sensitivity of the number of interference occurrences to variation in each part parameter (Figure 3.8). A
positive effect size indicates that an increase in the parameter value results in interference; a negative effect size indicates the inverse relationship. Sensitivity is indicated by the magnitude of the effect size.
Figure 3.7 ‐ Frequency of interference between assembly parts (number of interferences per interacting part pair).
Figure 3.8 ‐ Part parameter sensitivity to interference (effect size per part parameter).
The simulation results identified assembly regions prone to unwanted part interference
(Figure 3.9). The results were used by the designers to specify assembly KPCs which avoid
the identified interference scenarios. The identified regions of interference were not
anticipated by the designers, despite their experience with designs of this type and
thorough analysis of the concept design embodiment. The specified KPCs were used in
subsequent tolerance analysis of the assembly.
Figure 3.9 ‐ Assembly regions identified as being prone to unwanted part interference. KPCs were defined to avoid the identified interference scenarios:
Variation in spigot tube outer diameter and inner diameter of interacting collar can result in an interference that prevents the rotation of the collar around the spigot. Suggested KPC is an enforced diametral clearance measurement between spigot tube outside diameter and collar inner diameter.
Variation in spigot tube vertical channel width and mating collar notches can result in interference preventing translation of the collar along the spigot. Suggested KPC is an enforced clearance measurement between spigot tube vertical channel and collar notches.
Variation in the angular arc length of the radial collar ring (blue feature) can result in an interference with functional surfaces of the engaging slider (rectangular part), compromising required functionality. Suggested KPC is a measure of collar ring arc length.
Excessive horizontal width of the vertical ramp face on the gear (upper part) can cause contact with the vertical ramp faces of the engaging slider (lower part), undesirably reducing the possible vertical travel of engaging ramps. Suggested KPC is a measurement of upper ramp width.
Variation in inner and outer diameters of interacting collars can result in interference preventing rotation. Suggested KPC is an enforced diametral clearance measurement between inner and outer collars.
An excessive depth of collar (top part shown in green) can result in contact of the horizontal bottom surface of the collar with the base, preventing functional ramp faces from engaging. Suggested KPC is an enforced clearance measurement between bottom of collar and top of base.
3.3.3 Discussion of results
A PIDO tool based visualization method has been developed to aid designers in identifying
assembly KPCs at the concept embodiment design stage. The method integrates the
functionality of commercial CAD software with the process integration, UQ, data logging and
statistical analysis capabilities of PIDO tools, to simulate manufacturing variation effects on
the part parameters of an assembly and visualise assembly clearances, contacts or
interferences. KPCs are identified by design analysis and refinement based on sensitivity
analysis. The approach has a number of benefits:
Visualization is carried out using native CAD models, which are often readily available at
the concept embodiment design stage, requiring low additional modelling effort.
Utilization of embedded measurement and interference analysis capabilities in CAD assembly environments, offering rapid implementation.
Visualizing variation within the assembly may aid the designer to specify critical
assembly dimensions as KPCs for monitoring, as well as adjust nominal dimensions of
part and assembly features to provide space for expected variation and maintain
correct functionality.
Addressing the robustness of concept design embodiments to manufacturing variation
from the early concept design stages with low additional modelling effort. This can
reduce the undesirable cost of managing poor quality later in the design cycle following
the commissioning of manufacturing.
The benefit of the proposed method has been validated in an industrial case study by
enabling the automated identification of unintended component interactions in a concept
embodiment design of an automotive actuator assembly. These interactions had not been
anticipated by the industry partner, despite their experience with designs of this type. The
increased knowledge enabled the industry partner to establish a series of KPCs that were
used to monitor these unintended part interactions early in the conceptual embodiment
phase, while the associated design flexibility was high and accrued cost commitments were low.
The method has been demonstrated to enhance the design process by offering rapid
implementation, low analysis cost as well as accurate and reliable outcomes.
3.4 Computationally efficient manufacturing sensitivity analysis for assemblies with
linear‐compliant elements
Addressing the effects of manufacturing variation during conceptual and embodiment
design can reduce the costs of managing poor quality later in the manufacturing stage when
the ability to enact change is limited (Bergman et al. 2009; Ebro et al. 2012) (Section 3.2.2).
In particular, design analysis and refinement based on tolerance analysis of concept design
embodiments can provide insight into the sensitivity of alternative concepts to
manufacturing variation, and facilitate concept selection with quantitative measures of
robustness. This early increase of design knowledge allows the designer to make informed
decisions addressing the effects of variation, while there is sufficient design flexibility to act
without much expense (Figure 3.2 ii).
Estimating the effects of variation on assembly functionality is typically achieved with
tolerance analysis based on a computational assembly tolerance model (Chase 1988). If the
functionality of the assembly is subject to applied loads, estimating the effects of variation
typically requires computationally expensive Finite Element (FE) simulations to model
physical effects such as compliance or stress (Section 2.9). As tolerance analysis requires
multiple evaluations of the assembly tolerance model (Section 2.5), the overall
computational costs are compounded. With an efficient tolerance model, the computational
complexity and expense of quantifying the effects of variation can be reduced allowing a
greater number of simulations and analyses in a given time budget. This can increase
knowledge early in the design process where analysis time budgets for individual concepts
are limited (Section 3.2).
This section presents a method of reducing the computational cost of analysing the effects of variation in linear‐compliant assemblies under loading, using an efficient assembly tolerance model. This method:
Significantly reduces simulation costs by representing linear‐compliant elements with a
set of equivalent constant rate springs.
Utilises PIDO tools to allow reuse of CAD models created in the conceptual and
embodiment design stages, thereby reducing the cost of creating the assembly
tolerance model.
This method is applied to the benchmarking study of alternative automotive seat rail
assembly concept embodiments to quantify their sensitivity to manufacturing variation. The
additional knowledge gathered as part of the benchmarking study enables designers to
proceed into the detail design stage with higher certainty of performance at low
additional analysis expense.
3.4.1 Manufacturing sensitivity analysis of automotive seat rail assemblies
Automotive seating structures are subject to a constrained set of comfort and safety
demands requiring the accommodation of anthropometric variation of users while meeting
safety standards under crash scenarios (Leary et al. 2010). Seat position adjustment in
multiple degrees‐of‐freedom (DoF) facilitates the location of the user within the vehicle
cabin in a comfortable and functional seating position. An essential DoF required by all
seating structure designs is the fore and aft movement of the seat. As automotive seating
structures have evolved over an extended development period, there has been a
convergence of practical embodiments. Accordingly, fore and aft movement is typically
achieved using a rolling rail assembly consisting of two interlocking rail sections. Due to the
stochastic nature of manufacturing processes, rail assembly performance is affected by
manufacturing variation. For low cost markets, latitude in manufacturing variation is
desirable. For mature markets, predictable and repeatable functional efforts take priority.
Such rail assemblies have historically been designed solely with strength and material use in
mind. Consideration of customer perceived quality (such as variation in functional effort)
was not prioritized at the conceptual design stage, as it was typically considered as too
difficult and time consuming. However, assessing customer perceived quality later in the
design cycle following the commissioning of manufacturing when design flexibility is low,
incurs significant cost penalties if the selected concept is sensitive to manufacturing
variation. The quality of the customer experience has then to be managed at the
manufacturing stage for the life of the product with costly counter measures.
Accommodating the effects of manufacturing variation early in the development cycle is
important for achieving competitive quality, cost and development time objectives for a
range of target markets (Phadke 1989; Park 1996; Wu et al. 2000).
This work presents a benchmarking study of alternative conceptual embodiments of
automotive seat rail assemblies according to their sensitivity to manufacturing variation i.e.
robustness (Figure 3.10 and Table 3.2). All rail assemblies consist of two interlocked steel
rail sections (with symmetric or asymmetric profiles) separated by rolling elements
(spherical and cylindrical). The upper and lower rail sections are elastically preloaded by an
interference fit with the rolling elements. Variation in the geometric parameters of the rail
section affects the magnitude of the elastic rail preload and consequently the rolling effort
of the rail assembly.
Rolling effort is of significant importance to customer perceptions of product quality, and
must be:
sufficiently high to avoid chatter in the rail assembly
sufficiently low to allow the rails to move without excessive effort
Due to these conflicting requirements, rolling effort is highly sensitive to manufacturing
variation, which results both in large scale batch‐to‐batch variation between assembly production batches and in piece‐to‐piece variation within single assemblies. Large scale
batch‐to‐batch variation can be accommodated using alternative rolling element diameter
increments between batches. Piece‐to‐piece variation however, is more difficult to
accommodate as it requires alternative rolling element diameters for each assembly. The
aim of this work is to benchmark alternative rail assembly profiles according to their
sensitivity to rolling effort variation in the presence of piece‐to‐piece manufacturing
variation.
Rolling effort is defined by rolling element clearance, rolling element contact force and the
associated coefficient of rolling resistance (Williams 1994). As such, the three benchmarking
parameters considered in this study are:
variation in the coefficient of rolling resistance
variation in rolling element contact force
variation in rolling element clearance
These are discussed in the following sections.
Figure 3.10 ‐ (i) Automotive seat; (ii) seat rail assembly; (iii)‐(vii) alternative rail assembly section views (rail assemblies A to E).
Table 3.2 ‐ Rail assembly designs considered in benchmarking analysis.
Rail assembly design   Style        Balls                  Rollers
A                      Symmetric    2 (upper), 2 (lower)   nil
B                      Asymmetric   1 (upper), 1 (lower)   1 (lateral)
C                      Symmetric    2 (upper), 2 (lower)   nil
D                      Symmetric    2 (upper), 2 (lower)   nil
E                      Symmetric    2 (upper)              2 (lower)
3.4.2 Variation in coefficient of rolling resistance
Variation in the coefficient of rolling resistance is characterised by material properties and
localised non‐elastic effects associated with deformation, surface adhesion, and micro‐
sliding of the contact surfaces (Williams 1994). The variation in these effects is expected to
be similar for alternative rail section profiles due to the use of similar materials,
manufacturing processes and surface finish. As such, the variation in coefficient of rolling
resistance is negligible and will not be considered in this work.
3.4.3 Variation in rolling element contact force
Due to the complicated geometry of the rail assembly, estimating the rail contact force with
a virtual model requires a finite element (FE) simulation of the interference fit of the rail
sections with the rolling elements. Quantifying the variation in contact force, which results
from the manufacturing distributions associated with geometric rail section parameters,
requires a statistical tolerance analysis approach based on uncertainty quantification (UQ)
methods (Section 2.5). UQ methods typically require a large number of model evaluations to
provide sufficiently accurate distribution estimates (Gao et al. 1995). Consequently,
estimating the variation in rolling element contact force with FE models and traditional
statistical tolerance analysis imposes significant computational costs. Furthermore, there
are limited tools available for conducting statistical tolerance analysis of assemblies subject
to loading (Section 2.9). The difficulties with analysis tool limitations and high computational
cost, in such tolerance analysis problems of assemblies under loading, are addressed in
Chapters 4 and 5 of this dissertation. In this chapter however, an alternative approach is
developed which avoids these limitations by taking advantage of:
Linear‐elastic behaviour of the folded sheet metal components of the automotive seat
rail assembly
Incompressibility of rolling elements (relative to sheet metal components)
The objective of this work is to benchmark the sensitivity of alternative designs to
manufacturing variation, in particular the variation in rail rolling effort. Consequently, it is
not necessary to estimate the absolute variation in performance of each design, but only
the performance relative to other designs.
Due to the above conditions of rail linear‐elasticity and incompressible rolling elements, it is
hypothesised that the comparative variation in rolling element contact force between designs
can be assessed by comparing only the stiffness of the rail assemblies. As such, the assembly
tolerance model can be made significantly more efficient.
3.4.3.1 Linear‐compliant rail representation
Figure 3.11 depicts the derivation of an efficient contact force variation model for
symmetric and asymmetric rail assemblies. Initially the rail assemblies are conceptually
simplified to their functional representation, clarifying the underlying loading relationship
between the rail sections and rolling elements (Figure 3.11 (1)).
Figure 3.11 ‐ Linear compliant rail simplification.
The rails are deflected upon assembly by rolling elements which are slightly oversized; this
deflection is elastic, as validated with a computational model in Section 3.4.3.2. Due to the
elastic loading condition, the relationship between the net contact force applied to the rail
section and the associated deflection is linear. The relationship between contact force and
deflection is defined by the stiffness, i.e. the rate of change in rolling element preload due
to a change in rail section deflection:
F = k·δ     (3.1)

where
F : contact force
k : rail assembly stiffness
δ : deflection

The overall stiffness of the rail assembly (k) can be determined by measuring the contact
force for a known deflection (Figure 3.11 (4)). A known deflection can be imposed by
oversizing the rolling elements with a diametral interference amount Δd, i.e.

δ_u + δ_l = F(1/k_u + 1/k_l) = Δd     (3.2)

where
δ_u, δ_l : deflection of upper and lower rail, respectively
k_u, k_l : stiffness of upper and lower rail, respectively
d : rolling element diameter
Δd : rolling element diametral interference

Rearranging (3.2), an expression for the overall rail stiffness is obtained:

k = F/Δd = (1/k_u + 1/k_l)⁻¹     (3.3)
Variation in rolling element contact force can therefore be quantified by the stiffness of the
rail section assembly. The stiffness can then be used as a highly efficient assembly tolerance
model of the sensitivity of the rail rolling effort to manufacturing variation. A rail profile
which displays a low stiffness is desirable as it accommodates part‐to‐part variation with
little change in rolling effort. To quantify rail assembly stiffness, a Finite Element (FE)
simulation was conducted for various rolling element increment sizes.
3.4.3.2 FE contact force model
A parametric Finite Element (FE) model of each rail assembly was constructed to simulate
the rail deflection due to an interference fit with the rolling element (Figure 3.12 and Figure
3.13). For symmetric rails the model was constructed to consider one‐half of the symmetric
rail profile. For asymmetrical profiles the entire rail assembly was considered. An oversized
rolling element was initially inserted into the assembly resulting in an interference fit. The
associated interference caused an initial state of imbalance in contact force between the rail
sections and rolling elements. A simulation was subsequently initiated in which the rail
sections deflected under the contact force imbalance until contact force equilibrium was
reached. The resultant equilibrating contact force was integrated over the contact surfaces
and recorded. The associated rail deflection corresponded to the amount of initial rolling
element interference.
The simulation was carried out for three progressively increasing values of rolling element
interference, corresponding to increases of 0.1mm in the diameter of the rolling elements.
The three resultant values were used to determine the rail assembly stiffness (Section
3.4.7). The average individual simulation time was approximately 400 seconds on a single‐
core 3 GHz CPU.
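
To make the stiffness estimation step concrete, the following minimal sketch fits Equation 3.3 to the three (interference, force) pairs by a least‐squares line through the origin. The contact force values are hypothetical placeholders; in the study they are read from the FE solver results.

```python
import numpy as np

# Diametral interference values applied in the three FE simulations (mm),
# and the equilibrating contact forces they produce (N). The force values
# are hypothetical placeholders for the FE solver outputs.
interference = np.array([0.1, 0.2, 0.3])        # Δd (mm)
contact_force = np.array([35.0, 71.0, 106.0])   # F (N)

# Per Equation 3.3, k = F/Δd for a linear-elastic rail. A least-squares
# fit through the origin pools the three simulations into one estimate.
k = np.sum(interference * contact_force) / np.sum(interference ** 2)
print(f"Estimated rail assembly stiffness: {k:.1f} N/mm")
```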
The FE model was also used to validate the linear elasticity assumption of rail compliance as
mentioned in section 3.4.3.1. A worst‐case condition was simulated consisting of oversizing
the rolling element by 1 mm and noting the peak stresses in the rail sections. The peak
stresses were of the order of 20 MPa, well below the elastic limit of the steel material,
validating the linear‐compliant rail representation.
FE model details (rail assembly B):
Element type | Linear quadrilateral plane stress elements (CPS4I)
Number of elements | 15,500 (average)
Contact constraints | Surface‐to‐surface contact with limited sliding (upper and lower rolling elements)
Boundary constraints | Horizontal translation only (X rotation = 0, Y rotation = 0, Z displacement = 0)
Figure 3.12 ‐ FE model details (rail assembly B). All dimensions in mm.
Figure 3.13 ‐ Rail deflection due to interference fit of rolling element (rail assembly B).
Contact area shown in detail.
3.4.4 Variation in rolling element clearance
Variation in rolling element clearance may be assessed by tolerance analysis which
accommodates expected manufacturing process capabilities. It is highly desirable that the
maximum variation between the upper specification limit (USL) and lower specification limit
(LSL) sizes of the rolling elements be small, as a small range accommodates part‐to‐part
variation within a rail assembly with little change in rolling effort.
Statistical tolerance analysis was conducted by integrating parametric CAD models of the
alternative rail section assemblies with PIDO tool capabilities (Sections 3.2.1 and 2.8). The
integration was achieved according to a PIDO tool based tolerance analysis platform (Figure
3.14). This platform is developed in Chapter 4 of this dissertation (Section 4.3). The solution
was rapidly implemented, as it allowed the reuse of parametric CAD models constructed in
the conceptual and embodiment design, rather than requiring additional models. Utilizing
existing CAD data for tolerance modelling significantly reduces modelling expense and the
need for additional expertise and tools.
Figure 3.14 ‐ PIDO workflow associated with seat rail benchmarking study.
Estimates of expected variation in the rail section geometries were applied to quantify the
expected manufacturing distribution for each parameter. Due to a lack of production
process capability data, conservative estimates of expected manufacturing variation in rail
sections were provided by the industry partner (Table 3.3). As the focus of the study was to
compare sensitivity to manufacturing variation solely based on differences in the geometric
configuration of the rail profile, all benchmarked rails were subject to the same
manufacturing variation. Variation in linear dimensions and material thickness was not
considered.
Table 3.3 ‐ Rail section parameter variation specified by industry partner and used in statistical tolerance analysis
Parameter | Specification limits (±) | Cpm | σ
Bend radii | 0.1 mm | 1 | 0.033
Bend angle | 1° | 1 | 0.333
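
For reference, the σ values in Table 3.3 are consistent with an on‐target process capability definition; assuming Cpm = (USL − LSL)/(6σ), the standard deviations follow directly from the specification limits:

σ = (USL − LSL)/(6·Cpm)
Bend radii: σ = (2 × 0.1 mm)/(6 × 1) ≈ 0.033 mm
Bend angle: σ = (2 × 1°)/(6 × 1) ≈ 0.333°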
Parametric models of rail sections were subjected to parameter variation identified in Table
3.3. Each rail assembly was subjected to a statistical tolerance analysis based on a Monte
Carlo (MC) simulation of 1000 samples. Studies suggest that 1000 samples provide sufficient
accuracy in an assembly tolerance analysis problem (Gao et al. 1995). Based on the applied
variation, upper and lower specification limits for rolling element diameters were identified
in order to achieve a Cpm of 1.
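
A minimal sketch of this sampling step is shown below, assuming normally distributed parameters with the σ values of Table 3.3. The clearance function and nominal dimensions are hypothetical placeholders; in the actual platform the assembly response is evaluated implicitly by rebuilding the parametric CAD model for each sample.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000  # sample size indicated as sufficient by Gao et al. (1995)

# Standard deviations from Table 3.3 (Cpm = 1).
sigma_radius, sigma_angle = 0.033, 0.333

# Sample rail section parameters about hypothetical nominal values.
bend_radius = rng.normal(3.0, sigma_radius, N)   # mm
bend_angle = rng.normal(90.0, sigma_angle, N)    # degrees

def clearance(radius, angle):
    """Placeholder assembly response function; the real response is
    implicit in the CAD assembly (rolling element gap measurement)."""
    return 6.0 + 0.5 * (radius - 3.0) - 0.02 * (angle - 90.0)

kpc = clearance(bend_radius, bend_angle)
print(f"clearance: mean {kpc.mean():.4f} mm, std {kpc.std(ddof=1):.4f} mm")
```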
3.4.5 Assumptions
The conducted analysis was subject to a number of assumptions in order to allow for
reasonable scope of analysis within limited analysis time:
Due to a lack of production process capability data, conservative estimates of expected
manufacturing variation in rail sections parameters were applied (no variation in linear
dimensions or material thickness was applied).
Variation in symmetric rail sections was assumed to be equal on either side of the axis of
symmetry.
The plane stress FE model only considers the two‐dimensional cross‐section of the rail at
the rolling element contact location. This simplification results in the estimated
magnitude of contact force being based on deformation of the full rail length, rather
than point contact. However in reality, the contact scenario is that of a sphere and a
surface. The approximation was used to limit the FE model simulation time to a
practically manageable size, due to the significant complexity of the more realistic
scenario.
Rail stiffness was assessed at three points. Stiffness shows a linear trend for moderate
rail displacement.
Rail stiffness was assessed for spherical (ball) elements only, not cylindrical elements.
3.4.6 Results
3.4.6.1 Rail assembly A
Rail assembly A is a symmetric design with spherical (ball) rolling elements (Figure 3.10 (iii) and Table 3.2).
Contact force was calculated for the scenarios of Table 3.4 and summarized in Section 3.4.7.
Table 3.4 ‐ Ball dimensions used for contact force simulation in rail assembly A
Scenario | Ball interference (mm) | Upper ball diameter (mm)
Simulation 1 | 0.1 | 6.044
Simulation 2 | 0.2 | 6.144
Simulation 3 | 0.3 | 6.244
A statistical tolerance analysis was conducted for rail assembly A in order to identify the
expected clearances at the rolling element locations. A parametric CAD model of the rail
profiles was used for the analysis. The separation distance between the vertical extremities
of the upper and lower rail was held constant while the rail section parameters were
subjected to a MC simulation (Section 3.4.4). The resultant distributions of rail clearances
are shown in Figure 3.15 and the identified specification limits in Figure 3.16. The results are
summarized in Section 3.4.7.
Figure 3.15 ‐ Rail assembly A rolling element clearance distribution. (i) Upper ball (ii) Lower ball.
Specification limits: Upper Ball (UB) 5.457–6.543 mm; Lower Ball (LB) 5.481–6.319 mm.
Figure 3.16 ‐ Rail assembly A rolling element specification limits. Shaded profile corresponds to nominal rail
dimensions. Upper Ball (UB), Lower Ball (LB).
3.4.6.2 Rail assembly B
Rail assembly B is an asymmetric design incorporating spherical and cylindrical rolling
elements (Figure 3.10 (iv)). As the rail assembly has no axis of symmetry, the entire rail
profile was considered in the contact force and rolling element clearance analyses. Contact
force was calculated for the scenarios of Table 3.5 and summarized in Section 3.4.7.
Distributions of rail clearances are shown in Figure 3.17 and the identified specification
limits in Figure 3.18. The results are summarized in Section 3.4.7.
Table 3.5 ‐ Ball dimensions used for contact force simulation in rail assembly B
Scenario | Ball interference (mm) | Upper ball diameter (mm)
Simulation 1 | 0.1 | 7.54
Simulation 2 | 0.2 | 7.64
Simulation 3 | 0.3 | 7.74
Figure 3.17 ‐ Rail assembly B rolling element clearance distributions.
(i) Left roller (ii) Bottom roller (iii) Right ball.
Specification limits: Left Roller (LR) 2.534–3.436 mm; Bottom Roller (BR) 4.820–5.140 mm; Right Ball (RB) 7.025–7.855 mm.
Figure 3.18 ‐ Rail assembly B rolling element specification limits. Shaded profile corresponds to nominal rail
dimensions. Left Roller (LR), Bottom Roller (BR), Right Ball (RB).
3.4.6.3 Rail assemblies C, D and E
Contact force and rolling element clearance analyses for rail assemblies C, D and E were
carried out as per the procedure demonstrated in Sections 3.4.6.1 and 3.4.6.2. The results are
summarized in Section 3.4.7.
3.4.7 Benchmarking of designs
The results of the contact analysis conducted for the analysed rails are shown in Figure 3.19.
Variation in rolling element contact force is quantified by the stiffness of the rail assembly
(Section 3.4.3.1). The stiffness of the rail sections is an indicator of the performance
robustness of the nominal rail design. A rail profile showing low variation in contact force
with displacement (small gradient) is desirable as it accommodates part‐to‐part variation
with a small change in rolling effort.
Although some rail assemblies show an undesirably high stiffness (such as rail D), they offer
other performance advantages warranting their inclusion within the concept design set, for
instance:
material use efficiency
ease of pressing
ease of metrological assessment facilitating simple process control measures
Although such designs show poor performance robustness in rolling effort (the focus of this
benchmarking study), their use may be appropriate for low‐cost applications where
consumer quality expectations are lower.
Figure 3.19 ‐ Rolling element contact force versus local rail displacement. Gradient indicates stiffness.
A summary of analysis results for variation in rolling element clearance is shown in Figure
3.20 for the analysed rail assemblies. A rail assembly with minimum variation between the
upper and lower specification limits is preferable as this reduces variation in rolling effort.
Figure 3.20 ‐ Magnitude of variation in nominal rolling element clearance versus rail assembly design.
The analysed rail assemblies can be benchmarked according to robustness of contact force
and rolling element clearance. To provide a global performance ranking it is necessary to
assign a weighting to both benchmarking criteria. The appropriate weighting is dependent
on the designer’s preference to either emphasise individual criteria or pursue a balance.
Due to the linear‐elastic characteristic of the rail assembly, variation in rail stiffness and
rolling element clearance each influence the rail contact force and subsequently the rolling
effort. However, during assembly it is possible to mitigate the effect of clearance variation
by allowing a number of ball sizes to be available to assemble with rails as required.
Consequently it may be appropriate to assign a greater importance to rail stiffness than to
clearance variation.
Table 3.6 shows a ranking of robustness to contact force and rolling element clearance for
each of the assessed rail assemblies. These rail assemblies are then ranked globally using a
square weighted sum of performance in both analysed categories (see the sketch after
Table 3.6). The square weighted ranking gives each metric equal importance, but
discourages large differences between the two performance criteria.
Table 3.6 ‐ Performance ranking of conceptual rail assembly designs.
Rail assembly | A | B | C | D | E
Robustness of contact force | 4 | 1 | 2 | 5 | 3
Robustness of rolling element clearance | 5 | 4 | 2 | 3 | 1
Overall performance rank | 5 | 3 | 1 | 4 | 2
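
The following sketch reproduces the overall ranking of Table 3.6, assuming the square weighted sum is computed as the sum of the squared category ranks (an assumption consistent with the table values):

```python
# Category ranks from Table 3.6 (1 = most robust).
contact_force_rank = {"A": 4, "B": 1, "C": 2, "D": 5, "E": 3}
clearance_rank = {"A": 5, "B": 4, "C": 2, "D": 3, "E": 1}

# Square weighted sum: equal importance per metric, but squaring
# penalises designs that do well on one criterion and poorly on the other.
score = {d: contact_force_rank[d] ** 2 + clearance_rank[d] ** 2
         for d in contact_force_rank}

overall = sorted(score, key=score.get)
print(overall)  # ['C', 'E', 'B', 'D', 'A'] — matches Table 3.6
```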
Observing the results of the ranking it is possible to note the following characteristics:
Rail section designs with reduced spacing between upper and lower rolling elements
show higher stiffness (for example A and D). This is due to the closer proximity of
opposing contact forces on the upper rail resulting in a shorter effective lever arm
associated with rail deflection.
Rail sections with a higher number of folds leading to the rolling element location show
greater variation in rolling element clearance (for example A and D). This is due to the
accumulation of variation during each fold.
Rail sections with a large distance between a fold and a rolling element show high
variation in rolling element clearance (for example the right hand side of lower rail B).
This high variation is due to the lever arm amplifying variation associated with the fold
angle.
3.4.8 Discussion of results
Estimating the sensitivity to manufacturing variation from the early conceptual and
embodiment design stages can aid in the management of quality when design flexibility is
high. To accurately estimate the effects of variation on the functionality of a design,
statistical tolerance analysis based on a computational assembly tolerance model is
required. However, the computational cost of statistical tolerance analysis can be
prohibitively high, especially when many iterations of FE simulations are necessary to model
physical effects such as compliance. Additionally, the ability to carry out tolerance analysis
in such scenarios may be limited by the available tools and expertise in formulating the
model and interpreting results.
This section provides an efficient method of analysing the effects of manufacturing variation
in linear‐compliant assemblies under loading. This design analysis and refinement method
significantly reduces computational cost by utilising:
linear‐compliant assembly stiffness measures, and
PIDO tool statistical tolerance analysis based on the reuse of CAD models created in the
conceptual and design embodiment stage.
This increase in computational efficiency allows an estimate of sensitivity to manufacturing
variation to be made earlier in the design process with reduced effort.
This approach is validated in a benchmarking study of alternative automotive seat rail
assembly concept embodiments to quantify their sensitivity to manufacturing variation. The
benchmarking study identified significant differences in sensitivity to manufacturing
variation between alternative designs. This outcome assists automotive manufacturers to
increase design knowledge early in the design process and proceed into the detail design
stage with higher certainty of performance and with low additional analysis expense.
The method applied here can be generalised and applied to assess the sensitivity to
manufacturing variation in other linear‐compliant assemblies whose functionality is
dependent on applied loads. Due to high efficiency, application of the method at the
conceptual and design embodiment stages is particularly useful for increasing knowledge
early in the design process where analysis time budgets for individual concepts can be
limited.
Chapter 5 of this dissertation considers the effect of variation on automotive seat rail
assemblies further by conducting a tolerance synthesis with FE model simulations to identify
optimal production tolerances.
3.5 Refinement of concept design embodiments through PIDO based DOE analysis
and optimization
The conceptual design and embodiment stages focus on the search for concept variants
which offer feasible solutions to the design problem under consideration (Section 3.2). As
these stages are early in the project timeline where design knowledge is limited, they can be
associated with a vast design space in which feasible regions and optimum performance can
be difficult to identify (Askin et al. 1988; Thompson et al. 1999; Krishnan et al. 2001;
Simpson et al. 2008; Tomiyama et al. 2009).
The vast design space may be studied with design and refinement techniques such as design
of experiments (DOE) analysis and optimization methods. DOE analysis offers a method to
explore the design space by systematically evaluating design parameter combinations in the
associated parameter space. Optimization allows for an automated exploration of the
parameter space to identify regions of local and global optimum performance, in the
presence of complex constraints and competing design objectives.
Design analysis and refinement at the concept stage using DOE analysis and optimization
methods provides insight into the design problem while the associated flexibility remains
relatively high (Figure 3.2). To minimize overall design time, design analysis and refinement
at the conceptual stage with DOE and optimization methods requires rapid implementation
and low analysis cost. Implementation is dependent on the integration of DOE and
optimization methods with disparate CAE modelling tools which may be required as part of
conceptual and embodiment design (such as CAD packages or FE simulation codes). Efficient
integration of disparate CAE models for DOE analysis and optimization can be facilitated
with the emerging capabilities of PIDO tools (Section 2.8).
This section presents an approach for investigating the design space in the conceptual and
embodiment design stages with DOE analysis and optimization methods. The conceptual
design of automotive seat kinematics is presented as an illustrative case study.
The capabilities of PIDO tools are utilised to allow CAE tool integration, and efficient reuse
of models created in the conceptual and embodiment design stages, to rapidly identify
feasible and optimal regions in the design space.
3.5.1 Automotive seat kinematics
The automotive seat structure is required to accommodate the anthropometric variation of
users while meeting minimum safety standards under low and high speed crash scenarios.
Seating structures are subject to a stringent set of constraints and objectives, including:
allowable envelope of motion, structural integrity, modularity and cost. Within these
objectives and constraints, practical embodiments of automotive seating structures have
converged to the common solution of a planar kinematic chain. This kinematic chain may
exist in several variants, each with their own unique performance attributes (Leary et al.
2010). Of the observed kinematic chains, the typical implementation is a four‐bar linkage³:
“the simplest possible pin‐jointed mechanism for single degree of freedom controlled
motion” (Norton 2003).
The behaviour of the four‐bar linkage is well understood based on the Grashof condition
(Figure 3.21) (Barker 1985), which classifies kinematic behaviour based on the lengths of the
shortest link, s, the longest link, l, and the other links, p and q. Depending on which of these
links is shortest (Table 3.7), the grounded links will behave as a crank (rotary motion) or a
rocker (reciprocating motion); a classification sketch follows Figure 3.21. Of these
permutations, the parallelogram and crank‐rocker are often observed in automotive seating
structures. The latter is typical as it results in induced tilting as the seat lifts – desirable to
accommodate different user body dimensions. For example, shorter users typically require
a higher seating position and greater seat tilt and vice versa (Leary et al. 2010).
Although four‐bar linkage kinematics are well understood, a large design space, combined
with multiple constraints and objectives, impose significant challenges to their design and
optimization in automotive seat structures.
³ Other embodiments include five‐bar linkages and independent rocker actuation. The four‐bar linkage may
include an independent four‐bar linkage or crank‐slider to enable seat tilting [1].
Table 3.7 ‐ Classification of four‐bar mechanisms.
Type | Grashof condition | Shortest link | Behaviour
1 | s + l < p + q | frame | double‐crank
2 | s + l < p + q | side | crank‐rocker
3 | s + l < p + q | coupler | double‐rocker
4 | s + l > p + q | any | double‐rocker
5 | s + l = p + q | any | double‐crank (parallelogram)
Figure 3.21 ‐ Four‐bar linkage and associated nomenclature (Leary et al. 2011).
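
A minimal sketch of the Table 3.7 classification, with the Grashof inequalities as reconstructed above:

```python
def classify_fourbar(s, l, p, q, shortest="frame"):
    """Classify a four-bar linkage per the Grashof condition (Table 3.7).
    s: shortest link, l: longest link, p and q: remaining links.
    `shortest` names the shortest link: 'frame', 'side' or 'coupler'."""
    if s + l < p + q:  # Grashof linkage (Types 1-3)
        return {"frame": "double-crank",
                "side": "crank-rocker",
                "coupler": "double-rocker"}[shortest]
    if s + l > p + q:  # non-Grashof (Type 4)
        return "double-rocker"
    return "double-crank (parallelogram)"  # change point (Type 5)

# A crank-rocker, typical of automotive seat lift mechanisms:
print(classify_fourbar(s=40.0, l=120.0, p=100.0, q=110.0, shortest="side"))
```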
3.5.2 PIDO based DOE analysis and optimization of the conceptual design of
automotive seat kinematics
Despite the apparent simplicity of a four‐bar linkage system (Barker 1985) applied in
automotive seating structures, the number of input parameters (link lengths and frame
position) and associated permutations results in a large possible design space. Furthermore,
multiple constraints and objectives hinder systematic optimization efforts. Historically, such
problems are solved by inspection, or with graphical (Bastow 1976), or numeric (Norton
2003) aids. This work illustrates a method for resolving these conflicting design
requirements at the conceptual design stage by mapping the feasible design space and
identifying regions of high performance.
An algebraic model of four‐bar linkage kinematics was interfaced with a Process Integration
and Design Optimization (PIDO) tool (Figure 3.22). PIDO software has the capability to
facilitate automated analysis by the integration of stand‐alone CAE tools to enable
automated parametric analysis, DOE, multi‐objective optimization, and statistical evaluation
of system models (Sections 3.2.1 and 2.8). The design refinement and analysis approach
consisted of two phases: initial analysis and design refinement.
An initial analysis was developed to explore the design space using a DOE approach. Due to
the analytical nature of the kinematic model, the computational cost of each evaluation was
sufficiently low to allow a high resolution experiment, providing the designer with a rapid
overview of the feasible regions of the design space, and aiding in the identification of
regions of high performance. DOE was carried out using full factorial sampling.
The second phase of design refinement and analysis utilized multi‐objective optimization to
search the identified feasible design space for optimum designs. A Genetic Algorithm (GA)
was used to search the design space. Stochastic optimization
algorithms such as GA are suited to a large search space due to inherent robustness against
objective function discontinuity and local optima stagnation. In contrast, deterministic
algorithms are more efficient at converging towards an optimum, but are typically only
robust within localised regions of the search space and can stagnate at local optima (Section
2.6.2) (Deb et al. 2002). For models which exhibit a high computation cost, it may be
beneficial to refine the optimization phase with deterministic algorithms prior to stochastic
optimization. Due to the low computational cost of the kinematic model used in this work,
the direct application of a stochastic GA yielded acceptable results within a reasonable
computation time.
Figure 3.22 ‐ PIDO workflow associated with the second phase of design refinement and optimization of
automotive seat kinematic concept designs.
The initial high resolution DOE analysis parameter space was populated using Full Factorial
sampling of the model input parameters with an experiment size of 12,888 model
evaluations. The resultant design space was used to identify regions of infeasible
performance. The subsequent multi‐objective GA optimization analysis was initialised with
the identified feasible design space. The optimization problem consisted of four competing
objectives, and additional constraints. The GA was limited to 10 generations, resulting in a
simulation size of 122,880 model evaluations. The total solution time was 3.5 days on a
single‐core 3 GHz CPU.
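
The two‐phase procedure can be sketched as follows; the kinematic model, parameter grids and feasibility test are hypothetical placeholders, and a real implementation would drive the algebraic model through the PIDO tool and apply a multi‐objective GA such as NSGA‐II:

```python
import itertools
import random

def kinematic_model(input_link, follower_link):
    """Placeholder for the algebraic four-bar kinematic model: returns
    (objectives, feasible). The real model evaluates travel, lift effort
    and actuation count from the link lengths and inclinations."""
    vertical = abs(input_link - 60)
    horizontal = abs(follower_link - 100)
    return (vertical, horizontal), vertical < 30

# Phase 1: full factorial DOE over a grid of each input parameter.
levels = {"input_link": range(30, 121, 10),
          "follower_link": range(30, 121, 10)}
feasible = []
for combo in itertools.product(*levels.values()):
    design = dict(zip(levels, combo))
    objectives, ok = kinematic_model(**design)
    if ok:
        feasible.append((design, objectives))

# Phase 2: the feasible set seeds the initial GA population, which then
# searches for Pareto-optimal designs over the competing objectives.
population = random.sample(feasible, k=min(50, len(feasible)))
print(f"{len(feasible)} feasible DOE points; GA seeded with {len(population)}")
```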
Table 3.8 ‐ Model input parameters.
Input parameter | Dimension
Input link length | mm
Follower link length | mm
Height of ground link and frame node | mm
Input link inclination | degrees
Follower link inclination | degrees
Included angle of follower rotation | degrees
Table 3.9 ‐ Model output parameters, dimension and objective.
Output parameter | Dimension | Objective
Vertical travel (at reference point) | mm | Minimise
Horizontal travel (at reference point) | mm | Minimise
Lift effort | N | Minimise
Number of manual actuations to achieve full lift | ‐ | Minimise
3.5.2.1 Results
The simulation identified 20,204 feasible designs with 304 designs being Pareto‐optimal
(Figure 3.23). Without relative importance weighting of the competing design objectives
there is no clearly optimal solution as the set of Pareto‐optimal designs offer equal
performance in satisfying the design objectives; an improvement in one objective leads to a
compromise in others. Designs of interest are identified with associated identification
numbers in Figure 3.23:
Minimum peak force: #105908
Minimum number of manual actuations to lift seat: #53684, #122063 and #105908
Minimum horizontal travel: #50560
The other identified designs of Figure 3.23 represent balanced compromises between competing
objectives.
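
Pareto‐optimality of a design set can be checked with a simple non‐domination filter, as sketched below; the objective values are hypothetical and all objectives are minimised:

```python
def pareto_front(designs):
    """Return the IDs of non-dominated designs. Each design is
    (id, objectives); a design is dominated if another design is no
    worse in every objective and strictly better in at least one."""
    front = []
    for i, (id_i, obj_i) in enumerate(designs):
        dominated = any(
            all(a <= b for a, b in zip(obj_j, obj_i)) and
            any(a < b for a, b in zip(obj_j, obj_i))
            for j, (_, obj_j) in enumerate(designs) if j != i)
        if not dominated:
            front.append(id_i)
    return front

# Hypothetical (vertical travel, horizontal travel, effort, actuations):
designs = [(105908, (2.0, 5.0, 90.0, 12)),
           (53684, (2.5, 6.0, 120.0, 10)),
           (50560, (3.0, 1.0, 150.0, 18)),
           (99999, (3.5, 6.5, 160.0, 20))]
print(pareto_front(designs))  # [105908, 53684, 50560]
```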
Figure 3.23 ‐ Four‐dimensional chart indicating performance of Pareto‐optimal solutions in the conceptual
design of automotive seat kinematics. Designs referred to in the discussion have been labelled with associated
identification numbers; the selected design is highlighted.
The industry partner selected design #53684 as offering the most desirable balance of
design objectives due to large performance benefits associated with manual actuation
efforts. The selected concept design was progressed to the detail design stage and
subsequently manufactured as part of a commercial seat assembly. The resultant
commercial product was benchmarked against other competing products (Table 3.10).
Table 3.10 ‐ Benchmarking results against other competing products
Competing seat assembly and supplier | Vehicle use | Vertical travel (target 60 mm) [mm] | Number of actuations to achieve full lift (lower is better)
Competitor A | Mazda (Model 2, Model 3) | 60 | 33
Competitor B | Toyota (Yaris, Prius) | 60 | 27
Brose | Volvo C30 | 60 | 21
Tachi‐S | Honda (Accord, Civic) | 60 | 21
JCI | Ford (Focus, Fiesta) | 60 | 20
Sitech | Volkswagen (Golf, Jetta, Beetle) | 60 | 19
Design #53684 | Tesla Model S | 60 | 16
The selected design was found to offer the best performance in achieving the vertical seat
travel objective with the least number of manual actuations (these are actuations required
to lift the seat for a given fixed lift effort). This superior performance against competitors in
seat actuation demands was a determining factor for the selection of the design in the seat
assembly of the Tesla Motors Model S full‐sized electric sedan currently on sale in the
United States (Elders 2011).
3.5.3 Other applications
PIDO based DOE analysis and optimization is a broadly applicable approach for exploring the
design space in the conceptual and embodiment design stages. For instance, PIDO based
DOE analysis has been used by the author for preliminary investigation of the design space
associated with concept wheelchair design (Burton et al. 2010; Leary et al. 2012).
Figure 3.24 ‐ (i) Quasi‐static model of user's arm and wheelchair wheel interaction (Leary et al. 2012);
(ii) shoulder and (iii) elbow torques corresponding to the range of feasible seat height and fore‐aft positions.
Push progress indicates rotation of the wheel from hand contact until release.
3.5.4 Discussion of results
The conceptual and embodiment stages of the design process can be associated with a vast
design space in which regions of desirable performance are difficult to identify. Design
analysis and refinement techniques, such as DOE analysis and optimization, allow
exploration of the design space and identification of optimum regions in the presence of
complex constraints and competing objectives. The resulting increase in understanding of
the design space early in the design process allows design improvements to be made
when overall project cost commitments are low and design flexibility is high.
This section highlights the benefits of searching the conceptual and embodiment design
space through DOE analysis and optimization, with a practical case study. The case study
addresses the conceptual design of automotive seat kinematics consisting of a four‐bar
linkage system. Despite the apparent simplicity of the planar four‐bar linkage, the large
number of input parameters and possible permutations results in a large design space. The
capabilities of PIDO tools were utilised to allow CAE tool integration, and efficient reuse of
models created in the conceptual and embodiment design stages, to rapidly identify feasible
and optimal regions in the design space. One of the identified Pareto‐optimal concepts was
selected for detail design and manufacture. Benchmarking has shown the selected design to
offer superior performance against commercial competitors in achieving vertical seat travel
objectives with the least number of manual seat actuations. The design was subsequently
commercialised in the seat assembly of the Tesla Motors Model S full‐sized electric sedan.
The PIDO based DOE analysis and optimization approach to exploring the design space in
the conceptual and embodiment design stages can be broadly applied. An example is
provided focusing on the conceptual design of wheelchairs.
3.6 Summary of research outcomes
This chapter identified opportunities to enhance the conceptual and embodiment stages of
design by increasing design knowledge with design analysis and refinement techniques.
Novel methods for the use of PIDO tools were developed for the analysis and refinement of
concept design embodiments with sensitivity analysis, tolerance analysis, DOE methods and
optimization. These methods include:
1. A PIDO tool based visualization method to aid designers in identifying assembly KPCs at
the concept embodiment design stage.
The method integrates the functionality of commercial CAD software with the process
integration, UQ, data logging and statistical analysis capabilities of PIDO tools, to simulate
manufacturing variation effects on the part parameters of an assembly and visualise
assembly clearances, contacts or interferences.
Visualizing variation within the assembly may aid the designer to specify critical
assembly dimensions as KPCs for monitoring. The nominal dimensions of part and
assembly features may then be adjusted to provide clearance for expected variation to
maintain correct functionality.
Visualization is carried out using native CAD models, which are often available at the
concept embodiment design stage, requiring low additional modelling effort.
Utilization of embedded measurement and interference analysis capabilities in CAD
assembly environments offers rapid implementation.
The benefit of the proposed method has been validated in an industrial case study by
enabling the automated identification of unintended component interactions, in the
concept design embodiment of an automotive actuator assembly. These interactions,
which had not been anticipated by the industry partner despite their experience with
designs of this type, resulted in the specification of assembly KPCs that would otherwise
have been overlooked.
2. An efficient method of analysing the effects of manufacturing variation in linear‐compliant
assemblies under loading is presented. The method significantly reduces computational
costs by utilising linear‐compliant assembly stiffness measures, reuse of CAD models created
in the conceptual and design embodiment stage, and PIDO tool based statistical tolerance
analysis. This approach is developed as part of a benchmarking study of alternative
automotive seat rail assembly concept embodiments to quantify their sensitivity to
manufacturing variation.
Estimating functionality of the rail assembly requires a FE simulation of the contact force
between rail sections and rolling elements. Estimating the variation in functionality with
FE models and traditional statistical tolerance analysis imposes significant computational
costs, as a large number of FE model evaluations are required to provide sufficient
accuracy.
In this section an alternative approach is developed which increases computational
efficiency by taking advantage of the linear‐elastic behaviour of the folded sheet metal
seat rail assembly, and the relative incompressibility of rolling elements.
Due to the linear‐elastic condition, a measure of assembly stiffness can be used to
estimate sensitivity to manufacturing variation. Estimating the stiffness requires only 3
evaluations of the FE model, significantly reducing overall computational expense.
Due to the high associated efficiency, the method may be applied at the conceptual and
design embodiment stages; thereby increasing knowledge early in the design process
where analysis time budgets for individual concepts are limited.
The benchmarking study identified significant differences in sensitivity to manufacturing
variation between alternative designs. This outcome allowed the industry partner to
proceed into the detail design stage with higher certainty of performance and with low
additional analysis expense.
The method applied here can be generalised and applied to assess the sensitivity to
manufacturing variation in other linear‐compliant assemblies whose functionality is
dependent on applied loads.
Chapter 5 of this dissertation further considers the effect of variation on automotive
seat rail assemblies by conducting a tolerance synthesis with FE model simulations to
identify optimal production tolerances.
3. Refinement of concept design embodiments through PIDO based DOE analysis and
optimization
This section highlights the benefits of exploring the conceptual and embodiment design
space through DOE analysis and optimization, with a practical case study.
Design analysis and refinement techniques, such as DOE analysis and optimization, allow
exploration of the design space to identify optimum regions in the presence of complex
constraints and competing objectives.
The resultant increase of design space knowledge early in the design process, allows for
design improvements to be made when overall project cost commitments are low and
design flexibility is high.
The case study addressed the conceptual design of automotive seat kinematics
consisting of a four‐bar linkage system, which, despite its apparent simplicity, is
associated with a large design space.
The capabilities of PIDO tools were utilised to allow CAE tool integration, and efficient
reuse of models created in the conceptual and embodiment design stages, to rapidly
identify optimal regions in the design space.
An identified Pareto‐optimal concept was selected for detail design and manufacture.
The design was subsequently commercialised in the seat assembly of the Tesla Motors
Model S full‐sized electric sedan. Benchmarking has shown the selected design to offer
the best performance among commercial competitors in achieving the vertical seat
travel objective with the least number of manual actuations (these are actuations
required to lift the seat for a given fixed lift effort).
The PIDO based DOE analysis and optimization approach to exploring the design space in
the conceptual and embodiment design stages can be broadly applied. An example is
provided focusing on the conceptual design of wheelchairs.
The methods presented in this chapter have been demonstrated to enhance the design
process by offering rapid implementation, low analysis cost, and accurate and reliable
outcomes. Practical conceptual and embodiment design problems are considered, and
effective solutions developed for a number of industry focused scenarios. The outcomes
allow designers to make informed decisions which positively influence the design early in
the design process while cost commitments are low.
The statistical tolerance analysis conducted in this chapter (Section 3.4.4) was achieved
according to a PIDO tool based tolerance analysis platform. This platform is developed in
Chapter 4 of this dissertation.
The method developed in this chapter for analysing the sensitivity to manufacturing
variation in linear‐compliant assemblies (Section 3.4.3.1) is not applicable to a more general
class of problems. In response to this limitation, a novel method for tolerance analysis of
assemblies whose functionality is dependent on applied loads is developed in Chapter 4.
Limitations identified in this chapter associated with the high computational cost of
uncertainty quantification in statistical tolerance analysis (Section 3.4.8) are addressed
further in Chapter 5 of this dissertation.
4 NOVEL APPROACH FOR PIDO BASED
TOLERANCE ANALYSIS OF ASSEMBLIES
SUBJECT TO LOADING
4.1 Chapter summary
Due to the stochastic nature of manufacturing processes, mechanical assemblies are subject
to variation. The influence of variation on assembly functionality can be estimated with
tolerance analysis. Numerous Computer Aided Tolerancing (CAT) tools have been proposed
that address tolerance analysis problems in complex mechanical assemblies; however in
Chapter 2 it was identified that current tools do not accommodate a general class of
problem where the functionality of a design is fundamentally dependent on the effects of
loads such as external or internal forces (Section 2.10). Such loads influence assembly
functionality through effects such as compliance, dynamics and mechanical wear and are
particularly relevant in, for example: mechanical actuators, automotive seat positioning
mechanisms, and sheet metal assemblies (such as automotive or aerospace body panels).
This chapter addresses the limitation of CAT tools to accommodate assemblies under
loading by developing a tolerance analysis platform which integrates CAD, CAE and
statistical analysis tools using Process Integration and Design Optimization (PIDO) software
capabilities. The platform extends the capabilities of traditional CAT tools by enabling
tolerance analysis of assemblies in which assembly characteristics are dependent on loads
such as external and internal forces. To demonstrate the capabilities of the developed
platform, examples of tolerance analysis problems involving compliance and multi‐body
dynamics are presented.
4.2 Introduction
The stochastic nature of manufacturing processes results in variation which directly affects
the functionality and cost of manufactured products. Accommodating the effects of
manufacturing variation early in product design is paramount to achieving competitive
quality, cost and development time targets. Manufacturing variation is quantified by
tolerances and manufacturing process characteristics. The influence of variation on
assembly functionality can be estimated with tolerance analysis. A general tolerance
analysis procedure is depicted in Figure 4.1. Tolerances specify the allowable variation
around a nominal parameter value (Section 2.3.2). The geometry, size, position and
orientation of toleranced features are described according to Geometric Dimensioning and
Tolerancing (GD&T) standards (e.g. ISO 1101 (ISO 2005) and ASME Y14.5M (ASME 2009)).
Assembly parameters which are of particular relevance to functionality are referred to as
Key Product Characteristics (KPCs) (Section 3.2.4). KPCs are traditionally geometric, such as
clearances or nominal dimensions. This work extends the definition of KPCs to any
parameter of relevance to assembly functionality (such as a force, pressure, stiffness,
coefficient of friction, response time etc.). This broader definition allows this work to be
applied to accommodate novel tolerance analysis problems in which assembly functionality
is subject to loads. Estimating assembly functionality requires the definition of an assembly
response function which defines KPCs in terms of the assembly parameters. The assembly
response function may be explicitly defined by an algebraic expression, or may be implicitly
defined within a numeric model (e.g. a CAD assembly or CAE model, for example stages 2 and
3 of Figure 4.1). Upper and lower specification limits (USL and LSL, respectively) applied to
KPCs define targets beyond which product functionality is compromised (e.g. Figure 4.1,
stage 4). The manufacturing yield is defined as the percentage of assemblies which conform
to the specification limits of all KPCs (e.g. Figure 4.1, stage 5). The yield requirement can be
set according to worst‐case or statistical tolerancing principles. Statistical tolerancing allows
for component tolerances to be relaxed to enable reduction in manufacturing costs (Section
2.3.1).
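
The difference between the two principles can be illustrated with a simple linear stack; a minimal sketch, assuming five components with independent, centred tolerances, where the statistical stack uses the root‐sum‐square (RSS) rule:

```python
import math

# Component tolerances (±, mm) in a hypothetical linear stack of five parts.
tolerances = [0.05, 0.05, 0.08, 0.10, 0.12]

worst_case = sum(tolerances)                      # every part at its limit
rss = math.sqrt(sum(t ** 2 for t in tolerances))  # statistical (RSS) stack

print(f"worst-case stack: ±{worst_case:.3f} mm")  # ±0.400 mm
print(f"RSS stack:        ±{rss:.3f} mm")         # ±0.189 mm
```

Because the statistical stack is much tighter than the worst‐case sum for the same component tolerances, the component tolerances can be relaxed while still meeting the same assembly limit.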
Figure 4.1 ‐ General tolerance analysis of a mechanical assembly. Stages are identified as per Section 4.2.
Identifying the effects of part variation on the functionality of an assembly (tolerance
analysis), and allocating acceptable part tolerances (tolerance synthesis) are challenging
problems involving the competing objectives of achieving acceptable manufacturing cost
and desired product quality, as well as the functional constraints imposed by the product
design requirements (Hong et al. 2002; Mazur et al. 2010). The designer must analyse the
influence of individual part tolerances on the functionality of the product assembly in order
to determine the expected number of assemblies that conform to functional requirements.
4.3 Effects of loads in tolerance analysis
Assembly functionality can often be sufficiently defined in terms of minimum or maximum
clearances. However, assembly functionality may depend not only on dimensional
characteristics but also on how the assembly behaves in response to some applied action
such as a force, temperature change, or electromagnetic interaction. These actions are
generally referred to as loads. Common loads in mechanical assemblies are internally or
externally applied forces. External forces are independent of part mass and are applied to
the part boundary; for example friction and contact forces. Internal forces occur due to
inertial effects, and are applied through the centre of mass; for example gravitational forces
and dynamic effects. Internal and external forces act to influence KPCs such as assembly
dimensions dependent on part compliance (Section 4.5) or assembly functions dependent
on friction and dynamic effects (Section 4.6). The ability to accommodate internal and
external forces in tolerance analysis allows for an increased level of capability in estimating
the effects of variation on functionality. Examples where assembly functionality is
dependent on external and internal forces include: controlled deformation, precise fit
requirements, wear of interfacing components, dynamic effects and fluid interactions.
A number of analytical and numerical methods have been proposed for addressing
tolerance analysis and synthesis problems in complex product assemblies, in particular for
assemblies subject to loads (Sections 2.3.2 and 2.9). However, a review of the existing
tolerance analysis methods which aim to accommodate assembly loads (Merkley 1998;
Bihlmaier 1999; Hu et al. 2001; Shiu et al. 2003; Camelio et al. 2004; Imani et al. 2009; Pierre
et al. 2009; Franciosa et al. 2011) has identified a number of limitations, including (Section
2.9):
Ability to accommodate only single, specific applications (such as sheet metal
compliance or welding‐distortion);
Reliance on specific, custom simulation codes for tolerance modelling with limited
implementation in practical and accessible software tools;
Need for additional expertise in formulating specific assembly tolerance models and
interpreting results.
Additionally, a number of commercial Computer Aided Tolerancing (CAT) tools have been
developed that offer practical tolerance analysis and synthesis capabilities either within
independent software packages, or more commonly through integration with commercial
CAD systems (Section 2.7). However, current commercial CAT tools generally lack the ability
to accommodate tolerance analysis of assemblies whose functionality is dependent on
loading. Although the effects of compliance have been addressed by some available CAT
tools (particularly within sheet metal assemblies as relevant to automotive or aerospace
applications) there remains a lack of ability to accommodate a general class of problem
involving assembly loading (Section 2.9).
This chapter presents a novel tolerance analysis platform which integrates the capabilities of
CAD, CAE and statistical analysis tools using Process Integration and Design Optimization
(PIDO) software. The platform extends the capabilities of traditional CAT tools by enabling
tolerance analysis of assemblies which are subject to loads. To demonstrate the capabilities
of the developed approach, case study tolerance analysis problems are presented involving
compliance and multi‐body dynamics. These include:
1. An automotive actuator assembly consisting of a rigid spigot and compliant spring
undergoing compression due to external loading. Functional characteristics require that
clearance is maintained between the spigot wall and the spring at all times, while
minimising overall packaging space.
2. An automotive rotary switch in which a resistive actuation torque is provided by a spring
loaded radial detent acting on the perimeter of the switch body. Functional
characteristics require that the resistive switch actuation torque be within an
ergonomically desirable range.
4.4 PIDO based tolerance analysis platform
To aid in the solution of design problems which involve multidisciplinary engineering
disciplines, a number of Process Integration and Design Optimization (PIDO) tools have been
developed (Section 2.8). PIDO tools act as software frameworks for facilitating the
integration of the standalone capabilities of diverse, discipline specific CAD and CAE analysis
tools. The integration enables PIDO tools to also facilitate automated parametric analysis,
Design of Experiments (DOE) studies, statistical analysis and multi‐objective optimization, in
an interdisciplinary setting (Sobieszczanski‐Sobieski et al. 1997; Hiriyannaiah et al. 2008;
Flager et al. 2009). Interaction between standalone CAD and CAE software is achieved
through commonly embedded scripting capabilities (based on scripting languages such as
JavaScript, Visual Basic, Python or DOS script). For example, the CAD software CATIA
accommodates the ability to execute Visual Basic language scripts. Scripting capabilities of
CAE and CAD tools allow for autonomous:
modification of CAD model parameters;
initialisation of CAE simulations;
recording of the obtained simulation results.
Problem formulation is typically achieved by manually creating a process and logic workflow
that describes the intention of the simulation and integrates with the system model to be
analysed. The workflow establishes links between external CAE models (through scripting
capabilities), the model input and output parameters, and a simulation procedure so that
the model under analysis can be subjected to an automated parametric study. Once the
workflow is established, an automated evaluation of the external model for the specified
input parameters can be carried out without user input. During a simulation, each of the
parameter combinations established as part of the simulation procedure is automatically
applied to the external CAE model and any affected model output parameters of interest
are read and recorded. At the completion of the simulation, statistical tools available in the
PIDO tools can be applied to analyse the recorded results.
The interdisciplinary integration, DOE and statistical data analysis capabilities of PIDO tools
can be utilized to address tolerance analysis problems requiring numerical modelling and
simulation of the effects of loads on an assembly. The following sections present a PIDO tool
based tolerance analysis platform developed for assemblies subject to loads, such as
mechanical actuators and automotive seat positioning mechanisms.
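
A minimal sketch of one pass of such a workflow loop is shown below. All script names, file names and argument conventions are hypothetical; an actual PIDO tool drives the CAD and CAE packages through their embedded scripting interfaces (e.g. Visual Basic in CATIA) rather than batch files.

```python
import csv
import subprocess

# Parameter sets to evaluate (drawn from the variation database).
parameter_sets = [{"bend_radius": 3.0, "bend_angle": 90.0},
                  {"bend_radius": 3.1, "bend_angle": 89.5}]

with open("variation_database.csv", "w", newline="") as log:
    writer = csv.writer(log)
    writer.writerow(["bend_radius", "bend_angle", "contact_force"])
    for params in parameter_sets:
        # 1. Update the parametric CAD model (hypothetical helper script).
        subprocess.run(["update_cad.bat", str(params["bend_radius"]),
                        str(params["bend_angle"])], check=True)
        # 2. Initialise the CAE simulation on the exported geometry.
        subprocess.run(["run_fe_solver.bat", "rail_assembly.inp"], check=True)
        # 3. Read the recorded KPC from the solver output (hypothetical file).
        with open("fe_results.txt") as f:
            contact_force = float(f.read().strip())
        writer.writerow([params["bend_radius"],
                         params["bend_angle"], contact_force])
```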
4.4.1 Platform flowchart
The proposed PIDO tool based tolerance analysis platform for assemblies subject to loads is
presented in Figure 4.2. It can be implemented procedurally with the sub‐elements briefly
defined below (additional detailed discussion is presented in the identified sections):
1. Parametric CAD model (Section 4.4.2)
CAD models for each part of the product assembly are defined, including tolerance types
and datums for features of interest as well as part relationships such as assembly sequence
and mating conditions. Assembly response functions defining dimensional and geometric
KPCs are captured implicitly within the CAD assembly.
2. Physical model simulation (Section 4.4.3)
CAD models are exported to a CAE tool and subjected to a numerical analysis simulating the
effect of loads on assembly functionality. Assembly response functions defining KPCs are
captured implicitly within the CAE model (e.g. Sections 4.5 and 4.6).
3. Uncertainty Quantification (Section 4.4.4)
An estimate of the assembly yield by statistical tolerance analysis requires that the
stochastic variation of part parameters be propagated through the assembly model. Various
uncertainty quantification (UQ) methods are available to estimate the statistical moments
of the associated KPC distributions. UQ method selection should be according to the specific
requirements of the analysis scenario.
4. Variation database (Section 4.4.5)
Distributions of the expected variation in each part parameter are defined in the variation
database. The parameters can be dimensional, geometric (GD&T), material or associated
with loading. Unique sets of parameter values are selected from the variation database to
be used in uncertainty quantifications. Simulations output are recorded within the database.
5. Yield estimation (Section 4.4.6)
Dimensional and geometric KPCs (CAD assembly) and KPCs defined by loading (CAE
simulation) are recorded in the variation database. Statistical moments of the associated
KPC distributions are evaluated using the applied UQ method. Yield estimates are calculated
from the statistical moments of KPCs.
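
As an illustration of the final step, the sketch below estimates yield from the first two statistical moments of a single KPC, assuming the KPC distribution is approximately normal; the moment and specification values are hypothetical:

```python
from scipy.stats import norm

# KPC moments estimated by the applied UQ method (hypothetical values).
mu, sigma = 6.00, 0.15   # mean and standard deviation (mm)
lsl, usl = 5.60, 6.40    # lower and upper specification limits (mm)

# Yield is the probability mass between the specification limits.
yield_estimate = norm.cdf(usl, mu, sigma) - norm.cdf(lsl, mu, sigma)

# A process capability index (Cpk) from the same moments.
cpk = min(usl - mu, mu - lsl) / (3 * sigma)
print(f"yield ≈ {100 * yield_estimate:.2f} %, Cpk = {cpk:.2f}")
```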
[Figure 4.2 flowchart summary: trial tolerances feed an uncertainty quantification loop in which each set of
part parameter values, drawn from a variation database of expected dimensional, geometric, material and
load variation in part parameters, is applied to the parametric CAD model (part geometry, tolerance types
(GD&T), part relationships and assembly sequence, with implicit assembly response functions defining
dimensional and geometric KPCs) and, if the analysis is feasible, to CAE simulations of the assembly and of
individual parts (implicit assembly response functions for KPCs dependent on loads, e.g. compliance, friction,
multi‐body dynamics); infeasible designs are recorded, Key Product Characteristics and simulation outputs
are logged, and once all evaluations are complete the assembly yield and PCIs are estimated and the relative
effects of contributors identified.]
Figure 4.2 ‐ PIDO based tolerance analysis platform
Compared to existing approaches for accommodating loading in tolerance analysis, the
proposed platform has the following unique characteristics:
Tolerance modelling is conducted within existing CAD/E tools using parametric models and
a scripting interface enabled by PIDO tool integration. The need for additional modelling
tools and expertise is subsequently reduced.
The use of standalone CAE modelling tools (for example popular FE modellers such as ANSYS
or ABAQUS) provides sophisticated capabilities for modelling the effect of various loads on
mechanical assemblies.
These characteristics are discussed in detail in the following sections.
4.4.2 Parametric CAD model
CAD software modelling typically involves a history based approach where three‐
dimensional solid geometric models are created from two‐dimensional sketches subject to
three‐dimensional operations such as extrusions, sweeps and lofts. CAD assemblies are
defined by multiple CAD parts whose interaction is constrained to restrict the degrees of
freedom between parts. Changing part geometry requires reverting the model to the
relevant point of change (such as a sketch), updating dimensions and relationships, and re‐
executing subsequent modelling operations in series. If any part geometry is modified, the
associated assembly constraints also need to be re‐evaluated to rebuild the assembly
model. Model dimensions and relationships can be defined parametrically, providing a
means of implementing tolerance analysis by varying individual dimensions, either by worst‐
case or statistical approaches.
Parametric CAD based modelling is however subject to some limitations, such as (Section
2.4.2):
Due to their history based nature and need to serially re‐execute modelling operations
for any parameter changes, parametric CAD models may be computationally expensive
for statistical tolerance analysis requiring a large number of model evaluations.
Limitations in representing intermittent part contact (for example Figure 2.6).
Datum precedence according to GD&T standards can be difficult to accommodate in
certain scenarios (Shah et al. 2007).
To address some of the limitations of parametric CAD based tolerancing, alternative
approaches have been implemented in commercial CAT tools which use independent
tolerance models in addition to CAD model geometry (Section 2.4.3). The general approach
involves importing the CAD model into the CAT system and interactively creating an
abstracted geometry model superimposed on the original CAD data (Chase et al. 1995;
Salomons et al. 1995; Prisco et al. 2002; Chiesi et al. 2003; Shah et al. 2007). The abstracted
model describes the possible part variation, part mating relationships and resultant
assembly response functions without the rebuild penalty associated with CAD history‐based
model construction. However, a number of limitations are also associated with the
abstracted geometry CAT approach (Section 2.4.3):
Additional expertise, tools and time are required to define the abstracted geometry
model and interpret simulation results.
Limited ability to accommodate tolerance problems involving assemblies under loading
and current inability to integrate with external CAE modelling tools.
It is difficult to accommodate all possible variation aspects defined in GD&T standards
due to the point‐based nature of the abstracted geometry systems (Shah et al. 2007).
Comparison of the limitations of parametric CAD based and abstracted geometry tolerance
modelling shows that the parametric CAD approach is more suited to tolerance modelling of
assemblies under loading as the limitations of parametric CAD are not significantly
prohibitive in such an application. For instance, CAD software models may be integrated
with standalone CAE software through commonly embedded scripting capabilities allowing
for modelling of the effects of loading on assembly functionality. The rebuild penalty
associated with parametric CAD models is rendered comparatively less significant, within
the scope of the proposed platform, as the computational cost of the CAE simulation is
typically much greater than the cost of the CAD model update. Additionally, the limitations
in representing intermittent part contact can be managed by utilising clash detection
capabilities typically featured in CAD software that can report the location and magnitude of
any unexpected interference (for example Section 3.3). This capability is utilised in the
proposed platform to determine assembly yield where interference violates functional
requirements.
Consequently, due to the limitations of abstracted geometry systems, parametric CAD
modelling is adopted in this work to represent part feature variation. Example applications
of parametric CAD tolerance modelling are shown in the case studies presented in this
chapter.
4.4.3 Physical model simulation
The effects of loads on mechanical assemblies are typically analysed by CAE simulations, which are not directly compatible with traditional CAT tolerancing tools. To accommodate these effects, a PIDO tool can be utilised to integrate the parametric CAD data with CAE analysis
tools for modelling of loading. This analysis may be performed on the entire assembly (as in
Section 4.6), or on a subset of parts (as in Section 4.5). Computational cost is dependent on
the fundamental nature of the analysis and can range from: inexpensive analysis of simple
assemblies, where the analysis time is comparable to the time associated with the CAD
model update; to high fidelity simulations for which the computational time may be orders
of magnitude higher than the CAD model update time.
4.4.4 Uncertainty quantification strategy
The objective of statistical tolerance analysis is to provide an estimate of the assembly yield
in the presence of manufacturing variation. This yield estimate requires that the statistical
moments of the KPC distributions be known. Uncertainty quantification methods can
estimate KPC distributions by propagating the expected manufacturing variation in part
parameters, through the assembly response function. The CAE methods applied in this
platform are based on a numerically defined implicit model; as such, it is required that the UQ methods be compatible with an implicit response function.
A number of compatible UQ methods have been presented in the literature (Nigam et al.
1995; Lee et al. 2009), including: Full Factorial Numerical Integration, Univariate Dimension
Reduction, Polynomial Chaos Expansion, Monte Carlo simulation and Taguchi method
(Section 2.5). The relative merit of these UQ methods is dependent on the intent and
constraints of the engineering design problem, including: the dimensionality of the problem; complexity of implementation; computational cost and available computational resources; and the required confidence level in the KPC prediction.
The Monte Carlo (MC) simulation is used in this research as it is well understood, robust and
easily implemented within the platform (Section 2.5.1.1). Furthermore, the results of the
MC simulation can provide a performance baseline for assessing the relative merit of
alternate UQ methods which may offer enhanced performance when applied in this
platform. Chapter 5 details the application of alternative UQ methods to tolerance analysis
and synthesis.
Studies suggest that approximately 1000 samples are required to provide sufficient accuracy
in an assembly tolerance analysis problem (Gao et al. 1995). These recommendations were
applied in the case studies presented in this chapter.
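To illustrate the sampling strategy, a minimal Monte Carlo tolerance analysis sketch is given below. The response function, parameter values and distributions are hypothetical placeholders; in the platform itself the response is evaluated through the integrated CAD/CAE workflow rather than an explicit expression.

    import numpy as np

    rng = np.random.default_rng(seed=1)
    N = 1000  # sample size suggested by Gao et al. (1995)

    # Hypothetical part parameters sampled from their expected distributions
    d_spring = rng.normal(19.0, 0.05, N)   # spring mean diameter [mm] (illustrative)
    d_wall = rng.normal(22.2, 0.03, N)     # spigot wall diameter [mm] (illustrative)

    # Placeholder implicit response standing in for the CAD/CAE evaluation
    clearance = (d_wall - d_spring) / 2.0  # KPC: radial clearance [mm]

    mean, sigma = clearance.mean(), clearance.std(ddof=1)
    yield_estimate = np.mean(clearance > 0.0)  # fraction of conforming assemblies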
4.4.5 Variation database
The quality of a manufacturing process can be quantified by how consistently and accurately
it produces the desired process outputs. The variation in each part is quantified by statistical
measurements of the particular manufacturing process output. Tolerances specify the
allowable variation around a nominal parameter value between the lower specification limit
(LSL) and the upper specification limit (USL). The expected distribution of each part
parameter is defined within the variation database. A set of parameter values is selected
from the variation database to be used in the uncertainty propagation simulation (Section
4.4.4).
The ability of a manufacturing process to generate outputs consistently and accurately
within the specification limits can be measured using Process Capability Indices (PCI) such as the Cp, Cpk and Cpm indices (Section 2.3.4). These indices compare the specification limits to the
6σ limits of the manufacturing process distribution, i.e. 99.73% of the predicted population,
where a higher process index indicates a more accurate process. In this work PCIs are
applied to quantify the expected variation distribution for each parameter.
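A minimal sketch of computing these indices from measured process data follows; the formulas are the standard Cp, Cpk and Cpm definitions, and the variable names are illustrative:

    import numpy as np

    def process_capability(samples, lsl, usl, target):
        mu, sigma = np.mean(samples), np.std(samples, ddof=1)
        cp = (usl - lsl) / (6 * sigma)
        cpk = min(usl - mu, mu - lsl) / (3 * sigma)
        # Cpm additionally penalises deviation of the mean from the target
        cpm = (usl - lsl) / (6 * np.sqrt(sigma**2 + (mu - target)**2))
        return cp, cpk, cpm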
4.4.6 Yield estimation
Yield is calculated from the statistical moments of assembly KPCs estimated with UQ
methods. For UQ based on MC simulation, the model is repeatedly evaluated to generate a
histogram of the predicted variation in KPCs. Standard statistical techniques are used to
calculate the associated moments and resultant assembly PCIs. Contributor analysis, based on a Student's t‐test applied to correlation coefficients, is carried out to compute the influence of each individual part parameter on the KPC (Jackson 2011). Correlation coefficients for each parameter are presented hierarchically (for example, Section 4.5.5).
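A sketch of this contributor analysis step is shown below: a correlation coefficient is computed between each sampled input parameter and the KPC, with the t‐test significance reported by scipy's pearsonr. The data arrays are placeholders generated by the MC simulation.

    from scipy import stats

    def contributor_effects(inputs, kpc):
        # inputs: dict mapping parameter name -> array of sampled values
        # kpc: array of corresponding KPC values from the MC simulation
        effects = {}
        for name, values in inputs.items():
            r, p_value = stats.pearsonr(values, kpc)
            effects[name] = (r, p_value)  # signed effect size and significance
        # rank parameters by the magnitude of their correlation with the KPC
        return sorted(effects.items(), key=lambda kv: -abs(kv[1][0]))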
Implementation of the PIDO based tolerance analysis platform involving external and
internal forces is presented in the following section using two representative case studies.
4.5 Case study 4.1 ‐ Assembly design subject to external forces
4.5.1 Problem definition
The case study involves a tolerance analysis problem where product functionality is defined
by compliance of part geometry due to external forces. It consists of a spring and spigot
assembly which is to be used by an industry partner in an automotive actuator mechanism
(Figure 4.3). The spigot is an injection moulded polymer component which is assembled
with a steel helical compression spring. The spring ends are squared and ground. The design
objective is the minimisation of the spigot volume due to strict restrictions imposed on the
actuator packaging space. The product specification has imposed a series of rigorous
constraints:
Functionality of the actuator constrains the force‐extension characteristics of the spring
and consequently its nominal dimensions.
The minimum bore wall thickness is constrained by structural strength requirements.
The accuracy of the manufacturing machinery cannot be increased due to strict
manufacturing cost targets. Nominal dimensions however can be varied.
The manufactured spigot and spring assemblies must exceed a Cpm of 1. This requirement corresponds to an assembly yield of 99.7%.
Figure 4.3 ‐ Spring and spigot assembly.
Functional characteristics of the actuator require that clearance is maintained between the
spigot wall and the spring. As the spring is compressed from its nominal position, both the
inner and outer diameters of the spring increase. Consequently:
Clearance between the inner diameter of the spring and the spigot wall is minimal when
the spring is at free height. The associated assembly response function can be evaluated
without modelling external and internal forces.
Clearance between the spring outer diameter and spigot wall is reduced as the spring
compresses. The associated assembly response function cannot be evaluated in a CAD
environment without modelling the compliance of part geometry due to external forces.
No analytic solution directly applicable to the dilation of a squared and ground
compression spring was found in the literature. This problem is therefore a suitable case
study for the proposed method as numerical models are necessitated. A solution for an
open‐coiled spring with ends fixed against rotation is presented in (Wahl 1963). This
model was used to validate a FE model comparable to the one used in this case study.
The platform proposed in this work is applied to overcome these limitations (Section
4.5.5).
The KPCs for the product assembly in this case study are the minimum clearance between:
the internal diameter of the spring and the spigot wall (IDmeasure)
the outer diameter of the spring and the spigot wall (ODmeasure)
The case study objective is to specify a nominal value for the control variable which minimises packaging space subject to the required yield of 99.7%.
4.5.2 Sources of variation
The product under consideration comprises polymer injection mouldings and coiled wire,
each with unique manufacturing variation characteristics.
4.5.2.1 Variation in injection moulding
Dimensional variation in injection moulded plastics can be attributed to mould
characteristics, resin processing and shrinkage. Mould related variation arises from
deviation in the mould cavity dimensions, deterioration of the mould with service life, and
positional accuracy of movable mould sections. Variation due to the processing of resin can
be attributed to resin and mould temperature, clamping pressure, uniformity of resin
constitution and humidity levels ((DIN) 1982). When plastic resin is injected into a moulding
cavity it is above the resin melting temperature. The resin is then rapidly cooled in the
mould. Once set, the moulding is removed and further cooled to ambient temperature. Due
to phase changes of the resin and differential cooling, the final moulded part dimensions are
subject to variation ((DIN) 1982; Rosato et al. 2000).
4.5.2.2 Variation in spring wire
The performance of coil springs depends on numerous design and manufacturing
parameters. Functional requirements typically dictate the required spring rate or force at a
specified deflection. These attributes are a function of geometric parameters: wire
diameter, mean coil diameter, number of active coils and free length. Spring performance is
also dependent on material properties such as strength, shear modulus and plastic
formability, which can vary significantly between batches (DeFord 2003). Small variation in
the mechanical and surface frictional properties of wire material may significantly affect the
manufacturability of the springs and introduce broad variation in the process outputs
(Wood 2006). Additional dimensional variation may be introduced during manufacturing by
the spring coiling machinery.
The sources of variation present in the manufacture of springs significantly influence their
performance and their dimensional envelope under compression, for example (DeFord
2003). Industry standard tolerances exist to limit the effects of such variation, for example
((ASTM) 2007).
4.5.3 Variation data used in simulation
To estimate expected dimensional variation, the existing moulding process used for the
manufacture of the spigot was analysed and assessed in terms of its performance. The
manufacturer of the spigot assembly specifies dimensional tolerances on injection moulded
components according to specification limits recommended by DIN 16901 ((DIN) 1982). The
capability of the manufacturer’s injection moulding process to generate outputs within the
specified limits was determined by metrological assessment of 1600 production part
samples of similar geometry to the case study spigot. The production parts are
manufactured using multiple moulding cavities, with each having its own variation
characteristics. The resultant parts were analysed for variation within a single cavity and
across multiple cavities. Process capability indices were determined (Table 4.1) and used as
input for the case study analysis. The metrological data was also utilised in Case study 3.1
(Section 3.3.2) due to a common industry partner and manufacturing process.
Table 4.1 ‐ Process capability data of measured component in Case study 4.1.
Parameter Target Achieved
Mean (mm) 16 15.986
σ (mm) ‐ 0.0457
LSL (mm) 15.850 ‐
USL (mm) 16.150 ‐
Cp 1.00 1.10
Cpk 1.00 1.01
Cpm 1.00 1.05
% < LSL 0.15 % 0.15 %
% > USL 0.15 % 0.02 %
% Total out of spec. 0.30 % 0.17 %
The expected spring tolerances were defined by the spring supplier's own tolerance data
(Table 4.2). These specification limits exceed those reported as typical in the literature, e.g.
(Hindhede 1983).
Table 4.2 ‐ Spigot and spring assembly parameters and associated variation.
(Note: Spigot specification limits are specified from DIN 16901, Cpm from measurements.
Spring specification limits and Cpm from supplier’s quality data.)
Component | Parameter | Description | Nominal [mm] | Specification Limits +/‐ [mm] | Min [mm] | Max [mm] | Cpm | σ
4.5.4 Simulation model
The tolerance analysis platform presented here utilises CAD and CAE tools for parametric
modelling and compliance analysis, respectively. A parametric Finite Element (FE) model of
the coil spring was constructed using CATIA CAE software (Figure 4.4). The model consisted
of approximately 42 000 3D parabolic tetrahedral elements with 11 000 corresponding
nodes. Spring loading conditions were simulated by applying rigid restraints to the base of
the spring and a displacement of 8 mm to the top face as per the functional characteristics
of the actuator. Individual simulation computation time was approximately 100 seconds on
a 1.86 GHz CPU.
Figure 4.4 ‐ Spring spigot assembly and FE model of spring.
A PIDO tool (ESTECO modeFRONTIER) was interfaced with CATIA CAD and FE tools (Figure
4.5) according to the proposed platform (Figure 4.2). A variation database was initialised
from the obtained tolerance data (Table 4.2) and subjected to a Monte Carlo simulation
consisting of the following automated stages:
1. CAD models of the spigot and spring were updated.
2. The undeformed spring was assembled with the spigot and a measure of minimum internal diameter clearance was recorded (IDmeasure).
3. A finite element model of the spring was generated and subjected to displacement
conditions.
4. The deformed finite element spring model was assembled with the spigot and a measure of minimum outer diameter clearance was recorded (ODmeasure).
5. Clash analysis was conducted to identify unintended interference.
Figure 4.5 ‐ PIDO tolerance analysis workflow for Case study 4.1.
4.5.5 Simulation results
The MC simulation consisted of 1000 assembly variants. A database was generated from
clearance measures and clash analysis data obtained for each assembly variant. A normal
distribution curve was subsequently fitted to the simulated histograms for both KPCs (Figure
4.6 and Figure 4.7). The theoretical distribution curve predicts the yield of the population of
assemblies from the simulated samples, by comparison with the set specification limits
(defined for 99.7% yield, Section 4.5.1).
Clearance measurements result from the assembly response function that intrinsically
depends on the simulation input parameters. The magnitude of correlation between input
parameters and the assembly response function provides insight into the most pertinent
input parameters (Figure 4.8). This outcome enables quality control to be applied relative to
the importance of the identified contributing factor.
Figure 4.6 ‐ Histogram of clearance measurements for spigot outer diameter (ODmeasure).
(Note: Solid line indicates estimated population distribution based on sample results. The initial analysis
provided a yield of approximately 96.8 % for the spring outside diameter.)
Figure 4.7 ‐ Histogram of clearance measurements for spigot inner diameter (IDmeasure).
(Note: Solid line indicates estimated population distribution based on sample results. The initial analysis
provided a yield of approximately 97.1 % for the spring inside diameter.)
A Student t‐test was conducted to quantify the contribution of part parameter variation to
KPC variation. Figure 4.8 and Figure 4.9 summarise the effects of part parameter variation
on the KPCs (IDmeasure and ODmeasure). The relative effect of each input parameter on the
output is quantified by the effect size. A positive effect size indicates a positive relationship
between the input and output variables; a negative effect size indicates a negative
relationship.
The simulation results show that the mean spring coil diameter has the highest influence on the clearance between the spring and spigot wall. The outer and inner spigot wall diameters, ODpocket and IDpocket respectively, have a substantially lower influence. The result can be attributed to the difference in the magnitude of variation between the spring mean diameter and the spigot wall diameters. As established in Table 4.2, the characteristics of the manufacturing processes result in an expected variation in the spring mean diameter that is substantially higher than that in the spigot diameters.
Figure 4.8 ‐ Student chart of IDmeasure
(Note: A large positive effect size indicates a strong direct correlation;
negative values indicate an inverse relation)
Figure 4.9 ‐ Student chart of ODmeasure
(Note: A large positive effect size indicates a strong direct correlation;
negative values indicate an inverse relation)
4.5.6 Outcomes
The yield requirement for the assembly is 99.7% (Section 4.5.1) implying that the
specification limits must occur at 3 standard deviations from the distribution mean. Due to
the clearance requirements, the LSL for both of the KPCs, IDmeasure and ODmeasure, is zero. The USL has no restriction as it does not compromise functionality. The simulation results indicate a 97.3% conformance (27 assemblies out of 1000 result in no clearance) for IDmeasure. The conformance for ODmeasure is 97.0%. The required yield can however be achieved by adjusting the nominal spigot wall dimensions.
Due to the characteristics of the manufacturing process used, the change in standard
deviation of the spigot parameters due to a small modification of the nominal parameter
values is negligible. Table 4.3 indicates the nominal dimensions required to achieve the
target yield of 99.7% (i.e. LSL = mean ‐ 3σ) as inferred from the simulation outcomes.
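As a check of the required nominals in Table 4.3: with LSL = 0 and σ unchanged, the required KPC mean is simply 3σ. For IDmeasure this gives 3 × 0.086 = 0.258 mm and for ODmeasure 3 × 0.083 = 0.249 mm, corresponding to nominal shifts of 0.083 mm and 0.075 mm applied to the spigot wall dimensions IDpocket and ODpocket, respectively.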
Table 4.3 ‐ Initial and required nominal spigot wall dimensions based on simulated clearance measurements (all values in mm; initial values subscripted i, required values subscripted R).
Parameter | Meani | σi | Meani − 3σi | Meani + 3σi | MeanR | σR | MeanR − 3σR | MeanR + 3σR
IDpocket | 22.200 | 0.0303 | 22.109 | 22.291 | 22.283 | 0.0303 | 22.192 | 22.374
ODpocket | 30.750 | 0.0303 | 30.659 | 30.841 | 30.825 | 0.0303 | 30.734 | 30.916
IDmeasure | 0.175 | 0.086 | −0.083 | 0.433 | 0.258 | 0.086 | 0 | 0.516
ODmeasure | 0.174 | 0.083 | −0.075 | 0.423 | 0.249 | 0.083 | 0 | 0.498
4.5.7 Potential sources of error
Due to precision limitations of internal CAD software measurement tools as well as
geometry approximations due to finite‐element tessellation, errors are introduced into the
measured KPC values.
KPC values are reported by the measurement tools within the associated CAD environment.
These tools may have inherent precision limitations which contribute to uncertainty in the
reported outcomes. For the case study applied in this work, the CATIA measure tool was
used to measure clearances between assembled parts. Due to internal software
characteristics, the CATIA measure tool was able to provide only a close approximation of
the exact clearance. To quantify the level of approximation, a MC simulation consisting of
1000 samples was carried out to measure a known clearance. The differences between the
measured and known values (Figure 4.10) show that the error introduced by approximations of the CATIA measure tool (mean of 2.25 × 10−3 mm) is more than an order of magnitude smaller than the smallest associated tolerance (50.00 × 10−3 mm, Table 4.2) and does not significantly affect the case study results.
Figure 4.10 ‐ Histogram of clearance measurement error
Finite‐element tessellation, also known as faceting, denotes the discrepancy between the
finite‐element mesh and the actual geometry. This discrepancy can be mitigated by
increasing the number of mesh elements until the associated error is sufficiently low. A
comparison was made of the ideal and meshed spring models to determine the effect of
faceting. Figure 4.11 shows a visual comparison of the ideal and meshed models indicating
regions of difference equal to the smallest tolerance range used in the simulation (50.00 × 10−3 mm, Table 4.2). The result shows that the mesh density
results in minimal geometry approximation and does not significantly affect the outcomes
of the test case simulation.
Figure 4.11 ‐ Comparison of original and meshed spring geometry. Light shade indicates a difference of mesh geometry from original by 50.00 × 10−3 mm (smallest tolerance used in simulation)
4.6 Case study 4.2 ‐ Assembly design subject to both external and internal forces
4.6.1 Problem definition
This case study involves a tolerance analysis problem where product functionality is defined
by both external and internal forces (friction and multi‐body dynamics). A rotary switch and
spring loaded radial detent assembly (Figure 4.12) is intended to provide positional
restraint, with a certain resistive torque, for operational control in a Human Machine
Interface (HMI) capacity (such as a headlight control switch in an automobile). The model
shown is a simplified representation which omits details not affecting functionality. The
cylindrical detent is located in a positioning sleeve within which a helical compression spring
biases the cylindrical detent against the switch detent ramp faces.
The peak resistive torque is a KPC of the assembly. The resistive torque depends on:
the geometry of part features
internal forces due to part acceleration
external forces, including the spring force, contact forces between components, and the friction forces (dependent on the friction coefficients) between components in contact.
A sufficient resistive torque is required to provide ergonomically and functionally adequate
positional restraint while providing a positive impression of product quality for the user.
Excessive variation in the peak resistive torque of manufactured switch assemblies has a
negative impact on perceived product quality. The design variables considered in the
simulation are shown in Table 4.4.
The product requirements dictate a series of constraints:
A nominal peak resistive torque of 75 Nmm has been experimentally identified as
desirable for the intended application.
The nominal peak resistive torque specification limit has been set at 75 ± 7 Nmm with a process capability requirement of Cpm ≥ 1, i.e. 99.7% assembly yield.
The rotary switch, radially acting cylindrical detent and positioning sleeve are all
injection moulded polymer components.
The case study objective is to specify required process capability for the part parameters,
such that the peak resistive torque (KPC) specification requirements are achieved with an
assembly yield of 99.7% (Cpm ≥ 1). An increase in manufacturing process precision can be
accommodated if required to achieve the designated assembly yield.
Figure 4.12 ‐ Rotary switch and spring loaded radial detent assembly model used in Case study 4.2
(Note: Linear dimensions in mm. Variation in non‐enclosed dimensions not considered in simulation)
4.6.2 Sources of variation
The product under consideration comprises polymer injection mouldings and coiled wire, each with unique manufacturing variation characteristics, as discussed in Sections 4.5.2.1 and 4.5.2.2.
4.6.3 Variation data used in simulation
The relevant design variables considered in the simulation are shown in Table 4.4. The
component manufacturer specified tolerances on injection moulded components according
to specification limits recommended by DIN 16901, ((DIN) 1982). The specification limits on
the spring rate have been estimated from SAE HS‐795, (SAE 1997). The required process
capability for each parameter was initially set at Cpm = 1 and the resultant assembly yield was estimated by simulation.
Table 4.4 ‐ Case study 4.2 rotary switch assembly parameters and associated variation.
(Note: Switch and cylindrical detent specification limits are specified from DIN 16901.)
Component | Parameter | Description | Nominal | Spec. Limits +/‐ | Min. | Max. | Initial Cpm | Initial σ | Second Cpm | Second σ
Switch | Rswitch | Switch radius | 15.00 mm | 0.25 mm | 14.75 mm | 15.25 mm | 1.00 | 0.08 | 1.00 | 0.08
Switch | α | Angle of ramp face | 30.00° | 5.00° | 25.00° | 35.00° | 1.00 | 1.67 | 2.00 | 0.83
Switch | θ | Yaw angle of ramp face | 0.00° | 3.00° | −3.00° | 3.00° | 1.00 | 1.00 | 1.00 | 1.00
Spring | F | Spring preload | 2.00 N | 0.20 N | 1.80 N | 2.20 N | 1.00 | 0.07 | 2.00 | 0.03
Spring | k | Spring rate | 0.40 N/mm | 0.04 N/mm | 0.36 N/mm | 0.44 N/mm | 1.00 | 0.01 | 1.00 | 0.01
Cylindrical detent | Rball | Ball radius | 3.00 mm | 0.19 mm | 2.81 mm | 3.19 mm | 1.00 | 0.07 | 2.00 | 0.03
Cylindrical detent | µswitch | Switch‐detent dynamic friction coefficient | 0.150 | 0.020 | 0.130 | 0.173 | 1.00 | 0.008 | 1.00 | 0.008
Cylindrical detent | µslider | Slider‐detent dynamic friction coefficient | 0.150 | 0.020 | 0.123 | 0.173 | 1.00 | 0.008 | 1.00 | 0.008
4.6.4 Simulation model
A parametric numerical model of the switch assembly was constructed in MSC ADAMS
multi‐body dynamics modelling software (Figure 4.12). The model parametrically
accommodates the possible variation within geometric and physical parameters (such as
spring pre‐load, spring stiffness and friction coefficients).
Although a three‐dimensional assembly model was developed, the simulation was conducted as a two‐dimensional problem to reduce simulation time. The dimensionality of
the problem was reduced by constraining the position of the rotary switch and the
cylindrical dial with revolute and translational joints, respectively. Contact between the
cylindrical detent and the rotary switch was modelled using a solid‐to‐solid contact constraint involving Coulomb friction (nominal coefficient values: 0.25 static, 0.15 dynamic).
Individual simulation computation time was 14 seconds on a 1.86 GHz CPU. No directly
comparable algebraic model was identified; however, an approximate model was used to
confirm that the predicted results were of a similar magnitude (Canick 1959).
The numerical model was interfaced with a PIDO tool (ESTECO modeFRONTIER) according to
the developed platform (Figure 4.14). A variation database was subsequently initialised from
the obtained tolerance data (Table 4.4) and subjected to a MC simulation consisting of the
following automated stages:
1. Dimensional, spring and friction parameters of models were updated.
2. A rotational velocity of 30 degrees per second was imposed on the rotary switch and the
interaction of components simulated for 500 ms.
3. Peak and transient resistive torque were recorded.
Figure 4.13 shows the simulation output for transient resistive torque for 1000 assembly
variants. The peak values were used as a KPC for the assembly.
Figure 4.13 ‐ Transient resistive torque for 1000 assembly variants resulting from initial simulation
Figure 4.14 ‐ PIDO tolerance analysis workflow for Case study 4.2.
4.6.5 Simulation results and outcomes
4.6.5.1 Initial simulation
Monte Carlo sampling based UQ was conducted with 1000 assembly variant samples. A
database was generated of peak resistive torque measurements (KPC) for each assembly
variant. A normal distribution curve was subsequently fitted to the simulated KPC histogram
(Figure 4.15) and the expected process capability, Cpm, was calculated. The simulation results show that the required assembly yield requirements are not met (achieved Cpm = 0.62, required Cpm ≥ 1) with the initially specified process capability requirements for the designated part parameters (Table 4.4).
The peak resistive torque measurements are the result of an assembly response function
that intrinsically depends on the simulation input parameters. A Student t‐test was
conducted to quantify the contribution of part parameter variation to KPC variation. Figure
4.16 summarises the effects of part parameter variation on the KPC (Peak resistive torque).
The simulation results show that variation in the ramp face angle α and the spring preload F have the highest influence on variation in the peak resistive torque.
Figure 4.15 ‐ Histogram of peak resistive torques obtained from initial simulation
Figure 4.16 ‐ Student chart of peak resistive torque for initial simulation
(Note: A large positive effect size indicates a strong direct correlation;
negative values indicate an inverse relation)
4.6.5.2 Second simulation
Based on the outcome of the initial MC simulation, the required process capabilities for the
most influential parameters α and F were doubled (Table 4.4) to achieve the required
assembly yield. These values were used as input into a subsequent simulation.
A second UQ simulation was conducted with 1000 samples. The simulation results (Figure
4.17) show that the required assembly yield requirements are met (achieved Cpm = 1.11, required Cpm ≥ 1) with the adjusted process capability requirements for the part
parameters (Table 4.4). The adjusted process capability requirements reduced the
contribution of parameters α and F towards variation in the peak resistive torque as seen in
the Student chart in Figure 4.18.
Figure 4.17 ‐ Histogram of peak resistive torques for second simulation.
Figure 4.18 ‐ Student chart of peak resistive torque for second simulation
(Note: A large positive effect size indicates a strong direct correlation;
negative values indicate an inverse relation)
4.7 Summary of research outcomes
Managing the effects of manufacturing variation on product functionality is paramount to
achieving competitive quality, cost and development time targets. The functionality of
mechanical assemblies often depends on effects such as compliance and dynamics due to
the action of loads on the assembly; commonly external or internal forces. A review of
existing methods and tools for addressing tolerance analysis in assemblies subject to loads,
has identified a number of limitations, including:
Accommodation of only single load effect scenarios (such as sheet metal compliance or
welding‐distortion).
Need for significant additional expertise in formulating specific assembly tolerance
models and interpreting results.
Reliance on specific, custom simulation codes with limited implementation in practical
and accessible Computer Aided Tolerancing (CAT) software tools.
This chapter presented a tolerance analysis platform which overcomes these limitations by
providing the capability to accommodate the effects of assembly loads, through integration
of CAD, CAE and statistical analysis tools. This is achieved with the interdisciplinary
integration capabilities of Process Integration and Design Optimization (PIDO) tools.
To demonstrate the capabilities of the platform, case study tolerance analysis problems
involving assemblies subject to loads are presented. These include:
1. An assembly consisting of rigid and compliant components subject to external forces.
The tolerance analysis platform was applied to identify that for initially specified
tolerances the assembly yield was unacceptable. The nominal dimensions required to
achieve the desired assembly yield were subsequently identified.
2. An assembly in which functionality is defined by external forces and internal multi‐body
dynamics. The platform was applied to identify the tolerances required to achieve the
required assembly yield.
Key outcomes of this work are as follows:
The proposed platform extends the capabilities of traditional CAT tools and methods by
enabling tolerance analysis of assemblies which are dependent on loads such as external
and internal forces. Traditional CAT tools are not able to accommodate a general class of
problem involving assemblies under loading.
The ability to accommodate the effects of loading in tolerance analysis allows for an
increased level of capability in estimating the effects of variation on functionality.
The interdisciplinary integration capabilities of the PIDO based platform allow for CAD/E
models created as part of the standard design process to be used for parametric CAD/E
based tolerance analysis. The need for additional modelling tools and expertise is
subsequently reduced.
The platform allows the use of standalone CAE modelling tools (for example popular FE
modellers like ANSYS or ABAQUS), which offer sophisticated abilities in modelling the
effect of various loads on mechanical assemblies. As such, the application of the
platform can be extended to accommodate tolerance analysis of assemblies subject to a
more general class of loading (for example thermo‐mechanics or fluid flow) as well as challenging scenarios involving transient or non‐linear effects.
Despite the enabling capability of the proposed platform in accommodating tolerance
analysis of assemblies under general loading, limitations also exist. These limitations are
summarized below and presented in context in the identified sections:
Computational costs associated with CAD model updates (Section 4.4.2) and the
associated CAE simulations (Section 4.4.3) may be significant.
Parametric CAD may present some limitations (Section 4.4.2) in accommodating all
possible variation types defined in GD&T standards, and in representing realistic part
interactions for a given set of assembly constraints.
Approximation errors such as tessellation may be associated with FE models (for
example, Section 4.5.7).
Some CAD and CAE software tools may be limited in certain aspects of their ability to be
autonomously controlled through scripting capabilities.
The main limitation of the presented statistical tolerance analysis platform is the potentially
high computational cost associated with uncertainty quantification. Traditional UQ methods
such as Monte Carlo sampling require a large number of model evaluations to accurately
estimate statistical moments of the model response. For tolerance models which are
computationally demanding to evaluate (such as FE models of the effects of loads on
mechanical assemblies) a large number of model evaluations can significantly compound
the overall computational cost. Consequently, for demanding models, statistical tolerance
analysis of assemblies subject to loads may be computationally impractical with traditional
UQ methods. Additionally, manual iteration of tolerance analysis can be time consuming
and ineffective at identifying tolerances which optimally achieve cost and yield targets.
Tolerance synthesis can significantly improve this effectiveness by guiding the search for
optimal tolerances with an efficient optimization algorithm. However, tolerance synthesis
may require many iterations of tolerance analysis and the computational costs of solving
numerical tolerance models and UQ are compounded, making tolerance synthesis highly
impractical with traditional UQ methods.
Chapter 5 focuses on addressing the limitations of this platform associated with high
computational cost of tolerance analysis with traditional UQ methods by investigating the
use of recently developed analytical UQ methods with significantly higher efficiency.
5 PIDO BASED TOLERANCE SYNTHESIS IN
ASSEMBLIES SUBJECT TO LOADING USING
POLYNOMIAL CHAOS EXPANSION
5.1 Chapter summary
Statistical tolerance analysis and synthesis in assemblies subject to loading is of significant
importance to optimised manufacturing. Modelling the effects of loads on mechanical
assemblies in tolerance analysis typically requires the use of numerical CAE simulations. The
associated Uncertainty Quantification (UQ) methods used for estimating yield in tolerance
analysis must subsequently accommodate implicit response functions, and techniques such as Monte Carlo (MC) sampling are typically applied due to their robustness. Sampling
methods require a large number of iterations to accurately estimate associated statistical
moments. For non‐trivial scenarios, each iteration of the numerical simulation is typically
computationally expensive. Consequently, statistical tolerance analysis involving assembly
loads is often computationally impractical. Identifying optimum tolerances with tolerance
synthesis requires multiple iterations of tolerance analysis, further increasing the
computational costs and making tolerance synthesis highly impractical for demanding
tolerance models.
A variety of UQ methods have been proposed with potentially higher efficiency than MC
sampling. These offer the potential to increase the practical feasibility of tolerance analysis
and synthesis of assemblies subject to loading. This chapter investigates the feasibility of
Polynomial Chaos Expansion (PCE) in tolerance analysis for uncertainty quantification. A
previously developed Process Integration and Design Optimization (PIDO) tool based
tolerance analysis platform is further extended to allow multi‐objective tolerance synthesis
in assemblies subject to loading. The process integration, Design of Experiments (DOE) and
statistical data analysis capabilities of PIDO tools are combined with highly‐efficient UQ
methods for optimization of tolerances to maximize assembly yield while minimizing cost.
Industry‐focused case studies are presented which demonstrate that the application of PCE
based UQ to tolerance analysis and synthesis can significantly reduce computation time
while maintaining accuracy.
5.2 Introduction
Statistical tolerance analysis and synthesis based on traditional simulation methods is
computationally expensive for complex assemblies requiring numerical modelling, in
particular when assembly functionality is subject to the effects of loading. Assembly yield is
calculated from statistical moments traditionally estimated using sampling‐based
Uncertainty Quantification (UQ) methods such as Monte Carlo (MC) simulation. The
computational cost can be high due to poor efficiency of sampling‐based techniques.
Addressing a tolerance synthesis problem based on sampling‐based UQ further compounds
the computation cost as each trial of assembly tolerances requires a computationally
expensive MC based yield estimate. Consequently, tolerance synthesis within complex
assemblies is often a computationally impractical problem.
Alternative UQ methods have been proposed with significantly higher computational
efficiency than sampling‐based methods (Section 2.5). A broadly applicable method is
Polynomial Chaos Expansion (PCE) which is based on approximating the response function
of a stochastic system with orthogonal polynomial basis functions. Implementing the
method requires estimation of the polynomial basis function coefficients, for which several
approaches exist. Compared to MC simulation, PCE can offer significantly higher
computational efficiency with accurate estimation of statistical moments. As such, PCE
offers the potential to increase the practical feasibility of tolerance synthesis in complex
assemblies. However, the applicability of PCE is affected by:
the number of design parameters;
smoothness of the system response function;
required moment estimation accuracy;
choice of estimation method for basis function coefficients; and
allowable computational time.
This chapter investigates the feasibility of PCE based UQ in tolerance analysis and synthesis.
The feasibility is assessed through fundamental analysis and the practical evaluation of
tolerance synthesis case study problems. The PIDO tool based tolerance analysis platform
developed in Chapter 4 is extended to address a multi‐objective tolerance synthesis
problem in assemblies under loading. The process integration, DOE and statistical data
analysis capabilities of PIDO tools are combined with highly‐efficient PCE based UQ methods
for optimization of tolerances to maximize assembly yield while minimizing cost. Two
industry based tolerance synthesis case study problems involving assemblies under loading
are presented. These include:
1. Automotive seating rail assembly consisting of interlocked rail sections separated by a
series of rolling elements. Functional characteristics require that rolling resistance of the
rail assembly be within an ergonomically desirable range. The rolling resistance depends
on the compliance of the rail sections.
2. An automotive rotary switch in which a resistive actuation torque is provided by a spring
loaded radial detent acting on the perimeter of the switch body. Functional
characteristics require that the resistive switch actuation torque be within an
ergonomically desirable range.
5.3 Tolerance synthesis
Manual iteration of tolerance analysis can be slow and ineffective at identifying tolerances
which optimally achieve cost and yield targets. Tolerance synthesis can significantly improve
the manual iteration process by guiding the search for optimal tolerances with an
autonomous optimization algorithm. The aim of tolerance synthesis is to optimally allocate
part tolerances in a product assembly in order to maximize assembly yield and minimize
tolerance cost, within design and manufacturing constraints. Tolerance synthesis requires
an automated iteration of tolerance analysis: trial tolerances are initially analysed to
estimate the resultant assembly yield and cost associated with manufacturing to the
specified trial tolerances; a new set of tolerances is subsequently selected which aims to
achieve improved yield and cost performance in a successive tolerance analysis iteration.
The process is repeated until desired performance is achieved. Tolerance synthesis requires
the application of a number of analysis and simulation techniques, including: optimization
algorithms (Section 2.6.1); tolerance modelling methods (Section 2.4); UQ methods (Section
2.8) and tolerance cost estimation approaches (Section 2.2.3.1).
A number of tolerance synthesis methods have been proposed with a range of different
analysis techniques (Section 2.6) (Edel 1964; Spotts 1973; Michael 1981; Chase 1988; Wu
1988; Chase et al. 1990; Zhang et al. 1993; Skowronski et al. 1997; Jeang 1999; Cho et al.
2000; Choi et al. 2000; Hong et al. 2002; Ye et al. 2003; Shah et al. 2007).
A review of proposed tolerance synthesis methods has however revealed that the limitations of existing tolerance analysis methods, specifically the inability to comprehensively accommodate the effects of assembly loads (as previously identified in Chapter 4), also exist
in the tolerance synthesis domain. Since tolerance synthesis is an automated iterative
extension of tolerance analysis, the common limitations are to be expected.
By extending the tolerance analysis platform developed in Chapter 4 to incorporate
optimization and cost‐tolerance modelling, it is possible to facilitate tolerance synthesis of
assemblies subject to loads within the native modelling environment of CAD/E modelling
tools. Such an extended PIDO tolerance synthesis platform is presented in Section 5.4.
However, a significant obstacle to developing practical tolerance synthesis methods is the
impractically high computational cost associated with traditionally applied MC based UQ
methods, in combination with computationally expensive assembly tolerance models
(Section 2.6). Research efforts have focused on reducing the overall computational cost by
decreasing the solution time of the associated tolerance model with simplifying assumptions
(Hong et al. 2002; Singh et al. 2009). The introduced assumptions can however limit model fidelity and solution accuracy. Alternative UQ methods with significantly higher computational efficiency than sampling based methods have recently seen extensive
development (Section 2.5.3). Investigation of the utility of these methods in tolerance
analysis and synthesis has however been limited. A particularly efficient and attractive
analytical UQ method is Polynomial Chaos Expansion (PCE); however, its effectiveness in tolerance analysis and synthesis is unknown. This opportunity to significantly reduce the
cost of UQ in tolerance analysis, and thereby increase the practical feasibility of tolerance
synthesis in complex mechanical assemblies, is investigated in Section 5.7 and utilised in the
developed PIDO tolerance synthesis platform.
5.4 PIDO based tolerance synthesis
A PIDO tool based tolerance synthesis platform is presented in Figure 5.1. Tolerance
synthesis of assemblies subject to loading is enabled by an extension of the PIDO based
tolerance analysis platform developed in Chapter 4 (Section 4.4). The tolerance synthesis
platform is implemented procedurally with the stages defined below (additional detailed
discussion is presented in the following sections):
1. A tentative set of trial tolerances is initially selected and applied to an assembly model
for tolerance analysis.
2. UQ techniques estimate the statistical moments of assembly KPCs by evaluating the
response of a number of assembly models sampled from the design space defined by
the trial part tolerances.
3. The assembly response function is implicitly defined using a CAD and CAE based
parametric model. For non‐trivial assemblies (particularly assemblies under loading) the
assembly response is typically too difficult to define explicitly using an analytical model.
4. The assembly yield associated with the trial tolerance set is estimated from the
statistical moments of assembly KPCs. The associated tolerance cost is estimated using
cost‐tolerance curves.
5. An optimization algorithm guides the search for a set of tolerances which offer superior
performance in achieving both the objectives of maximising yield and minimising
tolerance cost. The estimated assembly yield and tolerance cost are compared to
targets. If yield targets are not achieved, a set of tolerances with increased precision is
selected.
6. The competing objectives of maximizing yield and minimizing tolerance cost result in a
Pareto set of non‐dominated designs rather than in a single optimum solution (Deb
2004). The non‐dominated set offers equivalently optimal performance, as superior
performance for one objective (yield) results in compromise for the other objective
(cost). The search can be terminated when an allowable computational time is reached,
or when the optimization problem has converged (when diversity in the objectives space
becomes rare with subsequent iterations). It is up to the designer to choose the
preferred set, which either favours one objective or offers a balance of both.
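As a brief illustration of the non‐dominated set concept in step 6, the following sketch filters candidate tolerance sets down to the Pareto front; the two objectives (tolerance cost and non‐conformance rate) are both minimised, and the candidate values are illustrative:

    def pareto_front(designs):
        # designs: list of tuples of objective values to minimise,
        # e.g. (tolerance cost, 1 - yield)
        def dominates(a, b):
            return (all(x <= y for x, y in zip(a, b))
                    and any(x < y for x, y in zip(a, b)))
        return [d for d in designs if not any(dominates(o, d) for o in designs)]

    candidates = [(10.0, 0.030), (12.0, 0.010), (15.0, 0.012)]
    print(pareto_front(candidates))  # (15.0, 0.012) is dominated by (12.0, 0.010)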
The tolerance synthesis platform developed in this work provides a comprehensive
approach to the tolerance synthesis problem by enabling:
Tolerance analysis of assemblies under loading.
Multi‐objective optimization of both yield and tolerance cost.
Large computational cost reduction in UQ enabled by Polynomial Chaos Expansion.
Application of robust DOE and optimization methods.
The elements unique to this research will be described in greater detail in the following
sections.
Figure 5.1 ‐ PIDO based tolerance synthesis platform. Extension of the PIDO based tolerance analysis
platform presented in Section 4.4. (Figure 4.2).
5.5 Quantification of quality and cost
Quality is defined by the degree to which a manufactured product achieves target values for
parameters of particular importance to functionality and performance (Section 2.2.2). The
parameters of particular importance are defined as KPCs (Section 3.2.4).
The costs associated with the quality of a product are generally attributable to quality
control and quality loss (i.e. a failure to control quality) (Section 2.2.2) (Phadke 1989;
Taguchi 1989; Feigenbaum 2012). In this work, the costs of controlling quality are attributed
to the cost of manufacturing to specified tolerances, and can be modelled using cost‐
tolerance functions (Section 5.5.1). Similarly, the costs of failure to control quality can be
attributed to the number of manufactured assemblies out of specification (for yield less
than 100%) which can be represented by quality loss functions and process capability
indices (Section 5.5.2). Achieving a balance between the costs associated with quality
control and quality loss promotes manufacturing efficiency (Juran 1992).
5.5.1 Cost‐tolerance modelling
Reducing manufacturing variation typically increases manufacturing costs due to demands
for higher precision machinery, increased number of manufacturing steps, and stricter
process control (Feigenbaum 2012). Various cost‐tolerance functions have been proposed
for representing the manufacturing cost to tolerance relationship for a range of
manufacturing processes (Section 2.2.3.1). The exponential function (Equation (5.5) and Figure 5.2 (i)) is especially useful due to its applicability to a range of scenarios and will be applied in this work (Wu et al. 1988; Chase et al. 1990; Dong et al. 1990). The exponential function represents the tolerance cost, g(T), for a specific tolerance, T, as:

g(T) = g0 + A e^(−B(T − T0)),  where Tmin ≤ T ≤ Tmax    (5.5)

Where:
Tmin and Tmax define an economically feasible tolerance range.
g0 and T0 define the minimum threshold cost and tolerance, respectively.
A and B are curve fitting parameters derived from experimental data.
Costs are specified in cost units.
The total tolerance cost of an assembly is the sum of the tolerance costs of the manufacturing processes associated with its individual constituent components (Figure 5.2 (ii)).
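A minimal sketch of this cost model is given below; the curve parameter values are illustrative placeholders rather than fitted data:

    import math

    def tolerance_cost(T, g0, A, B, T0):
        # Exponential cost-tolerance curve, Equation (5.5)
        return g0 + A * math.exp(-B * (T - T0))

    # (g0, A, B, T0) for each component's manufacturing process (illustrative)
    curves = [(1.0, 5.0, 30.0, 0.01), (0.5, 8.0, 45.0, 0.02)]
    trial_tolerances = [0.05, 0.04]  # trial tolerances [mm]

    total_cost = sum(tolerance_cost(T, *c)
                     for T, c in zip(trial_tolerances, curves))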
[Figure: (i) exponential cost‐tolerance curve showing the minimum threshold cost g0, the cost A + g0 at T0, and the feasible region between Tmin and Tmax; (ii) chain of cost‐tolerance curves for rough, standard finish and high precision finish operations.]
Figure 5.2 ‐ (i) Exponential cost‐tolerance relationship, (ii) Chain of cost‐tolerance curves for multiple
manufacturing processes of varying precision (V=3).
5.5.2 Quality loss and process capability
The cost associated with a failure to control product quality can be represented by a quality
loss function (QLF) (Taguchi 1989; Cho et al. 1997) (Section 2.2.3.2 and Equation A.56).
E[L] = k((µ − τ)² + σ²)    (A.56)

Where µ and σ² are the mean and variance, respectively, of a KPC with target τ, and k is a weighting constant.
The quality loss function increases with any deviation of the relevant KPC from its target
value (representing an associated cost). Both the variance and an offset of the mean from
the target value contribute to a loss in quality. The QLF has been successfully applied in a
number of tolerance synthesis problems (Jeang 1999; Cho et al. 2000; Choi et al. 2000; Feng
et al. 2001).
Similar metrics to the QLF are Process Capability Indices (PCI) which measure the
consistency and accuracy of manufacturing process outputs (Section 2.3.4). PCIs compare
the manufacturing process distribution to the specification limits and nominal target values.
Several PCIs have been proposed, however a particularly useful index is Cpm (Equation A.59), which measures the ability of a process to achieve a target nominal, τ, within the specification limits:

Cpm = (USL − LSL) / (6 √(σ² + (µ − τ)²))    (A.59)
The Cpm index was developed to capture a similar intent as the quality loss function while quantifying loss in terms of the intended specification limits. Cpm compares the specification limits to the 6σ limits of the manufacturing process distribution, i.e. 99.73% of the predicted
population (which results in a Cpm of unity). A higher process index indicates higher quality
(Feng et al. 1997; Feng et al. 1999).
Cpm is an efficient way of measuring quality loss using a dimensionless metric while giving a direct indication of the expected yield. Cpm will be applied as a metric of quality loss in this work.
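As a worked check of Equation (A.59), the measured spigot process in Table 4.1 (USL = 16.150 mm, LSL = 15.850 mm, τ = 16 mm, µ = 15.986 mm, σ = 0.0457 mm) gives Cpm = 0.3 / (6 √(0.0457² + 0.014²)) ≈ 1.05, matching the reported value.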
5.6 Yield estimation by uncertainty quantification
An estimate of assembly yield requires that the statistical moments of distributions of the
assembly KPCs be known. These can be estimated by Uncertainty Quantification (UQ)
methods. Uncertainty quantification is the process of determining the effects of stochastic
input uncertainties on the probabilistic response of a system. The probabilistic system response is characterized by its probability density function (distribution), commonly summarised by four statistical moments: the mean (μ), standard deviation (σ), skewness (γ) and kurtosis (β). A
number of UQ methods have been demonstrated in literature and can be classified as either
sampling based or analytical (Nigam et al. 1995; Lee et al. 2009). These alternative UQ methods
have been discussed in Section 2.8.
The application of specific UQ methods depends on the intent and constraints of the
problem under consideration, including:
Complexity of implementation (simulation based methods, for instance, are typically trivial to implement, in contrast to analytical methods such as PCE, which are more difficult to implement);
Computational cost and available computational budget;
Statistical moments of importance (typically in robust design problems such as tolerance
analysis, mean and standard deviation are of greater interest than higher order
moments.);
Required statistical moment estimation accuracy (for low precision tolerances, higher
moment estimation error may be acceptable);
Input variable distribution (some UQ methods may not be compatible with non‐normal
input parameter distributions);
The model type to be accommodated; where the system response function of the model
is either available analytically (explicit) or defined in a numerical model (implicit).
5.6.1 Sampling based UQ methods
Sampling based methods are robust and easily implemented. The most common and robust
sampling based UQ method is Monte Carlo (MC) simulation (Section 2.5.1.1). MC simulation
is based on estimating the probabilistic system response by aggregating the system outputs
for a set of input variables randomly sampled from their specific probability distributions.
MC simulation statistical moment estimates converge to the ideal result at a rate of O(N^(−1/2)), where N is the number of simulations. As MC convergence is independent of the problem dimensionality, smoothness of the response function, and type of probability distribution (Kuo 2005), MC simulation typically provides a performance
baseline for assessment of other UQ methods.
Another sampling based UQ method is Latin Hypercube (LHC) simulation (Section 2.5.1.2). In
contrast to MC simulations, LHC uses a constrained sampling approach aimed at avoiding
sample clustering and ensures relatively uniform distribution over the probability density
function range (McKay MD 1979; Keramat et al. 1997). However, the constrained sampling
can introduce unintended correlations among input variables and high order moments and
output probability distribution estimates may not be accurate.
Sampling methods typically show slow convergence and can be prohibitively
computationally expensive when each model evaluation involves lengthy numerical
simulations (Huntington et al. 1998).
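A small numerical sketch of this convergence behaviour is shown below; the response function is an arbitrary placeholder with a known mean:

    import numpy as np

    rng = np.random.default_rng(seed=0)
    response = lambda x: x**2 + 0.5 * x  # placeholder response function
    true_mean = 1.0                      # E[x^2 + 0.5x] for x ~ N(0, 1)

    for N in (100, 1000, 10000, 100000):
        x = rng.normal(size=N)
        error = abs(response(x).mean() - true_mean)
        print(N, error)  # error shrinks roughly in proportion to N**-0.5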
5.6.2 Analytical UQ methods
Sampling based methods such as MC have the advantage of being able to accommodate a
broad range of UQ problems as they are not founded on any simplifying assumptions.
However, this may also be a disadvantage as efficiency gains can be achieved if valid UQ
problem simplifications are possible due to, for instance, input parameters with a single
distribution type or smooth response functions. Analytical UQ methods utilize the extra
simplifying information available in such UQ problems to significantly improve convergence
efficiency.
Detailed comparisons of various analytical UQ methods have been presented in the
literature (Haldar et al. 2000; Wojtkiewicz et al. 2001; Eldred et al. 2008; Eldred et al. 2009;
Lee et al. 2009). Based on the outcomes of these comparative studies, Polynomial Chaos
Expansion (PCE) is considered to offer the most potential for application in tolerance
analysis and synthesis problems due to:
Non‐intrusive nature applicable to integration with existing CAD, CAE and PIDO tools;
High efficiency and accuracy;
Flexibility in accommodating various input parameter distributions;
Ability to accommodate high dimensionality problems;
Current high interest in the research community resulting in continual performance
improvements of PCE methods.
In this research, the PCE method is integrated into a PIDO based tolerance synthesis
platform, compared with the Monte Carlo method commonly applied in tolerance analysis,
and evaluated on practical case studies. This is achieved by:
A theoretical analysis of the PCE method identifying working principles, implementation
requirements, advantages and limitations (Sections 5.7 to 5.7.5);
Establishment of recommendations for PCE implementation in tolerance analysis, including
methods of PCE coefficient calculation and the approach to error estimation (Sections 5.7.6
and 5.7.7);
Novel implementation in a PIDO based tolerance synthesis platform (Section 4.4 and
Section 5.4);
Evaluation on practical tolerance synthesis case studies and validation against reference
MC results (Sections 5.8 and 5.9).
5.7 Polynomial Chaos Expansion (PCE)
Polynomial Chaos Expansion (PCE) is a method of estimating how input uncertainties in a
stochastic system manifest in its outputs, through representation of the system response
function using orthogonal polynomial expansions in stochastic variables ('chaos' denotes the
associated concept of uncertainty). The historical formulation of stochastic expansion UQ
methods such as PCE is founded on the utilization of mathematical concepts such as weak
convergence, orthogonality and projection, which are also fundamental to the development
of deterministic finite element analysis methods (Ghanem et al. 2003).
The orthogonal polynomials of PCE are based on the Wiener-Askey polynomial scheme, which
includes various types of orthogonal polynomial bases that are specifically matched to
particular probability distributions of the stochastic variables (Wiener 1938; Xiu et al. 2003).
A basis is a set of functions whose combination can be used to represent all functions in a
given function space. For example, every quadratic polynomial $ax^2 + bx + c$ can be
represented as a linear combination of the basis functions $1$, $x$ and $x^2$.
For stochastic variables with a normal distribution (as is common in the tolerancing and
quality control fields - Section 2.3.3), the orthogonal polynomial basis is formed by the
Hermite polynomials (Schoutens 2000). The associated density function is the standard normal
(Gaussian) distribution. The weighting functions applied to the polynomial basis are the
probability density functions describing the stochastic input variables. Other distribution
types can be accommodated with different weighting functions and corresponding
orthogonal polynomial bases (Xiu et al. 2003). The series is theoretically infinite, but is
truncated in practice; the highest degree of non-truncated polynomial denotes the order
of the expansion. The coefficients of the polynomial series may be determined with a
number of techniques (Section 5.7.5) and, once known, the desired statistical moments of
the system outputs can be rapidly obtained.
The PCE method offers the potential to be significantly more efficient than sampling based
UQ methods such as MC or LHC simulation, and can show exponential convergence of the
error in estimating the mean and standard deviation. Furthermore, the method can be
applied in a non-intrusive manner to problems where the system response function is
implicitly defined.
However, in contrast to the MC method, whose convergence is independent of the number of
parameters of the function under evaluation, the computational cost of PCE methods is
dependent on the problem dimensionality, the smoothness of the system response function,
the polynomial order, and the polynomial coefficient calculation method; these issues will
be considered in the following sections.
5.7.1 Unidimensional Polynomial Chaos Expansion - Derivation of moment expressions
For a stochastic variable $\xi$ with a standard normal distribution, the associated probability
density function can be written as:

$$w(\xi) = \frac{1}{\sqrt{2\pi}}\, e^{-\xi^2/2} \quad (5.7)$$

The nth statistical raw moment $\langle f^n(\xi) \rangle$ (i.e. moment about zero) of a real-valued continuous
function $f(\xi)$ is:

$$\langle f^n(\xi) \rangle = \int_{-\infty}^{\infty} f^n(\xi)\, w(\xi)\, d\xi \quad (5.8)$$

The first statistical raw moment (mean, $\mu$) of a system response function is therefore:

$$\mu = E[f(\xi)] = \langle f(\xi) \rangle = \int_{-\infty}^{\infty} f(\xi)\, w(\xi)\, d\xi \quad (5.9)$$

where $E[\cdot]$ is the expected value or population mean (the integral over all possible values of a
random variable, or any given function of it, multiplied by the respective probabilities of the
values of the variable).
Similarly, the variance follows from the second statistical raw moment:

$$\sigma^2 = E\!\left[(f(\xi) - \mu)^2\right] = \langle f^2(\xi) \rangle - \langle f(\xi) \rangle^2 \quad (5.10)$$

where the second equality follows from the linearity of expected values. The standard deviation
can therefore be expressed as:

$$\sigma = \sqrt{\langle f^2(\xi) \rangle - \langle f(\xi) \rangle^2} \quad (5.11)$$

The statistical moments can be determined analytically through direct integration if $f(\xi)$ is
known and not prohibitively complex. However, when the system response function is
complex, dependent on many variables, or not explicitly defined, analytical integration is
not possible. The statistical moments can, however, be determined through PCE.
Initially, the response function $f(\xi)$ is represented with a finite linear combination (5.12) of
a subset of polynomial basis functions (5.13), i.e.:

$$f(\xi) \approx \tilde{f}(\xi) \equiv \sum_{i=0}^{k} \alpha_i\, p_i(\xi) \quad (5.12)$$

where $\alpha_i$ are the polynomial basis coefficients and $k$ is the maximum order of the polynomial,

$$\{\, p_0(\xi),\, p_1(\xi),\, \ldots,\, p_k(\xi) \,\} \quad (5.13)$$

where $p_i$ is a polynomial of degree $i = 0, 1, 2, \ldots$ up to a maximum order $k$.
Substituting (5.12) into (5.8) and expanding the polynomial series, the nth statistical raw
moment of the response function can therefore be expressed as:

$$\langle \tilde{f}^n(\xi) \rangle = \int_{-\infty}^{\infty} \big( \alpha_0 p_0(\xi) + \alpha_1 p_1(\xi) + \cdots + \alpha_k p_k(\xi) \big)^n\, w(\xi)\, d\xi \quad (5.14)$$

The moment expression of (5.14) can be simplified by utilizing the unique and useful
properties of orthogonal functions. For instance, for $w(\xi)$ being normal (Gaussian), there
exists a unique family of orthogonal polynomials called the Hermite polynomials ($H_i$),
which have the following property for the inner product, in the vector space of real
functions:

$$\langle p_i, p_j \rangle \equiv \int_{-\infty}^{\infty} p_i(\xi)\, p_j(\xi)\, w(\xi)\, d\xi = 0 \quad \text{when } i \neq j \quad (5.15)$$

The Hermite polynomial series $H_i(\xi)$ is described by the recursion:

$$H_{i+1}(\xi) = \xi\, H_i(\xi) - i\, H_{i-1}(\xi), \quad \text{where } H_0(\xi) = 1,\ H_1(\xi) = \xi \quad (5.16)$$

Expanding the recursive formula of (5.16), the Hermite polynomials up to order ten can be
written as:

$$H_0 = 1$$
$$H_1 = \xi$$
$$H_2 = \xi^2 - 1$$
$$H_3 = \xi^3 - 3\xi$$
$$H_4 = \xi^4 - 6\xi^2 + 3$$
$$H_5 = \xi^5 - 10\xi^3 + 15\xi \quad (5.17)$$
$$H_6 = \xi^6 - 15\xi^4 + 45\xi^2 - 15$$
$$H_7 = \xi^7 - 21\xi^5 + 105\xi^3 - 105\xi$$
$$H_8 = \xi^8 - 28\xi^6 + 210\xi^4 - 420\xi^2 + 105$$
$$H_9 = \xi^9 - 36\xi^7 + 378\xi^5 - 1260\xi^3 + 945\xi$$
$$H_{10} = \xi^{10} - 45\xi^8 + 630\xi^6 - 3150\xi^4 + 4725\xi^2 - 945$$

Combining the property of (5.15) with the multinomial expansion of the moment expression
in (5.14), all cross terms containing inner products $\langle p_i, p_j \rangle$ with $i \neq j$ integrate to zero
(5.18, 5.19), so that for the second moment only the squared terms remain:

$$\langle \tilde{f}^2(\xi) \rangle = \sum_{i=0}^{k} \alpha_i^2\, \langle p_i, p_i \rangle \quad (5.20)$$

From (5.20) it is now possible to explicitly define expressions for the statistical moments of
$\tilde{f}(\xi)$ in terms of the polynomial basis coefficients $\alpha_i$, if the basis is orthogonal with respect
to $w(\xi)$ (e.g. for $w(\xi)$ being Gaussian, $p_i$ is given by the Hermite polynomials $H_i$).
The first statistical raw moment (mean) can be expressed as:

$$\mu = \langle \tilde{f}(\xi) \rangle = \alpha_0 \quad (5.21)$$

as $\langle p_0 \rangle = \langle 1 \rangle = 1$ and $\langle p_i \rangle = 0$ for every $i > 0$ due to orthogonality.
Likewise, the second statistical raw moment is:

$$\langle \tilde{f}^2(\xi) \rangle = \sum_{i=0}^{k} \alpha_i^2\, \langle p_i, p_i \rangle \quad (5.22)$$

Also, following from the inner product of two equal functions:

$$\langle p_i, p_i \rangle \equiv \| p_i \|^2 \quad (5.23)$$

where $\| p_i \|^2$ is the norm squared. Therefore:

$$\langle \tilde{f}^2(\xi) \rangle = \sum_{i=0}^{k} \alpha_i^2\, \| p_i \|^2 \quad (5.24)$$

Substituting (5.24) and (5.21) into (5.11), the standard deviation can now be expressed as:

$$\sigma = \sqrt{\sum_{i=1}^{k} \alpha_i^2\, \| p_i \|^2} \quad (5.25)$$

For the Gaussian case, the equation in (5.25) can be further simplified by again utilizing the
orthogonality of the Hermite polynomials and noting the following property:

$$\int_{-\infty}^{\infty} H_i^2(\xi)\, e^{-\xi^2/2}\, d\xi = \sqrt{2\pi}\; i! \quad (5.26)$$

Consequently, with $w(\xi)$ being Gaussian (5.7), the norm squared of the Hermite
polynomials equates to:

$$\| H_i \|^2 = \langle H_i, H_i \rangle = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} H_i^2(\xi)\, e^{-\xi^2/2}\, d\xi = i! \quad (5.27)$$

Substituting (5.27) into (5.25), the standard deviation can be expressed as:

$$\sigma = \sqrt{\sum_{i=1}^{k} \alpha_i^2\; i!} \quad (5.28)$$

From (5.21) and (5.28) it is possible to determine the mean and standard deviation if the
polynomial basis coefficients are known.
For example, by substituting (5.17) into (5.12), with Gaussian input variables and Hermite
basis polynomials, the polynomial chaos expansion for a unidimensional, fifth-order case
($d = 1$, $k = 5$) can be written as:

$$\tilde{f}(\xi) = \alpha_0 + \alpha_1 \xi + \alpha_2 (\xi^2 - 1) + \alpha_3 (\xi^3 - 3\xi) + \alpha_4 (\xi^4 - 6\xi^2 + 3) + \alpha_5 (\xi^5 - 10\xi^3 + 15\xi) \quad (5.29)$$

From (5.21), the mean can then be expressed as $\mu = \alpha_0$. Similarly, from (5.28) the standard
deviation is given by $\sigma = \sqrt{\alpha_1^2\, 1! + \alpha_2^2\, 2! + \alpha_3^2\, 3! + \alpha_4^2\, 4! + \alpha_5^2\, 5!}$.
Methods for calculating the polynomial coefficients ($\alpha_i$) are discussed in Section 5.7.5.
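The two closed-form results above are easily checked numerically. The following sketch, with
purely illustrative coefficient values, evaluates a fifth-order Hermite expansion at a large
normal sample and compares the sampled moments against (5.21) and (5.28).

```python
# Numerical sanity check of (5.21) and (5.28) for a 5th-order Hermite PCE.
# The coefficient values are illustrative only.
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval  # probabilists' Hermite He_i

alpha = np.array([2.0, 0.5, -0.3, 0.1, 0.05, -0.02])

mean_pce = alpha[0]                                            # (5.21)
std_pce = math.sqrt(sum(a**2 * math.factorial(i)               # (5.28)
                        for i, a in enumerate(alpha) if i > 0))

# Monte Carlo reference: sample xi ~ N(0,1) and evaluate the expansion
xi = np.random.default_rng(0).standard_normal(1_000_000)
f = hermeval(xi, alpha)       # evaluates sum_i alpha_i He_i(xi)

print(f"mean: PCE {mean_pce:.4f}  MC {f.mean():.4f}")
print(f"std:  PCE {std_pce:.4f}  MC {f.std():.4f}")
```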
5.7.2 Multidimensional PCE
When considering a multidimensional UQ problem, it is possible to represent the system
response using a multidimensional polynomial chaos expansion in the stochastic variables
$\boldsymbol{\xi} = (\xi_1, \xi_2, \ldots, \xi_d)$:

$$f(\boldsymbol{\xi}) \approx \sum_{i=0}^{\infty} \alpha_i\, \Psi_i(\xi_1, \xi_2, \ldots, \xi_d) \quad (5.30)$$

where the $\Psi_i$ are multidimensional basis polynomials formed from products of the
unidimensional basis polynomials. As in the unidimensional case, the multidimensional
expansion is truncated in practice at a finite order $k$ and is applied to a finite dimension $d$, i.e.:

$$\tilde{f}(\boldsymbol{\xi}) = \sum_{i=0}^{P-1} \alpha_i\, \Psi_i(\xi_1, \xi_2, \ldots, \xi_d) \quad (5.31)$$

The total number of terms $P$ in a finite multidimensional polynomial chaos expansion is:

$$P = \frac{(d+k)!}{d!\; k!} \quad (5.32)$$

Following the same procedure as for the unidimensional case leads to expressions for the
first and second order moments. Again, due to the orthogonality of the polynomial basis, the
integral of all basis functions is zero except for the first constant term.
Hence the mean for the multidimensional PCE case is again given by (5.21).
The expression for the standard deviation for the multidimensional case becomes:

$$\sigma = \sqrt{\sum_{i=1}^{P-1} \alpha_i^2\, \langle \Psi_i, \Psi_i \rangle} \quad (5.33)$$

For Hermite basis polynomials, the norm squared of each multidimensional basis polynomial
is the product of the factorials of the orders of its constituent unidimensional polynomials, so
that the standard deviation for the multidimensional case can be expressed as:

$$\sigma = \sqrt{\sum_{i=1}^{P-1} \alpha_i^2 \prod_{j=1}^{d} (m_{i,j})!} \quad (5.34)$$

where $m_{i,j}$ is the order of the unidimensional polynomial corresponding to dimension $j$
within the multidimensional basis polynomial $\Psi_i$.
Assuming that the variables are independently distributed, the multidimensional chaos
expansion is formed by the product of the unidimensional polynomials corresponding to
each stochastic variable.
For example, for a two dimensional, second order case ($d = 2$, $k = 2$) with Gaussian input
variables and Hermite polynomials, the multidimensional basis polynomials are:

$$\Psi_0(\xi_1, \xi_2) = 1$$
$$\Psi_1(\xi_1, \xi_2) = \xi_1$$
$$\Psi_2(\xi_1, \xi_2) = \xi_2 \quad (5.35)$$
$$\Psi_{11}(\xi_1, \xi_2) = \xi_1^2 - 1$$
$$\Psi_{12}(\xi_1, \xi_2) = \xi_1 \xi_2$$
$$\Psi_{22}(\xi_1, \xi_2) = \xi_2^2 - 1$$

Substituting (5.35) into (5.31), the multidimensional polynomial chaos expansion can be
written as:

$$\tilde{f}(\xi_1, \xi_2) = \alpha_0 + \alpha_1 \xi_1 + \alpha_2 \xi_2 + \alpha_{11}(\xi_1^2 - 1) + \alpha_{12}\, \xi_1 \xi_2 + \alpha_{22}(\xi_2^2 - 1) \quad (5.36)$$

Now, from (5.21), the mean can then be expressed as $\mu = \alpha_0$. Similarly, from (5.34) the
standard deviation is given by $\sigma = \sqrt{\alpha_1^2\, 1! + \alpha_2^2\, 1! + \alpha_{11}^2\, 2! + \alpha_{12}^2\, 1!\, 1! + \alpha_{22}^2\, 2!}$.
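The bookkeeping of (5.32) and (5.34) can be made concrete with a few lines of Python. The
multi-index enumeration below is a sketch of the total-order truncation, not the
implementation used by any particular UQ library.

```python
# Sketch of (5.32) and (5.34): term count of a total-order expansion, and
# norm squared of each multivariate Hermite basis polynomial.
from math import comb, factorial
from itertools import product

def pce_terms(d, k):
    """Number of terms P in an order-k, d-dimensional expansion, per (5.32)."""
    return comb(d + k, k)

def multi_indices(d, k):
    """All multi-indices (m_1, ..., m_d) with total degree <= k."""
    return [m for m in product(range(k + 1), repeat=d) if sum(m) <= k]

def norm_sq(m):
    """Norm squared of He_m1 * ... * He_md: the product of factorials in (5.34)."""
    out = 1
    for mj in m:
        out *= factorial(mj)
    return out

print(pce_terms(2, 2))                      # 6, matching the d=2, k=2 example
print([norm_sq(m) for m in multi_indices(2, 2)])
assert len(multi_indices(2, 2)) == pce_terms(2, 2)
```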
5.7.3 Higher order moments
Expressions for the higher order moments of skewness and kurtosis may also be derived;
however, the analytical expressions become quite complex (Berveiller et al. 2006). As such,
higher order moments are often estimated by sampling the PCE. In the scope of tolerance
analysis and synthesis, the interest in UQ is predominantly on evaluation of the low order
moments. Furthermore, it is important to note that PCE methods are only guaranteed to
converge to analytically exact values of the first two moments (mean and variance), and
estimation of higher order moments may be erroneous (Xiu et al. 2003). This limitation is
not overly restrictive in the tolerancing and quality fields, as interest is typically in the mean
and standard deviation of a product parameter or process.
5.7.4 Non‐normal distributions and correlated variables
Variables with non-normal distributions can be accommodated with the generalized
Polynomial Chaos Expansion (gPCE) method (Xiu 2003). The formulation of gPCE closely
resembles that shown in Sections 5.7.1 and 5.7.2; however, a different choice of orthogonal
polynomial basis and weighting functions is used to accommodate non-normal variables.
The gPCE approach uses the Wiener-Askey scheme, in which Hermite, Jacobi, Laguerre and
Legendre orthogonal basis polynomials are used to accommodate stochastic variables with a
range of different distributions (variables of mixed distribution types in the same UQ
problem can also be accommodated). Table 5.1 shows the appropriate polynomial basis and
weighting function for various parameter distributions. Additional detail is available in the
literature (Xiu 2003; Wan et al. 2007; Eldred et al. 2008).
Table 5.1 - Generalized polynomial chaos expansion (gPCE) basis and weighting functions for various parameter
distributions (Xiu et al. 2003; Eldred et al. 2008)

Distribution | Probability density function | Orthogonal basis polynomials | Weighting function
Normal (Gaussian); bounded normal, lognormal, bounded lognormal, Gumbel, Frechet, Weibull | $\frac{1}{\sqrt{2\pi}} e^{-x^2/2}$ | Hermite | $e^{-x^2/2}$
Uniform, loguniform, triangular | $\frac{1}{2}$ | Legendre | $1$
Beta | $\frac{(1-x)^{\alpha}(1+x)^{\beta}\, \Gamma(\alpha+\beta+2)}{2^{\alpha+\beta+1}\, \Gamma(\alpha+1)\, \Gamma(\beta+1)}$ | Jacobi | $(1-x)^{\alpha}(1+x)^{\beta}$
Exponential | $e^{-x}$ | Laguerre | $e^{-x}$
Gamma | $\frac{x^{\alpha} e^{-x}}{\Gamma(\alpha+1)}$ | Generalized Laguerre | $x^{\alpha} e^{-x}$
Another approach for accommodating non‐normal variables is based on transformation
techniques, such as Nataf or Box‐Cox, which transform various distributions into standard
normal type (Box et al. 1964; Armen Der Kiureghian et al. 1986; McRae et al. 1995).
Comparisons between the gPCE and transformation approaches show that gPCE results in
higher moment estimation accuracy, whereas the transformation approach is simpler to
implement (Choi et al. 2004).
If the input variables associated with PCE formulation are correlated, application of the PCE
method requires that they are first transformed into independent uncorrelated variables;
specific details are given in (Berveiller et al. 2006).
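As a minimal illustration of the transformation approach, the sketch below maps a lognormal
input (with hypothetical parameter values) to standard normal space, in which a Hermite
basis PCE can then be constructed.

```python
# Minimal sketch of the transformation approach for one non-normal input:
# a lognormal variable is mapped to standard normal space so that a Hermite
# basis PCE can be applied. Parameter values are illustrative only.
import numpy as np

mu_ln, sigma_ln = 0.1, 0.25          # hypothetical lognormal parameters

def to_standard_normal(x):
    """Map a lognormal sample x to standard normal space."""
    return (np.log(x) - mu_ln) / sigma_ln

def from_standard_normal(xi):
    """Map a standard normal sample back to the lognormal variable."""
    return np.exp(mu_ln + sigma_ln * xi)

xi = np.random.default_rng(2).standard_normal(100_000)
x = from_standard_normal(xi)         # lognormal samples for the model input
# The PCE is then built in terms of xi, with the model evaluated at x(xi).
```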
5.7.5 Methods for calculating PCE coefficients
The polynomial chaos expansion coefficients may be determined by Collocation (Section
5.7.5.1) or Stochastic Projection (Section 5.7.5.2) techniques. All the approaches discussed in
this work can be applied in a non‐intrusive manner to problems where the system response
function is implicitly defined.
5.7.5.1 Collocation
In general, collocation refers to a solution procedure for integral or differential response
functions based on:
1. Defining a number of possible candidate solutions to the response function (such as a
series of polynomials).
2. Evaluating the response function at a number of trial points (referred to as collocation
points).
3. Selecting the solution from the defined candidates which best matches the response
function at the evaluated collocation points.
Point Collocation (PC) is a strategy for determining the PCE coefficients ($\alpha_i$) through sampling
the response function (for the unidimensional case equation (5.12), or for the
multidimensional case equation (5.31)) at a number of input parameter values. For the
unidimensional case, $f(\xi)$ is sampled at a number of arbitrary collocation points
$\xi^{(1)}, \xi^{(2)}, \ldots, \xi^{(N)}$ selected with a sampling approach such as Monte Carlo or Latin hypercube
sampling (Hosder et al. 2007). The result is a linear system of equations. For example,
sampling the unidimensional PCE (5.12) results in:

$$\begin{aligned}
f(\xi^{(1)}) &= \alpha_0\, p_0(\xi^{(1)}) + \alpha_1\, p_1(\xi^{(1)}) + \cdots + \alpha_k\, p_k(\xi^{(1)}) \\
f(\xi^{(2)}) &= \alpha_0\, p_0(\xi^{(2)}) + \alpha_1\, p_1(\xi^{(2)}) + \cdots + \alpha_k\, p_k(\xi^{(2)}) \\
&\ \vdots \\
f(\xi^{(N)}) &= \alpha_0\, p_0(\xi^{(N)}) + \alpha_1\, p_1(\xi^{(N)}) + \cdots + \alpha_k\, p_k(\xi^{(N)})
\end{aligned} \quad (5.37)$$

which can be more conveniently written in matrix form:

$$\begin{bmatrix} f(\xi^{(1)}) \\ f(\xi^{(2)}) \\ \vdots \\ f(\xi^{(N)}) \end{bmatrix} =
\begin{bmatrix}
p_0(\xi^{(1)}) & p_1(\xi^{(1)}) & \cdots & p_k(\xi^{(1)}) \\
p_0(\xi^{(2)}) & p_1(\xi^{(2)}) & \cdots & p_k(\xi^{(2)}) \\
\vdots & \vdots & \ddots & \vdots \\
p_0(\xi^{(N)}) & p_1(\xi^{(N)}) & \cdots & p_k(\xi^{(N)})
\end{bmatrix}
\begin{bmatrix} \alpha_0 \\ \alpha_1 \\ \vdots \\ \alpha_k \end{bmatrix} \quad (5.38)$$
For the unidimensional case, with $w(\xi)$ being Gaussian, the $p_i$ terms correspond to the
Hermite polynomials, i.e. $p_i(\xi) = H_i(\xi)$, where the $H_i$ are given by (5.16) and (5.17). A
similar set of equations can be written for the multidimensional case.
As the response function values $f(\xi^{(j)})$ and basis polynomial terms $p_i(\xi^{(j)})$ are known, by
solving the system of linear equations of (5.38) the polynomial chaos coefficients can be
determined (Hosder et al. 2007). PC is essentially a form of regression analysis, in that it aims
to determine the relationship between a dependent variable (in this case $f(\xi)$) and a
series of independent variables (here the $p_i(\xi)$) formulated in an equation in which the
independent variables have parametric coefficients (i.e. the $\alpha_i$). The minimum number of
collocation points corresponds to the number of terms $P$ in the polynomial chaos expansion
(5.32); however, oversampling is generally recommended and oversampling factors of $s = 2$
have been suggested (Hosder et al. 2007; Xiu 2010). Oversampling does not change the
number of polynomial coefficients. As such, the number of collocation points is given by:

$$N = s\, \frac{(d+k)!}{d!\; k!} \quad (5.39)$$

where $s$ is an oversampling factor.
Table 5.2 indicates the number of required sampling points for a given dimensionality and
polynomial order.

Table 5.2 - Minimum number of simulations N required for point collocation based PCE with various expansion
orders k and dimensionalities d. Oversampling ratio s = 2 (as recommended in (Hosder et al. 2007)).

        d:   1    2     3     4     5      10      15       20        50
k = 1        4    6     8    10    12      22      32       42       102
k = 2        6   12    20    30    42     132     272      462     2,652
k = 3        8   20    40    70   112     572   1,632    3,542    46,852
k = 4       10   30    70   140   252   2,002   7,752   21,252   632,502
k = 5       12   42   112   252   504   6,006  31,008  106,260  6.96E+06
As oversampling is recommended, for $s > 1$ the resulting system of linear equations of
(5.38) becomes overdetermined (i.e. there are more equations than unknowns) and can be
solved using the method of least squares.
The method of least squares is based on the selection of polynomial coefficients which
minimize the sum of the squares of the difference ($\Delta$) between the expansion of order $k$ and
the response $f(\xi)$, over the set of collocation points $\xi^{(1)}, \xi^{(2)}, \ldots, \xi^{(N)}$, e.g. for the
unidimensional case:

$$\Delta = \sum_{j=1}^{N} \left( f(\xi^{(j)}) - \sum_{i=0}^{k} \alpha_i\, p_i(\xi^{(j)}) \right)^{2} \quad (5.40)$$
Various solution procedures for least squares problems have been documented and can be
found in existing literature (Lawson et al. 1974; Björck 1996). A PCE specific solution to the
multidimensional least squares problem is presented in (Berveiller et al. 2006).
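A minimal point collocation sketch follows; a toy smooth response stands in for the implicit
CAE model, and the oversampled least squares solve corresponds to (5.38) to (5.40).

```python
# Sketch of point collocation for a unidimensional, order-k Hermite PCE.
# The toy response is a stand-in for an implicit CAD/CAE model evaluation.
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def collocate(f, k=5, s=2, seed=0):
    n = s * (k + 1)                        # oversampled points, (5.39) with d=1
    xi = np.random.default_rng(seed).standard_normal(n)  # MC collocation points
    A = hermevander(xi, k)                 # A[j, i] = He_i(xi_j), as in (5.38)
    alpha, *_ = np.linalg.lstsq(A, f(xi), rcond=None)     # least squares, (5.40)
    return alpha

alpha = collocate(lambda x: np.exp(0.3 * x))   # toy smooth response
print("mean estimate per (5.21):", alpha[0])
```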
For highly dimensional problems, the point collocation method requires a large number of
model evaluations (Table 5.2). Furthermore, the least squares approach for calculation of
the expansion coefficients may have prohibitively high computational costs (Eldred et al.
2008). Consequently, for large multidimensional problems, calculation of expansion
coefficients based on stochastic projection using sparse grid methods offers superior
performance (Section 5.7.5.4).
5.7.5.2 Stochastic projection
These methods are based on projecting the response function from (5.12) or (5.31) against
each basis function using inner products. The orthogonality between the polynomial
expansion basis functions and the weighting functions simplifies the determination of the
coefficients. This approach is known as Galerkin projection (Xiu 2007). For the
unidimensional case:
$$\langle f, p_j \rangle = \left\langle \sum_{i=0}^{k} \alpha_i\, p_i,\ p_j \right\rangle = \alpha_j\, \| p_j \|^2 \quad \text{due to (5.23) and the properties of inner products} \quad (5.41)$$

For an individual coefficient, it can be written as:

$$\langle f, p_i \rangle = \alpha_i\, \| p_i \|^2 \quad (5.42)$$

Rearranging:

$$\alpha_i = \frac{\langle f, p_i \rangle}{\| p_i \|^2} \quad (5.43)$$

As the denominator in (5.43) is the norm squared and can be solved analytically (as per
equation (5.27)), determining the polynomial coefficients is now possible by integrating over
the bounds of the weighting function which, for $w(\xi)$ being Gaussian and given by
(5.7), has the support range $(-\infty, \infty)$:

$$\langle f, p_i \rangle = \int_{-\infty}^{\infty} f(\xi)\, p_i(\xi)\, w(\xi)\, d\xi \quad (5.44)$$

This integration can be carried out numerically using:
Sampling (Hosder et al. 2007)
Complete product grid numeric quadrature (Xiu 2007)
Sparse grid based numeric quadrature (Section 5.7.5.4)
Integration through sampling is based on applying techniques such as Monte Carlo (Kalos et
al. 2009) based integration to equation (5.44). Advantages of this approach include
independence of dimensionality, as well as the ability to accommodate integrand functions
which are not smooth. However, as the convergence rates of sampling methods are
generally slow (section 5.6.1) a large number of samples will be required for low error,
limiting the applicability of the approach.
5.7.5.3 Complete product grid quadrature
Numerical quadrature refers to one-dimensional numerical integration rules. The more
advanced of these are interpolatory rules, which sample the integrand function at a
number of selected points to construct and integrate a less complex polynomial
interpolation function (Atkinson 2009). The polynomial is constructed over a region $[a, b]$ of
the integrand $f(x)$, from monomials (powers of $x$, shown as a set in (5.45)) with non-
negative coefficients (referred to as weights):

$$\{\, 1,\ x,\ x^2,\ x^3,\ \ldots,\ x^n \,\} \quad (5.45)$$

Representing $f(x)$ by a series of monomials can be achieved by progressively evaluating the
function at a number of points $x_i$. The points are also referred to as abscissae, as they are
typically represented as points along the horizontal axis of a one-dimensional function plot.
Evaluating at two points provides the coefficients of the first two monomials (corresponding
to a linear representation of $f(x)$). Evaluating a third point provides coefficients for three
monomials and a quadratic representation of $f(x)$, i.e.:

$$f(x) \approx c_0 + c_1 x + c_2 x^2 \quad (5.46)$$

The resulting interpolatory quadrature rule takes the general form:

$$\int_a^b f(x)\, dx \approx \sum_{i=1}^{n} w_i\, f(x_i) \quad (5.47)$$

where $x_i$ are the integration points and $w_i$ the associated weights.
Other rules include Gauss quadrature, which uses optimal, unequally spaced integration
points with specific weights to increase precision. The specific integration point and weight
selection depends on the form of the integrand, resulting in a class of differently tailored
Gauss quadrature rules, one of which is Gauss-Hermite quadrature, applicable to integrals of
the form:

$$\int_{-\infty}^{\infty} e^{-\xi^2/2}\, f(\xi)\, d\xi \approx \sum_{i=1}^{n} w_i\, f(\xi_i) \quad (5.48)$$

where $n$ is the number of sampling points, $w_i$ are weights, and $\xi_i$ are sampling points.
By using specifically weighted interpolating polynomials that are orthogonal to the weighting
function, and integration points that are the roots of the orthogonal polynomials, it is
possible for Gauss quadrature rules to achieve a precision level of $2n - 1$; that is, to exactly
integrate all polynomials up to degree $2n - 1$ using only $n$ integration points (for a complete
derivation see (Kovvali 2011)). With $w(\xi)$ being Gaussian, the polynomials orthogonal to the
weighting function in (5.48) are the Hermite polynomials (5.17), where the integration rule
domain is conventionally taken as $(a, b) = (-\infty, \infty)$. Since (5.48) is of the form of equation
(5.44), Gauss-Hermite quadrature can be used effectively in the stochastic projection based
method of determining the PCE coefficients (Section 5.7.5.2). Integration of functions with
non-normally distributed variables, as in a gPCE UQ problem (Section 5.7.4), is possible by
applying specifically tailored Gauss-Legendre (uniformly distributed variables), Gauss-Jacobi
(Beta distributed) or Gauss-Laguerre (exponentially or gamma distributed) quadrature rules
(Epperson 2007).
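The projection of (5.43) and (5.44) can be sketched with a unidimensional Gauss-Hermite
rule as follows; again, a toy smooth response stands in for the implicit model.

```python
# Sketch of stochastic projection, per (5.43) and (5.44), using a
# unidimensional Gauss-Hermite quadrature rule.
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def project(f, k=5, n_quad=8):
    # An n_quad-point Gauss rule integrates polynomials of degree <= 2*n_quad - 1
    x, w = hermegauss(n_quad)          # abscissae/weights for weight e^(-x^2/2)
    fx = f(x)
    alpha = np.empty(k + 1)
    for i in range(k + 1):
        e_i = np.zeros(i + 1); e_i[i] = 1.0
        He_i = hermeval(x, e_i)        # He_i evaluated at the abscissae
        # alpha_i = <f, He_i> / ||He_i||^2, with ||He_i||^2 = i!, per (5.27)
        alpha[i] = (w @ (fx * He_i)) / math.sqrt(2 * math.pi) / math.factorial(i)
    return alpha

alpha = project(lambda x: np.exp(0.3 * x))   # toy smooth response
print("mean per (5.21):", alpha[0])
```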
It is possible to extend a unidimensional quadrature rule to a multidimensional integration
problem by taking a product of unidimensional quadrature rules for each dimension. This
multidimensional product rule is formed from a tensor product of unidimensional
quadrature rules $Q_{l_j}$, with associated level $l_j$, for each variable $j = 1 \ldots d$. The level is an
integer index which designates the unidimensional quadrature rule in the family of
possible rules. The multidimensional product rule is defined as:

$$\left( Q_{l_1} \otimes \cdots \otimes Q_{l_d} \right)(f) = \sum_{i_1=1}^{n_1} \cdots \sum_{i_d=1}^{n_d} w_{i_1} \cdots w_{i_d}\; f\!\left( \xi_{i_1}, \ldots, \xi_{i_d} \right) \quad (5.49)$$

The tensor product of (5.49) is effectively a sum over all possible combinations of the terms
in the unidimensional quadrature rules. An increase in the level $l$ is associated with an
increase in the number of integration points $n$ and the precision level of the integration
rule; the relationship between these variables is detailed in Section 5.7.5.4.
The multidimensional product rule effectively forms a $d$-dimensional product grid of
integration points corresponding to all monomial product combinations of the variables.
The total number of monomials in a multidimensional product grid is:

$$N = \prod_{j=1}^{d} n_j \quad (5.50)$$

For an isotropic product grid where $n_1 = n_2 = \cdots = n_d = n$, the total number of monomials
is:

$$N = n^d \quad (5.51)$$

For increasing dimensionality, the required number of monomials for a multidimensional
product rule (5.50) can become very large even for a low number of integration points on
the individual variables (and a correspondingly low precision). For example, for $d = 10$ and
$n_1 = \cdots = n_{10} = 3$, the number of monomials is $3^{10} = 59{,}049$.
grid (5.51), the number of required integration points grows exponentially with the
dimension. As the number of grid points corresponds to the required number of evaluations
of the complex function (in this work that being the typically computationally expensive
implicit CAE model of the assembly under tolerance analysis), for larger multidimensional
problems, complete product grid quadrature becomes very computationally expensive.
However, it has been found that not all the monomials in a complete product grid are
required, and more efficient multidimensional quadrature techniques are available.
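The exponential growth of (5.51) is easy to demonstrate by forming the full tensor grid of a
3-point rule in increasing dimensions:

```python
# Illustration of complete product grid growth per (5.51): the same 3-point
# Gauss-Hermite rule in every dimension.
from itertools import product
from numpy.polynomial.hermite_e import hermegauss

pts_1d, _ = hermegauss(3)                    # 3-point unidimensional rule
for d in (2, 5, 10):
    grid = list(product(pts_1d, repeat=d))   # full product grid
    print(d, len(grid))                      # 9, 243, 59049 = 3**d
```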
5.7.5.4 Sparse grid quadrature
The precision of a multidimensional quadrature product rule is limited by the total degree of
the monomials in the product grid, where the total degree is the sum of the powers of the
component variables in a monomial (e.g. the total degree of the monomial $x_1^3 x_2^5$ is 8, which
is the sum of the component degrees) (Bungartz et al. 2004). Achieving a desired precision
$m$ in each dimension requires only monomials for which the total degree does not
exceed $m$ (Smolyak 1963). Table 5.3 shows the monomials for a $d = 2$ product rule with
excess monomials highlighted (excess monomials do not add to precision). As the
dimensionality increases, the excess monomials exponentially dominate the product grid.
Table 5.3 - Monomials for a d = 2 complete product grid with degrees 0-5 in each variable.
Entries show the total degree of the monomial $x_1^i x_2^j$; entries marked * are excess
monomials (total degree exceeding 5) which do not add to precision.

deg(x2) = 5 | 5   6*  7*  8*  9*  10*
deg(x2) = 4 | 4   5   6*  7*  8*  9*
deg(x2) = 3 | 3   4   5   6*  7*  8*
deg(x2) = 2 | 2   3   4   5   6*  7*
deg(x2) = 1 | 1   2   3   4   5   6*
deg(x2) = 0 | 0   1   2   3   4   5
   deg(x1):   0   1   2   3   4   5
Based on initial work by Smolyak (Smolyak 1963), methods were developed for eliminating
most of the excess monomials by combining low-order (unidimensional or
multidimensional) complete product grids to form sparse grids (SG) for high-order
problems, in which the monomial total degree never exceeds a desired precision (Gerstner
et al. 1998; Gerstner et al. 2003).
A sparse grid $\mathcal{A}(w, d)$ of level $w$ (for $w \ge 0$) and dimension $d$ is a weighted, linear
combination of tensor products of unidimensional quadrature rules (equation (5.49)).
The sparse grid level $w$ is an index variable which designates the sparse grid from the family
of possible grids. The combination is performed according to the Smolyak rule, formally
defined as (Gerstner et al. 1998):

$$\mathcal{A}(w, d) = \sum_{w-d+1\ \le\ |\mathbf{l}|\ \le\ w} (-1)^{\,w - |\mathbf{l}|} \binom{d-1}{w - |\mathbf{l}|} \left( Q_{l_1} \otimes \cdots \otimes Q_{l_d} \right) \quad (5.52)$$

where:
$w$ is the sparse grid level;
$\mathbf{l} = (l_1, \ldots, l_d)$ is a vector of unidimensional quadrature rule levels in each dimension $j = 1 \ldots d$;
$|\mathbf{l}| = l_1 + \cdots + l_d$ is referred to as the product level;
with the binomial coefficient operator given by:

$$\binom{d-1}{w - |\mathbf{l}|} = \frac{(d-1)!}{(w - |\mathbf{l}|)!\ (d-1-w+|\mathbf{l}|)!}$$

The Smolyak sparse grid construction rule (equation (5.52)) effectively combines multiple
unidimensional quadrature rules into a single quadrature rule. The resultant integration
points are the set of integration points corresponding to the unidimensional rules. The
resultant weights are the unidimensional rule weights multiplied by a coefficient. The
condition on the summation in (5.52), referred to as the selection criterion, means that the
summation is only applied to the terms for which the $|\mathbf{l}|$ values conform to the inequality.
Effectively, only those product rules ($Q_{l_1} \otimes \cdots \otimes Q_{l_d}$, as defined in equation (5.49)) are
combined whose product levels lie between $w - d + 1$ and $w$. The $(-1)^{\,w-|\mathbf{l}|}\binom{d-1}{w-|\mathbf{l}|}$ terms which are
multiplied with the tensor products in equation (5.49) are referred to as the combining
coefficients, as they determine how the tensor product rules are combined together when
forming the sparse grid. The Smolyak combination rule allows each dimension to be treated
independently, without requiring that the unidimensional quadrature rules ($Q_{l_j}$) have the
same domain or weight function.
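Under the level convention used in (5.52) above (itself a reconstruction of the standard
Smolyak form), the selected product rules and their combining coefficients can be
enumerated directly:

```python
# Sketch: enumerate the product rules combined by the Smolyak rule (5.52)
# for sparse grid level w and dimension d, with their combining coefficients.
# Assumes the level convention reconstructed in (5.52).
from math import comb
from itertools import product

def smolyak_terms(w, d):
    terms = []
    for l in product(range(w + 1), repeat=d):      # candidate level vectors
        if w - d + 1 <= sum(l) <= w:               # selection criterion
            coeff = (-1) ** (w - sum(l)) * comb(d - 1, w - sum(l))
            terms.append((l, coeff))
    return terms

for l, c in smolyak_terms(2, 2):
    print(l, c)   # (0,1),(1,0) with -1; (0,2),(1,1),(2,0) with +1
```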
The number of points in the sparse grid depends on the sparse grid level $w$, the
unidimensional quadrature rule levels $l_j$, the quadrature rule used, and its associated
growth rule $n = g(l)$. The growth rule relates the unidimensional quadrature rule level $l$ to the
number of sampling points $n$ (often referred to as the order) in the quadrature rule. The
type of quadrature rule used typically depends on the distribution type of the stochastic
variable under consideration. Different distribution types have preferred quadrature rules
(e.g. Gauss-Hermite quadrature is an effective rule for functions with normally distributed
variables). Depending on the quadrature rule used, a number of associated growth rules can
be employed which offer a different balance between the number of points (order) and the
precision (the highest degree polynomial which can be integrated exactly with a given number
of sampling points). The rules are classified according to:
How the number of points increases with the level.
Whether the sampling points include the interval bounds ($a$, $b$ in equation (5.47)) of
the underlying unidimensional quadrature rules which make up the grid. The terms open
and closed designate the exclusion and inclusion, respectively, of points at the bounds.
Degree of nesting (the number of sampling points from lower level grids which are re-
used at higher levels to reduce the total number of unique sampling points).
A common, open, non-linear growth rule with weak nesting (where only the centre point of
the quadrature rule is reused in higher levels) for Gauss-Hermite quadrature is
$n = 2^{l+1} - 1$. Further details concerning growth rules are provided in specialised literature
(Burkardt 2010).
An example of a level 2 sparse grid in 2 dimensions, based on Gauss-Hermite quadrature
with a growth rule of $n = 2^{l+1} - 1$, is shown in Figure 5.3. The construction of the sparse
grid is based on the Smolyak rule, which for $w = 2$, $d = 2$ gives:

$$\mathcal{A}(2, 2) = \sum_{1\ \le\ |\mathbf{l}|\ \le\ 2} (-1)^{\,2 - |\mathbf{l}|} \binom{1}{2 - |\mathbf{l}|} \left( Q_{l_1} \otimes Q_{l_2} \right) \quad (5.53)$$

The number of required integration points is reduced from 49 for the full product grid (as
given by equation (5.51)) to 17 for the sparse grid, while offering the same precision.

Figure 5.3 - Multidimensional full product and sparse grid Gauss-Hermite quadrature with level $w = 2$ for $d = 2$
dimensions and $n = 2^{l+1} - 1$ growth rule.
The size difference between sparse and complete product grids becomes greater with
increased dimensionality. Determining the number of points in a sparse grid can be difficult
as the associated procedure varies depending on the growth rule, and its level of nesting.
Specialised literature on the topic provides further details (Burkardt 2010). To demonstrate
one example, however: the number of points on an isotropic sparse grid based on Gauss-
Hermite quadrature with an $n = 2^{l+1} - 1$ growth rule is given by:

$$N_{SG}(w, d) = \sum_{j=0}^{w} \binom{d-1+j}{j}\, 2^{j} \quad (5.54)$$

Table 5.4 demonstrates the significant difference in size between sparse and full product
grids for Gauss-Hermite quadrature for higher dimensions and grid levels.
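Expression (5.54) is easily checked against the sparse grid columns of Table 5.4:

```python
# Check of the point-count expression (5.54) against the sparse grid columns
# of Table 5.4 (growth rule n = 2^(l+1) - 1, weak nesting).
from math import comb

def sparse_grid_points(w, d):
    return sum(comb(d - 1 + j, j) * 2**j for j in range(w + 1))

print(sparse_grid_points(2, 2))    # 17, as in the d = 2, level 2 example
print(sparse_grid_points(3, 10))   # 2001, matching Table 5.4
```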
Table 5.4 - Number of points required for isotropic sparse grids and full product grids based on Gauss-Hermite
quadrature rules with growth rule n = 2^(l+1) - 1, for multiple dimensions and grid levels. Precision indicates the
maximum polynomial degree which can be exactly integrated by the associated quadrature.

                          Sparse grid (d)                              Complete grid (d)
Level  Precision   1    2    3    4     5      10      20     1    2      3       4      5       10      20
0      1           1    1    1    1     1      1       1      1    1      1       1      1       1       1
1      5           3    5    7    9     11     21      41     3    9      27      81     243     5.9E4   3.5E9
2      13          7    17   31   49    71     241     881    7    49     343     2,401  16,807  2.8E8   8.0E16
3      29          15   49   111  209   351    2,001   13,201 15   225    3,375   5.1E4  7.6E5   5.8E11  3.3E23
4      61          31   129  351  769   1,471  13,441  1.5E5  31   961    2.98E4  9.2E5  2.9E7   8.2E14  6.7E29
The number of points on the sparse grid may be reduced by using slower, linear growth
rules such as $n = 2l + 1$; however, this reduces the precision for a given level. For instance,
the precision for the non-linear growth rule $n = 2^{l+1} - 1$ is $2^{l+2} - 3$, whereas the linear
growth rule $n = 2l + 1$ has a precision of $4l + 1$. Significant reductions in grid size may
also be achieved by utilizing anisotropy of the system model under analysis (Section 5.7.5.5).
To achieve good accuracy, synchronisation is necessary between the PCE expansion order
(Section 5.7.2) and the quadrature rule used in the sparse grid based spectral projection
approach for determining the PCE coefficients (Section 5.7.5.4). It has been advised that the
PCE expansion order should be equal to at least half of the quadrature rule precision
(rounded down to an integer) (Eldred et al. 2008; Crestaux 2009). For example, the precision
of Gauss quadrature rules is $2n - 1$ (Section 5.7.5.3), which is equivalent to $2\,g(l) - 1$,
where $g(l)$ is the quadrature growth rule. For a rule level $l = 1$, with a growth rule of
$n = 2^{l+1} - 1$, the precision is 5. Subsequently, the corresponding PCE expansion order
is $k = 2$.
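This synchronisation rule reduces to a one-line calculation:

```python
# Helper reflecting the synchronisation rule: PCE order k is set to half the
# quadrature precision (rounded down), for growth rule n = 2^(l+1) - 1.
def pce_order_for_level(l):
    n = 2 ** (l + 1) - 1        # points in the unidimensional Gauss rule
    precision = 2 * n - 1       # Gauss precision, Section 5.7.5.3
    return precision // 2       # recommended expansion order k

print(pce_order_for_level(1))   # 2, as in the worked example above
print(pce_order_for_level(2))   # 6
```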
The error in the conventional sparse grid implementation is of the order
$O\!\left( N^{-r}\, (\log N)^{(d-1)(r+1)} \right)$, where $r$ is an indicator of the smoothness of the integrand. It can
be seen that the convergence rate depends only weakly on the dimensionality $d$, but
strongly on the smoothness $r$. As such, sparse grids require a smooth integrand for
accurate results (for smooth integrands where $r \to \infty$, exponential convergence is possible);
otherwise they can be liable to integration errors (Bungartz et al. 2004).
5.7.5.5 Anisotropic sparse grids and adaptive PCE
The behaviour of many mechanical systems is often more sensitive to certain parameters
than others. Such system anisotropy can be utilized in a sparse grid formulation by using a
different unidimensional quadrature rule level $l_j$ for different grid dimensions $j = 1 \ldots d$.
Smaller rule levels for less sensitive dimensions require a reduced number of sampling
points, effectively reducing the total number of points used in the sparse grid (and
subsequently reducing the number of system model evaluations). The Smolyak sparse grid
construction rule (5.52) can facilitate the use of anisotropic sparse grids through
modifications to the selection criterion or the combining coefficient (defined in Section
5.7.5.4) (Burkardt 2010). If the nature of model anisotropy is not known, adaptive methods
can be used which progressively increase the rule level in selected dimensions, based on the
associated contribution to the system outputs (Jakeman et al. 2011). Alternatively, the
problem may be considered isotropic, in which case the adaptive approach may
progressively increase the general sparse grid level and monitor the convergence of the
moments of the system output.
Adaptive methods are based on sensitivity analysis (Weirs et al.), a posteriori error
estimates (Liu et al. 2011), decay rate of the polynomial coefficients (Foo et al. 2008), or are
constructed based on the generalized dimension‐adaptive sparse grid approach (Gerstner et
al. 2003). The generalised sparse grid method is especially effective at determining the
dimensions and interactions that contribute significantly to the system output variability,
and is considered the superior approach. However, additional techniques have recently
been proposed which offer further performance improvements (Jakeman et al. 2011).
Anisotropic sparse grid quadrature is an extensive topic, further consideration of which is
not feasible within the scope of this work. For a more detailed examination, interested
readers are referred to specialised literature (Gerstner et al. 1998; Bungartz et al. 2004;
Burkardt 2010; Jakeman et al. 2010; Jakeman et al. 2011).
5.7.6 Recommendations for calculating PCE coefficients
As demonstrated in the above sections, the applicability of each PCE coefficient
determination method depends on the dimensionality of the problem, as well as the order
of the PCE expansion; both of which are dictated by the UQ simulation requirements.
In general, it is recommended that a sparse grid PCE UQ approach be adopted in tolerance
analysis. This is due to its superior efficiency (the least number of required model evaluations)
in the presence of even a relatively low number of parameters (as evident from Table 5.2 and
Table 5.4).
It is important to note, however, that the number of points (corresponding to model
evaluations) required in sparse and complete product grid approaches strongly depends on
the selected growth rule (Section 5.7.5.4). In this work an open, non-linear Gauss-Hermite
quadrature growth rule with weak nesting of $n = 2^{l+1} - 1$ was used. Although this growth
rule is highly efficient for low polynomial expansion order problems ($k \le 3$), other growth
rules with higher degrees of nesting may be more efficient for higher expansion orders. In
high order problems, Point Collocation or complete product grid quadrature may be more
efficient for low dimensionality problems ($d \le 5$). Such low dimensionality is, however, not
of high interest in practical tolerance analysis. For such cases, approximate
recommendations for the choice of PCE coefficient determination method are presented in
(Eldred et al. 2008).
5.7.7 PCE error estimates
PCE possesses exponential convergence guarantees for estimates of the mean and
variance, provided that the basis functions are matched to the stochastic variable
distributions (Section 5.7.4) and the system response function is smooth (Xiu et al. 2003).
Since PCE is based on the representation of a system response function with an infinite
polynomial expansion that is truncated in practice, the truncation can lead to an
approximation error. The error in the PCE method can be reduced in practice by increasing
the order $k$ of the polynomial expansion. However, it is important to note that selecting a
PCE expansion order that is excessively high may lead to over-fitting of the expansion
(Congedo et al. 2011).
Furthermore, when estimating the PCE coefficients using the spectral projection approach
based on sparse grid quadrature (Section 5.7.5.4), accurate estimation of the moments
depends on both the truncation error as well as the quadrature error associated with the
sparse grid method. However, if the level of sparse grid quadrature is sufficiently high, the
associated quadrature error can be negligible compared to the truncation error. To achieve
a comparatively low quadrature error the PCE expansion order should be equal to half of
the quadrature rule precision, (rounded down to an integer) (Section 5.7.5.4) (Eldred et al.
2008; Crestaux 2009); this recommendation is applied in this work.
A robust indication of error in PCE moment estimates can be obtained through comparison
with a reference MC sampling based moment estimate. Although MC sampling requires a
comparatively large number of system evaluations, MC based moment convergence is
independent of the problem dimensionality, smoothness of the response function, and type
of probability distribution (Section 5.6.1). As such, an MC moment estimate is a robust error
indicator, despite a potentially high computational cost. Increasing the efficiency of the
quantification of the error associated with PCE methods is an active area of research
(Debusschere et al. 2005; Congedo et al. 2011; Archibald et al. 2012). However, as the
associated efforts have not reached a high level of maturity, an MC reference based
approach is applied in this research.
5.8 Case study 5.1
5.8.1 Problem definition
Case study 5.1 is an automotive tolerance synthesis problem in which product functionality
is characterized by the compliance of part geometry due to internal equilibrating forces.
Automotive passenger seats are required to accommodate anthropometric variation of
users while meeting safety standards under crash scenarios (Leary et al. 2011). Fore and aft
adjustment of seat position is achieved with a rolling rail assembly consisting of interlocked
rail sections separated by a series of rolling elements (Figure 5.4). The rail sections are
preloaded elastically by an interference fit upon assembly. Manufacturing variation in the
geometric parameters of the rail section affects the magnitude of the rolling element
contact force and consequently the rolling effort of the rail assembly. It is required that the
contact force be sufficiently high to avoid chatter in the rail assembly, while being
sufficiently low to allow the rail to move without excessive effort.
Due to these conflicting requirements, contact force is a KPC of significant importance.
Minimising the effect of manufacturing variation on elastic rail preload is required to
achieve competitive quality, cost and development time objectives. This case study presents
a problem of tolerance synthesis in a rail assembly with the objectives of maximising
assembly yield (number of rail assemblies which comply with rolling resistance requirement)
and minimising the cost of tolerances. The rail assembly selected for the analysis is Rail A
(Figure 3.10 (i)) as previously considered in the benchmarking analysis presented in Section
3.4. The benchmarking study compared the sensitivity to manufacturing variation of various
rail section profile designs. Rail assembly A was found to offer high sensitivity to variation,
attributable to a large number of section folds and close proximity of rolling elements
(Section 3.4.8). The selected rail assembly had the highest number of design parameters
among the design alternatives (22 parameters associated with bend angles and radii), and
was one of the most challenging for modelling correct rolling element location within the
assembly (described further in 5.8.3). Subsequently it presented the most demanding case
study both for the PCE UQ method (which is dependent on dimensionality - Section 5.7) and
for the required tolerance and CAE modelling. As such, the outcomes of this case study
provide worst-case estimates of:
The performance expected of the simulation capabilities of the tolerance synthesis platform
developed in this work (for this application); and,
The manufacturing tolerances which would be required to achieve the desired yield in the
most variation sensitive rail assembly design. This was a point of high interest to the
industry partner.
Since other rail assembly designs considered in Section 3.4 are less demanding in both of
the above aspects, a successful outcome of this case study is expected to ensure similar
success if the other rail assembly designs were also subject to tolerance synthesis with the
developed platform.
Figure 5.4 - (i) Automotive seat and rail assembly (black); (ii) seat rail assembly section view including die-press
folding sequence for upper and lower rails.
Variation in rolling resistance can be estimated from the rolling element contact force in the
rail section assembly, where the resistance of the rolling element depends on the product of
the rolling element contact force and the associated coefficient of rolling resistance
(Williams 1994). Each rail assembly has a total of 14 spherical rolling elements, and the
coefficient of rolling friction for a physical prototype rail assembly has been experimentally
determined to be 0.023.
The following attributes define the design requirements and manufacturing process of the
rail assembly:
A nominal rolling effort force of 35 N for a complete seat rail assembly has been
identified as desirable, with the specification limits set at 35 ± 8 N (corresponding to
$C_{pm}$ = 1, i.e. 99.7% assembly yield). This rolling effort is for a complete lower seat frame
assembly consisting of two seat rail assemblies, with each rail set providing half of the
total resistive force.
The rail sections are manufactured from steel sheet that is progressively folded in die‐
presses into the desired profile. The required folding sequence for upper and lower rails
is indicated in Figure 5.4 (ii). Each folding stage is associated with specific variation in the
nominal geometry, in particular the bend angle and bend radii. These variations are
quantified in Section 5.8.2. Variation in the sheet thickness is considered negligible in
the scope of this work.
Reduced variation in each fold is possible through stricter control of allowable tool wear,
but incurs a cost penalty.
Controlling variation in the folds is associated with varying level of difficulty on account
of:
o the need to use less precise multi‐part dies (which are also fundamentally more
expensive than single‐part dies)
o a reduction in freedom to orient the workpiece within the die
o the need to carry out free‐end type folds (a fold which is not fully enclosed within a
die) which become difficult to control due to spring back effects.
Consequently, the cost-tolerance relationship associated with a fold process is
dependent on the difficulty level as well as the allowable tolerance. The folding stages of
Figure 5.4 (ii) have been classified in Table 5.6 according to the level of difficulty in
controlling the associated variation.
Two alternative die‐pressing processes are available, a standard process suitable for
standard precision tolerances and a high precision process. The high precision process
can offer a lower tolerance cost than the standard process for high precision tolerances.
However, due to increased set‐up times, slower operation and higher capital cost, the
high precision process imposes a higher cost for standard and low precision tolerance
values. There is a limited region of overlap where the two processes offer a similar cost
tolerance characteristic.
Cost‐tolerance curves which capture the precision attributes of the die‐pressing
processes for the rail section bend angle and bend radii parameters are presented in
Figure 5.7 and Figure 5.8. The curves were developed in consultation with an industry
partner and are based on empirical experience in the analysis of the die‐pressing
processes used to manufacture similar rail sections. The curves indicate the cost penalty
in terms of the standard deviation of the associated part parameter. Each bend may be
carried out on either the standard or high‐precision process which is determined by the
more economical choice for a required tolerance level.
The case study objective is to specify optimal process capabilities for the rail bend angle and
radii, such that the following objectives and constraints are addressed (Table 5.5).
Table 5.5 - Case study 5.1 objectives and constraints.

Objective: Maximize $C_{pm}$ of assembly KPC
Description: Maximize the number of assemblies conforming to the rolling effort (KPC) specification requirements
Constraint: The minimum required assembly yield is 99.7% ($C_{pm}$ = 1)

Objective: Minimize total tolerance cost
Description: Minimise the total cost of the required part tolerances based on cost-tolerance curves (Figure 5.7 and Figure 5.8)
Constraint: Maximum allowable tolerance cost is 8000 cost units (the cost constraint was set high as it was not certain what tolerance cost would be required to achieve the required yield)
A statistical tolerance analysis simulation was conducted on a numerical model of the
assembled rails with bearings under preload, intrinsically capturing the associated assembly
response function and quantifying rolling effort variation.
5.8.2 Variation in rail geometry and tolerance costs
To obtain an indication of the variation in production rails, metrological measurement was
conducted on rails currently produced by an industry partner with standard precision
folding processes (Figure 5.5). The metrological measurements were conducted to provide
insight into the expected yield for the case study rail assembly (in Figure 5.4 (ii)) if it were to
be manufactured using the same manufacturing processes as the measured rail (Figure 5.5).
Furthermore, the manufacturer of the measured rails has found that the variation in rolling
effort in production assemblies is unsatisfactorily high. By applying the tolerance synthesis
platform developed in this work to future production designs, the required tolerances and
associated cost which achieve an acceptable rolling effort target can be identified.
A rigid mounting fixture was designed, constructed and placed within a Coordinate
Measurement Machine (CMM) for metrological assessment (Figure 5.5 (i) and (ii)). A group
of 24 upper and 24 lower rails (sampled from different production batches to accommodate
batch‐to‐batch variation) was measured at a number of perimeter points at 18
longitudinally spaced cross-sectional positions (Figure 5.5 (iii)) (resulting in 432 profile
section measurements for each of the upper and lower rails). The measurement points were used
to determine the bend angle and radii for a number of bends within each cross‐section.
Figure 5.5 (iii) and (iv) show an end view of sample upper and lower rails with measured
angles and radii (measured radii shown as superimposed fitted circles).
The resultant overall standard deviation for the measured folds is shown in Table 5.6 along
with classification of the level of difficulty in controlling associated variation. The
measurements were conducted until the influence of additional samples on the overall
standard deviation was less than 2% (Figure 5.6). The resultant variation was combined into
averaged values of standard deviation in bend angle and bend radii associated with both
low and high difficulty folds (Table 5.7).
The combined standard deviation values were subsequently used to calibrate the cost-
tolerance curves for the folding processes (Figure 5.7 and Figure 5.8). The measured rail
sections were manufactured using standard precision folding processes involving both low
and high difficulty folds. The cost associated with achieving standard precision was
considered approximately equal for all folding stages due to the similarity of tooling, set‐up
and quality control specifications. The cost was set at the median of the standard process
cost unit scale on advice from the industry partner. This reflects the position of the current
process precision within the cost-tolerance relationship. However, the precision associated
with the equal cost (measured in standard deviation) varied between folding stages, as
different fold characteristics result in varying levels of difficulty in controlling variation.
Table 5.6 - Standard deviation in measured rail folds, with classification of the level of difficulty in controlling
associated variation for both the case study rail (Figure 5.4 (ii)) and the measured rail (Figure 5.5 (iii)).

                 Case study rail            Measured rail
                 Fold difficulty            Lower rail                                 Upper rail
Folding stage    Lower      Upper           Difficulty  Angle [deg.]  Radii [mm]       Difficulty  Angle [deg.]  Radii [mm]
1                Low        Low             Low         0.191         -                Low         0.202         0.038
2                Low        Low             Low         -             0.024            High        0.530         -
3                High       High            Low         -             0.078            High        -             0.221
4                High       High            High        -             0.118            High        -             0.205
5                High       -               High        -             -                -           -             -

Table 5.7 - Combined averaged standard deviation associated with low and high difficulty folds for the measured rail.

Folding difficulty   Bend angle standard deviation [deg.]   Bend radii standard deviation [mm]
Low                  0.197                                  0.047
High                 0.530                                  0.181
Figure 5.5 - Measured rail assembly: (i) CMM mounting jig and sample rails under measurement (showing CMM
probe, calibration sphere, upper and lower rail samples, rail mounting posts, rail jig base plate and CMM
support base); (ii) general jig dimensions; (iii) section measurement locations and section view including
folding sequence for upper and lower rails; (iv) sample upper rail variation; (v) sample lower rail variation.
Figure 5.6 - Influence of additional samples on the change in overall standard deviation for a total of 24
measured rail sets.
Figure 5.7 ‐ Cost‐tolerance curves for rail bend angles for varying levels of variation control difficulty. The
process curves are plotted only within the feasible limits of the associated process.
Figure 5.8 - Cost-tolerance curves for rail bend radii for varying levels of variation control difficulty. The
process curves are plotted only within the feasible limits of the associated processes.
5.8.3 Simulation models
The effect of variation in rail geometry on rolling effort has been estimated by the
integration of two independent numerical models:
CAD model defining rolling element position, and
Finite Element (FE) model estimating bearing contact force due to internal loading.
Due to variation in rail geometry, the rolling element size and position which correctly fits
within the spacing between assembled rails will vary. Identifying the size and position of the
rolling elements required the construction of a comprehensive parametric CAD model,
developed using the CATIA software. Due to limitations associated with parametric
constraint modelling, the CAD model identified two possible solutions, only one of which is
feasible (Figure 5.9 (ii)). The feasible solution was identified by post‐processing the CAD
output, prior to exporting the correct rolling element position and diameter values to a
second, parametric FE model of the rail assembly. ABAQUS FE modelling software was used
to simulate the rail deflection due to an interference fit with the rolling elements (Figure 5.9
(iii)). The model was constructed to consider one‐half of the symmetric rail profile. The
nominally fitting ball for the rail assembly was oversized by 0.1mm in diameter to be
representative of the assembled interference fit. The resultant contact force was integrated
over the contact surfaces. The average individual simulation time associated with the
integrated CAD and FE models was approximately 80 seconds on a quad core 3.2 GHz CPU.
FE model details:
Element type: linear quadrilateral plane stress elements (CPS4I)
Number of elements: 12,500 (average)
Contact constraints: surface-to-surface contact with limited sliding
Boundary constraints: Constraint 1: fixed (X rotation = 0, Y rotation = 0, Z displacement = 0);
Constraint 2: vertical translation only

Figure 5.9 - (i) Rail section parameters (alphanumeric labels designate stochastic variables - see Table 5.8);
(ii) CATIA model for determining ideal ball size from possible scenarios;
(iii) FE model details showing deformation due to interference fit of rolling elements.
All dimensions in mm.
Two PIDO tools were interfaced with the CAD and FE rail models according to a newly
developed PIDO tolerance analysis platform (Figure 5.10) (Mazur et al. 2011). UQ was
conducted using DAKOTA (Adams 2011) and optimization was carried out using ESTECO
modeFRONTIER (Figure 5.10). All part parameter distributions were Gaussian. Tolerance
synthesis was carried out according to the platform presented in Section 5.4 and consisted
of the following stages:
1. Trial standard deviations (i.e. tolerances) for bend angles and bend radii were selected.
Initial selection was based on uniformly distributed samples within the design space;
subsequent selection was determined by the optimization algorithm defined in Section
5.8.5.
2. The tolerance cost for each bend angle and bend radii pair was calculated for both the
standard and high‐precision processes. The more economical process was then selected
and the total assembly tolerance cost was calculated. If the cost was infeasible
(violating maximum cost constraints), a new trial set of tolerances was selected.
3. Trial tolerances were passed to the parametric CAD model to identify the size and
position of the correctly fitting rolling elements.
4. Trial standard deviation and correct rolling element size and positions were passed to
the UQ tool, which initialised a sparse grid based PCE simulation. For each UQ sampling
point, the simulation consisted of:
a. The nominally fitting rolling elements were oversized by 0.1mm in diameter which
is representative of the interference fit within the rail assembly. A contact
simulation was subsequently initialised in which the interference force deformed
the rail.
b. Resultant peak contact forces were integrated over the contact area.
c. Complete rail assembly rolling effort was estimated from the contact forces.
5. Moment estimates for the rolling effort KPC were returned from the UQ tool and $C_{pm}$
was calculated.
6. Objective function fitness was assessed by the MOGA optimization algorithm and new
trial standard deviations were generated (return to step 1).
The optimization was terminated at the iteration limit of 45 designs (Section 5.8.5).
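For orientation, the following schematic but runnable toy mirrors the six-step loop above.
Every model in it (cost curve, response, UQ and optimizer) is a deliberately simple stand-in,
not the platform's actual CAD, FE, DAKOTA or modeFRONTIER components; all numerical
values besides the case study targets (35 ± 8 N, 8000 cost units, 45 UQ evaluations) are
illustrative.

```python
# Schematic toy of the tolerance synthesis loop: random search stands in for
# the MOGA optimizer and a linear response stands in for the CAD/FE chain.
import numpy as np

MAX_COST = 8000.0
rng = np.random.default_rng(3)

def tolerance_cost(sigmas):                  # stand-in cost-tolerance curve
    return float(np.sum(10.0 / sigmas))

def rolling_effort(xi, sigmas):              # stand-in for the CAD + FE chain
    return 35.0 + float(np.sum(sigmas * xi)) # nominal 35 N plus variation

def uq_moments(sigmas, n=45):                # stand-in for SG level 1 PCE UQ
    xi = rng.standard_normal((n, sigmas.size))
    f = np.array([rolling_effort(x, sigmas) for x in xi])
    return f.mean(), f.std()

def cpm(mu, sigma, target=35.0, half_width=8.0):
    # Standard Cpm definition: (USL - LSL) / (6 * sqrt(sigma^2 + (mu - T)^2))
    return half_width / (3.0 * np.sqrt(sigma**2 + (mu - target) ** 2))

best = None
for _ in range(150):                         # step 6: optimizer proposes trials
    sigmas = rng.uniform(0.02, 0.5, size=22) # step 1: trial standard deviations
    cost = tolerance_cost(sigmas)            # step 2: cost feasibility check
    if cost > MAX_COST:
        continue
    mu, sd = uq_moments(sigmas)              # steps 3-5: UQ of the KPC
    if cpm(mu, sd) >= 1.0 and (best is None or cost < best[0]):
        best = (cost, cpm(mu, sd))
print(best)
```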
Figure 5.10 ‐ PIDO tolerance synthesis workflow for Case study 5.1.
5.8.4 UQ strategy
Sparse Grid based PCE was incorporated into the PIDO interface for UQ in the rail assembly
model. The SG level was selected after comparing PCE moment estimates of rolling effort
for the initial design (Section 5.8.7 and Table 5.8) for progressively larger SG levels. An
isotropic SG level of 0 was too low to provide reliable moment estimates (corresponds to a
single model evaluation). Levels 1 and 2 required 45 and 1101 model evaluations,
respectively. The differences between the level 1 and level 2 estimates of mean and
standard deviation were approximately 0.5% and 2%, respectively.
$$\Delta\mu\% = \left| \frac{\mu_1 - \mu_2}{\mu_2} \right| \times 100 \quad (5.55)$$

$$\Delta\sigma\% = \left| \frac{\sigma_1 - \sigma_2}{\sigma_2} \right| \times 100 \quad (5.56)$$

where the subscripts designate the two different moment estimates under comparison; the
estimate based on the larger number of model evaluations is taken as estimate 2.
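The comparisons in this section use the relative differences defined in (5.55) and (5.56),
which reduce to a one-line helper:

```python
# Relative difference convention of (5.55) and (5.56); values are illustrative.
def rel_diff_pct(est1, est2):
    """Percent difference, with estimate 2 from the larger evaluation count."""
    return abs(est1 - est2) / abs(est2) * 100.0

print(rel_diff_pct(34.8, 35.0))   # ~0.57 %
```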
An anisotropic sparse grid based on the generalized adaptive approach required 85 model
evaluations and showed differences of approximately $\Delta\mu\% = 0.8$ and $\Delta\sigma\% = 2.3$ relative to
the isotropic level 1 grid estimates. The PCE based moment estimates were also compared
against an MC estimate of 3000 samples. The differences between the SG level 1 and MC
estimates of mean and standard deviation are approximately $\Delta\mu\% = 3$ and $\Delta\sigma\% = 5$,
respectively. Based on the small differences between SG based PCE moment estimates at
different grid and adaptivity levels, as well as the small difference between the MC and SG
level 1 estimates, an isotropic SG level of 1 was selected for use in tolerance synthesis in
order to reduce the number of model evaluations.
5.8.5 Optimization strategy
5.8.6 Assumptions
The conducted analysis was subject to a number of assumptions in order to allow for
reasonable scope of analysis within a limited analysis time budget:
The plane stress FE model only considers the two‐dimensional cross‐section of the rail at
the rolling element contact location. This simplification results in the estimated
magnitude of contact force being based on deformation of the full rail length, rather
than point contact. However in reality, the contact scenario is that of a sphere and a
surface. The approximation was used to limit the FE model simulation time to a
practically manageable size, due to the significant complexity of the more realistic
scenario.
Variation in rail sections was assumed to be equal on either side of the axis of
symmetry.
No variation in linear dimensions or material thickness was considered as the industry
partners advised that these effects were negligible in comparison with those considered
in this work.
The emphasis of this work is on demonstrating the developed PIDO based tolerance
synthesis platform, which utilises existing modelling tools. If greater realism is desired,
the assumptions may be overcome by more sophisticated models (with greater
computational cost) without invalidating the demonstrated applicability of the presented
methodology for tolerance synthesis scenarios.
5.8.7 Simulation results and outcomes
Simulation results are shown in Figure 5.11 and Table 5.8. The standard deviations of the
bend angle and bend radii for the initial case study designs were set to values corresponding
to the measured rails (Table 5.7). The yield and tolerance cost, which would be achieved if
the case study rail assembly were to be manufactured with the same process standards as
the measured rails, were subsequently estimated. The resultant yield of Cpm = 0.12 was
unsatisfactorily low. The tolerance synthesis platform was able to identify Pareto optimal designs with higher yield (Figure 5.11), however at a significant penalty to the total tolerance cost. This cost penalty indicates that the manufacturing precision of the measured rail assembly is not sufficient to meet the desired yield targets of the case study rail. The rolling effort of the case study rail assembly is particularly sensitive to variation in the rail profile parameters identified with a comparatively high Cpm in Table 5.8 (i.e. approximately greater than 4). The associated high tolerance cost could be reduced by: folding process changes which decrease the tolerance cost of high‐Cpm profile parameters; or re‐design of the rail profiles to achieve rolling effort yield requirements without requiring high‐Cpm rail parameters.
From the identified Pareto optimal designs (Figure 5.11) the lowest cost design (Design #114) was selected as the preferred candidate. The selected design offers the lowest tolerance cost while exceeding the yield requirement of Cpm = 1 by 11% (achieved Cpm = 1.11). The higher yield allows for conservatism in the estimated results.
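The yield values quoted here are consistent with the Taguchi process capability index computed from the KPC moments and specification limits; a minimal sketch, assuming the standard Cpm definition with the target taken at the nominal value:

```python
import math

def cpm(mu, sigma, lsl, usl, target):
    """Taguchi capability index: penalises spread and off-target mean."""
    return (usl - lsl) / (6.0 * math.sqrt(sigma**2 + (mu - target)**2))

# Rolling-effort KPC, specification 35 +/- 8 N (Table 5.8):
print(cpm(32.42, 22.19, 27, 43, 35))  # ~0.12  (initial design, ID #1)
print(cpm(34.36, 2.31, 27, 43, 35))   # ~1.11  (optimised design, ID #114)
```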
The total number of designs evaluated was 150 (Section 5.8.5). Each evaluation of the rail
assembly CAD and FE models required approximately 60 seconds on a quad core 3.2 GHz
CPU (Section 5.8.3). The bulk of the analysis time is attributed to the FE solver with process
integration overheads amounting to an additional time of approximately 10 seconds per
design. The total number of model evaluations was 6750 (i.e. 150 optimization runs, each
with a UQ analysis of 45 model evaluations) resulting in a total simulation time of
approximately 5.5 days. With additional time resources the selected design could be
subjected to a local refinement to explore the objectives space within the vicinity of the
selected optimum design with greater resolution.
If UQ were conducted with traditional sampling‐based methods, such as MC with a relatively small sample size of 1000, the total simulation time would be approximately 121 days. This alternative is prohibitively impractical, whereas the sparse grid based PCE method provides a feasible alternative by reducing the computation time by a factor of 22.
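The quoted run times follow from simple arithmetic, assuming the roughly 10 s process‐integration overhead applies per model evaluation (which reproduces the reported totals):

```python
SECONDS_PER_EVAL = 70      # ~60 s CAD/FE solve + ~10 s integration overhead
DESIGNS = 150              # optimization iteration limit

sg_evals = DESIGNS * 45    # sparse grid level 1: 45 evaluations per design
mc_evals = DESIGNS * 1000  # MC with 1000 samples per design

print(sg_evals * SECONDS_PER_EVAL / 86400)  # ~5.5 days
print(mc_evals * SECONDS_PER_EVAL / 86400)  # ~121 days
print(mc_evals // sg_evals)                 # speed-up factor ~22
```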
The final design (Design ID #114) was validated against a MC reference estimate of 3000 samples. The differences between the PCE and MC estimates of mean and standard deviation were approximately εµ = 3% and εσ = 4%, respectively. These differences are considered negligible.
Figure 5.11 ‐ Objectives space of tolerance synthesis for Case study 5.1.
Table 5.8 ‐ Rail assembly parameters and associated variation for initial design and selected optimum.
Columns: folding difficulty (L = low, H = high); nominal; specification limits ±; min.; max.; then, for the initial design (ID #1) and the optimised design (ID #114): µ, σ, Cpm, process (S = standard, H = high‐precision) and tolerance cost [cost units].

Component | Parameter | Diff. | Nominal | Spec. ± | Min. | Max. | Initial µ | σ | Cpm | Proc. | Cost | Optimised µ | σ | Cpm | Proc. | Cost
Upper rail | a34 [deg] | L | 90 | 1 | 89 | 91 | 90 | 0.197 | 1.69 | S | 50.6 | 90 | 0.023 | 14.49 | H | 121.9
Upper rail | a37 [deg] | L | 90 | 1 | 89 | 91 | 90 | 0.197 | 1.69 | S | 50.6 | 90 | 0.026 | 12.82 | H | 119.0
Upper rail | a44 [deg] | H | 72.9 | 1 | 71.9 | 74 | 73 | 0.530 | 0.63 | S | 49.8 | 73 | 0.136 | 2.45 | H | 241.2
Upper rail | a57 [deg] | H | 79.2 | 1 | 78.2 | 80 | 79 | 0.530 | 0.63 | S | 49.8 | 79 | 0.069 | 4.83 | H | 395.4
Upper rail | a61 [deg] | H | 84.7 | 1 | 83.7 | 86 | 85 | 0.530 | 0.63 | S | 49.8 | 85 | 0.093 | 3.58 | H | 329.1
Upper rail | r1 [mm] | L | 2.8 | 0.1 | 2.7 | 2.9 | 2.8 | 0.047 | 0.71 | S | 51.8 | 2.8 | 0.034 | 0.98 | H | 62.1
Upper rail | r2 [mm] | L | 2.8 | 0.1 | 2.7 | 2.9 | 2.8 | 0.047 | 0.71 | S | 51.8 | 2.8 | 0.043 | 0.78 | H | 57.6
Upper rail | r3 [mm] | H | 2.8 | 0.1 | 2.7 | 2.9 | 2.8 | 0.181 | 0.18 | S | 49.8 | 2.8 | 0.031 | 1.08 | H | 622.9
Upper rail | r65 [mm] | H | 4.7 | 0.1 | 4.6 | 4.8 | 4.7 | 0.181 | 0.18 | S | 49.8 | 4.7 | 0.025 | 1.33 | H | 439.5
Upper rail | r195 [mm] | H | 1.5 | 0.1 | 1.4 | 1.6 | 1.5 | 0.181 | 0.18 | S | 49.8 | 1.5 | 0.051 | 0.65 | H | 272.5
Lower rail | a81 [deg] | H | 135 | 1 | 134 | 136 | 135 | 0.530 | 0.63 | S | 49.8 | 135 | 0.093 | 3.58 | H | 329.1
Lower rail | a83 [deg] | H | 135 | 1 | 134 | 136 | 135 | 0.530 | 0.63 | S | 49.8 | 135 | 0.092 | 3.62 | H | 331.6
Lower rail | a90 [deg] | L | 89 | 1 | 88 | 90 | 89 | 0.197 | 1.69 | S | 50.6 | 89 | 0.079 | 4.22 | H | 83.9
Lower rail | a102 [deg] | L | 90 | 1 | 89 | 91 | 90 | 0.197 | 1.69 | S | 50.6 | 90 | 0.055 | 6.06 | H | 96.4
Lower rail | a103 [deg] | L | 90 | 1 | 89 | 91 | 90 | 0.197 | 1.69 | S | 50.6 | 90 | 0.033 | 10.10 | H | 112.6
Lower rail | a104 [deg] | H | 91 | 1 | 90 | 92 | 91 | 0.530 | 0.63 | S | 50.6 | 91 | 0.043 | 7.75 | H | 484.9
Lower rail | r80 [mm] | H | 2.8 | 0.1 | 2.7 | 2.9 | 2.8 | 0.181 | 0.18 | S | 49.8 | 2.8 | 0.043 | 0.78 | H | 331.4
Lower rail | r89 [mm] | L | 2.8 | 0.1 | 2.7 | 2.9 | 2.8 | 0.047 | 0.71 | S | 51.8 | 2.8 | 0.005 | 6.67 | H | 228.1
Lower rail | r92 [mm] | H | 3.3 | 0.1 | 3.2 | 3.4 | 3.3 | 0.181 | 0.18 | S | 49.8 | 3.3 | 0.047 | 0.71 | H | 300.2
Lower rail | r96 [mm] | L | 3.3 | 0.1 | 3.2 | 3.4 | 3.3 | 0.047 | 0.71 | S | 51.8 | 3.3 | 0.006 | 5.42 | H | 207.6
Lower rail | r141 [mm] | H | 2.8 | 0.1 | 2.7 | 2.9 | 2.8 | 0.181 | 0.18 | S | 49.8 | 2.8 | 0.105 | 0.32 | H | 98.0
Lower rail | r198 [mm] | L | 1.5 | 0.1 | 1.4 | 1.6 | 1.5 | 0.047 | 0.71 | S | 51.8 | 1.5 | 0.008 | 4.17 | H | 179.4
Assembly | Total rolling effort [N] | – | 35 | 8 | 27 | 43 | 32.42 | 22.19 | 0.12 | – | – | 34.36 | 2.31 | 1.11 | – | –
Assembly | Total tolerance cost [cost units] | – | – | – | – | – | – | – | – | – | 1110 | – | – | – | – | 5445
5.9 Case study 5.2
5.9.1 Problem definition
The case study presents a significant extension of the tolerance analysis problem presented
in case study 4.2 (Section 4.6) in which product functionality is defined by both external and
internal loading (friction and multi‐body dynamics). The case study is extended to address a
tolerance synthesis problem in which an optimal set of part tolerances is identified,
minimising manufacturing cost while maximising assembly yield. The product under
analysis is a rotary switch assembly used in automotive applications. Positional restraint and
a desired resistive switch actuation torque are provided by a spring loaded radial detent
acting on the perimeter of the switch body (Figure 5.12). The cylindrical detent is located in
a positioning sleeve within which a helical compression spring biases the cylindrical detent
against the switch detent ramp faces. The peak resistive torque is a KPC of the assembly
and depends on:
part geometry;
internal forces due to part acceleration;
external forces, including the spring force;
contact forces between components;
friction coefficient between components in contact.
A sufficient resistive torque is required to provide ergonomically and functionally adequate
positional restraint with a positive impression of product quality. Excessive variation in the
peak resistive torque of manufactured switch assemblies has a negative impact on
perceived product quality. The 8 design variables considered in the simulation are shown in
Figure 5.12 and Table 5.10.
The product requirements define a series of constraints and objectives:
A nominal peak resistive torque of 75 Nmm has been experimentally identified as desirable for the intended application, with specification limits set at 75 ± 7 Nmm (corresponding to Cpm = 1).
The rotary switch and positioning sleeve are injection moulded polymer components. The
radially acting cylindrical detent is machined mild steel. The steel spring is manufactured
on dedicated wire coiling machinery (Wahl 1963; Wood 2006; Mazur et al. 2011).
The different materials and manufacturing processes result in specific cost‐tolerance
characteristics for the associated part parameters. Cost‐tolerance curves for each part
parameter were specified in consultation with an industrial partner and are presented in
Figure 5.13. The estimated curves indicate the expected cost penalty in terms of the
standard deviation of the associated part parameter. The curves are based on a
hypothetical estimate due to a lack of empirical data defining the cost‐tolerance
relationship for the manufacturing processes associated with the assembly components.
A lack of process‐specific empirical data is a common problem in cost‐tolerance modelling due to the broadly varying characteristics of manufacturing process economics (Dong et al. 1994; Hong et al. 2002). Although cost‐tolerance relationships could be defined through dedicated investigation of the industry partner's and suppliers' manufacturing processes, this would demand extensive commitments beyond the scope of this research project. As such, hypothetical cost‐tolerance curves were established which represent the expected relative difficulty in controlling variation in the various assembly parameters.
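As a concrete illustration of the exponential cost‐tolerance form used in Section 5.5.2, a minimal sketch; the coefficients are purely illustrative, not the industry‐derived values of Figure 5.13:

```python
import math

def tolerance_cost(sigma, a=100.0, b=50.0, c=1.0):
    """Hypothetical exponential cost-tolerance curve: cost grows steeply
    as the permitted standard deviation sigma is tightened."""
    return a * math.exp(-b * sigma) + c

# Tightening sigma from 0.05 to 0.01 increases cost roughly 6-7x here:
print(tolerance_cost(0.05), tolerance_cost(0.01))
```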
The case study objective is to specify optimal process capabilities for part parameters, such
that the following objectives and constraints are addressed (Table 5.9).
Table 5.9 ‐ Case study 5.2 objectives and constraints

Objective: Maximize yield of assembly KPC.
Description: Maximize the number of assemblies conforming to the peak resistive torque (KPC) specification requirements.
Constraint: The minimum required assembly yield is 99.7% (Cpm = 1).

Objective: Minimize total tolerance cost.
Description: Minimise the total cost of required part tolerances, dictated by the part parameter specific cost‐tolerance curves (Figure 5.13).
Constraint: Maximum allowable tolerance cost is 515 cost units. This corresponds to the cost of the best design identified in prior work through manual allocation of tolerances (identified in Section 4.6.5.2 and shown for reference in Table 5.10 and Figure 5.15).
Figure 5.12 ‐ Rotary switch and spring loaded radial detent assembly model used in Case study 5.2
(Note: Linear dimensions in mm. Variation in non‐enclosed dimensions not considered in simulation)
Figure 5.13 ‐ Cost‐tolerance curves for part parameters of radial detent assembly
5.9.2 Simulation model and optimization
A parametric numerical model of the switch assembly was constructed using MSC ADAMS
Multi‐Body Dynamics modelling software (Figure 5.12) (Section 4.6.4) accommodating the
possible variation within geometric and physical parameters (such as spring pre‐load, spring
stiffness and friction coefficients).
Two PIDO tools were interfaced with MSC ADAMS software according to the developed
PIDO tolerance synthesis platform (Section 5.4). UQ was conducted using DAKOTA (Adams
2011) with process scheduling and optimization carried out using ESTECO modeFRONTIER
(Figure 5.14). All part parameter distributions were assumed to be Gaussian. Tolerance
synthesis was carried out according to the platform presented in Section 5.4 and consisted
of the following stages:
1. Trial standard deviations (i.e. tolerances) for the stochastic dimensional, spring and friction parameters of the models were selected.
2. Total assembly tolerance cost was calculated and checked for feasibility against
established cost constraints.
3. UQ of the trial standard deviations was conducted with sparse grid based PCE simulation. For each UQ sampling point the CAE simulation consisted of:
a. A rotational velocity of 30 degrees per second was imposed on the rotary switch and
the interaction of components simulated for 500 ms.
b. Peak and transient resistive torque were recorded.
4. Moment estimates for the peak resistive torque KPC were returned from the UQ tool and the process capability index Cpm was calculated.
5. Objective function fitness was assessed with a MOGA optimization algorithm (Section
5.9.4) and new trial standard deviations were generated (return to step 1).
The optimization was terminated at the iteration limit of 300 designs (Section 5.9.4).
Figure 5.14 ‐ PIDO tolerance synthesis workflow for Case study 5.2.
5.9.3 UQ strategy
Sparse grid based PCE was incorporated into the PIDO interface for UQ in the switch
assembly CAE model. The PCE method of UQ drastically reduced the number of model
evaluations required for estimation of the first and second moments of the assembly KPCs.
The SG level was selected by comparing PCE moment estimates of resistive torque for the initial design (Section 5.9.5 and Table 5.10) at progressively larger SG levels. Level 1 required 17 model evaluations whereas level 2 required 177. The differences between the level 1 and level 2 estimates of mean and standard deviation were approximately εµ = 1% and εσ = 3% (where εµ and εσ are defined in (5.55) and (5.56), respectively). The PCE based moment estimates were compared against a MC estimate of 5000 samples. The differences between the SG level 1 and MC estimates of mean and standard deviation are approximately εµ = 2% and εσ = 4%, respectively. Based on the small differences between SG based PCE moment estimates at different grid levels, and the small difference between the MC and SG level 1 estimates, an isotropic SG level of 1 was selected for use in tolerance synthesis simulation to reduce the overall number of model evaluations required.
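The reported evaluation counts are consistent with a level‐1 isotropic Smolyak sparse grid under a linear growth rule, which requires 2d + 1 points in d dimensions; a quick check, assuming that growth rule:

```python
def sg_level1_points(d):
    """Point count of a level-1 isotropic Smolyak grid (linear growth rule)."""
    return 2 * d + 1

print(sg_level1_points(8))   # 17: the 8 switch parameters of Table 5.10
print(sg_level1_points(22))  # 45: the 22 rail parameters of Table 5.8
```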
5.9.4 Optimization strategy
5.9.5 Simulation results and outcomes
Simulation results are shown in Table 5.10 and Figure 5.15. The tolerance synthesis platform
was able to identify a design (design #540) with significantly superior performance to the
previous best identified in tolerance analysis case study 4.2 (Section 4.6.5.2). Compared to
the previous best, design #540 achieves a cost reduction of 40% and an increase in Cpm of 59%. Each evaluation of the CAE model took 8 seconds on a quad core 3.2 GHz CPU. Process
integration overheads amount to a time of approximately 10 seconds per design. The total
number of model evaluations was 5100 (i.e. 300 optimization designs, each with a UQ analysis of 17 model evaluations), resulting in a total simulation time of approximately 14.2 hours. If UQ were conducted through traditional
sampling methods such as MC with a relatively small sample size of 1000, the total
simulation time would be a comparatively impractical 35 days. With additional time
resources the selected design could be subjected to a local refinement to explore the
objectives space within the vicinity of the selected optimum design with greater resolution.
The final design (Design ID#540) was validated against a MC reference estimate of 5000
samples. The differences between the PCE and MC estimates of mean and standard deviation were approximately εµ = 1% and εσ = 5%, respectively. These differences are considered negligible.
Table 5.10 ‐ Case study 5.2 assembly parameters, associated variation and tolerance synthesis outcomes
Parameters: Rswitch [mm] (switch radius), α [deg] (angle of ramp face), θ [deg] (yaw angle of ramp face) and µswitch (switch‐detent dynamic friction coefficient) for the switch; F [N] (spring preload) and K [N/mm] (spring rate) for the spring; Rball [mm] (ball radius) and µslider (slider‐detent dynamic friction coefficient) for the cylindrical detent. The final column gives the assembly peak resistive torque [Nmm] (KPC), or the total tolerance cost of the assembly [cost units] in the tolerance cost rows.

Row | Rswitch | α | θ | F | K | Rball | µswitch | µslider | Torque (KPC)
Nominal | 15 | 30° | 0° | 2 | 0.400 | 3 | 0.150 | 0.150 | 75
Spec. limits ± | 0.250 | 5° | 3° | 0.200 | 0.040 | 0.190 | 0.020 | 0.020 | 7
Min. | 14.750 | 25° | −3° | 1.800 | 0.36 | 2.810 | 0.130 | 0.123 | 68
Max. | 15.250 | 35° | 3° | 2.200 | 0.440 | 3.190 | 0.173 | 0.173 | 82

Initial design:
µ | 15 | 30° | 0° | 2 | 0.400 | 3 | 0.150 | 0.150 | 75.488
σ | 0.083 | 1.667 | 1 | 0.067 | 0.013 | 0.063 | 0.008 | 0.008 | 3.756
Cpm | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0.620
Tolerance cost | 73.104 | 1 | 1 | 13.499 | 70.536 | 98.381 | 112.264 | 112.264 | Total: 482.047

Manual allocation:
µ | 15 | 30° | 0° | 2 | 0.400 | 3 | 0.150 | 0.150 | 75.404
σ | 0.083 | 0.833 | 1 | 0.033 | 0.013 | 0.032 | 0.008 | 0.008 | 1.985
Cpm | 1 | 2 | 1 | 2 | 1 | 2 | 1 | 1 | 1.150
Tolerance cost | 77.818 | 1 | 1 | 41.137 | 70.536 | 98.381 | 112.264 | 112.264 | Total: 514.399

Optimised (Design ID #540):
µ | 15 | 30° | 0° | 2 | 0.400 | 3 | 0.150 | 0.150 | 75.178
σ | 0.145 | 0.169 | 2.767 | 0.027 | 0.008 | 0.023 | 0.019 | 0.038 | 1.261
Cpm | 0.573 | 9.890 | 0.361 | 2.449 | 1.671 | 2.784 | 0.370 | 0.222 | 1.832
Tolerance cost | 79.201 | 4.228 | 1 | 50.655 | 76.355 | 9.244 | 62.771 | 25.721 | Total: 309.175
Figure 5.15 ‐ Objectives space of tolerance synthesis for Case study 5.2.
The computational time could further be reduced by conducting a sensitivity study to assess
if the influence of any part parameters on the assembly KPC is negligible. Parameters with
low influence could be held fixed during UQ to reduce the number of required integration
points. However, this was deemed unnecessary, as the objective of this work is to
demonstrate the significant computational cost reduction in tolerance analysis and
synthesis through application of sparse grid based PCE even in the case of high dimensional
problems.
5.10 Summary of research outcomes
Tolerance synthesis in complex mechanical assemblies can impose impractically high
computational cost demands, particularly when numerical modelling of the effects of
loading on assembly functionality is required. A main contributor to computational expense
is the traditional use of robust, yet inefficient Monte Carlo (MC) sampling for uncertainty
quantification in statistical tolerance analysis. This computational expense is compounded
significantly in tolerance synthesis, as it involves iteration of tolerance analysis. Previous methods for computational expense reduction in tolerance synthesis have mainly focused on reducing the complexity and evaluation time of the associated assembly tolerance model through simplifying model approximations. These approximations can, however, compromise the fidelity of the tolerance model and introduce additional uncertainties.
In this chapter the feasibility of a proposed novel approach was investigated which
addresses the high computational cost of MC sampling in tolerance analysis and synthesis,
using Polynomial Chaos Expansion (PCE) uncertainty quantification. Compared to MC
sampling, PCE results in significant reductions in the number of model evaluations required
for statistical moment estimates.
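To make the projection step concrete, a minimal one‐dimensional sketch of PCE moment estimation for a Gaussian input: a toy response (not one of the thesis models) is projected onto Hermite polynomials by Gauss‐Hermite quadrature and the moments are recovered from the expansion coefficients, then checked against an MC reference.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_moments(f, order=4, quad_pts=8):
    """Mean/std of f(X), X ~ N(0,1), via Hermite-PCE stochastic projection."""
    x, w = hermegauss(quad_pts)          # Gauss points, weight exp(-x^2/2)
    w = w / math.sqrt(2 * math.pi)       # normalise to the Gaussian measure
    fx = f(x)
    coeffs = []
    for n in range(order + 1):
        basis = hermeval(x, [0] * n + [1])   # He_n at the quadrature points
        coeffs.append(np.sum(w * fx * basis) / math.factorial(n))
    mean = coeffs[0]                     # mean is the zeroth coefficient
    var = sum(c**2 * math.factorial(n)   # Var = sum_n c_n^2 * n!, n >= 1
              for n, c in enumerate(coeffs) if n > 0)
    return mean, math.sqrt(var)

# Toy response, compared against a Monte Carlo reference:
f = lambda x: np.exp(0.3 * x)
print(pce_moments(f))                    # ~ (1.046, 0.321)
mc = f(np.random.default_rng(0).standard_normal(100000))
print(mc.mean(), mc.std())
```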
The feasibility of PCE based UQ in tolerance analysis and synthesis was established with:
A theoretical analysis of the PCE method identifying working principles, implementation
requirements, advantages and limitations (Section 5.7 to 5.7.5).
Establishment of recommendations for PCE implementation in tolerance analysis, including methods of PCE coefficient calculation and an approach to error estimation (Sections 5.7.6 and 5.7.7).
Novel implementation in a PIDO based tolerance synthesis platform (Section 5.4).
The resultant PIDO based tolerance synthesis platform (Section 5.4) integrates:
Highly efficient sparse grid based PCE UQ (Section 5.7).
Parametric CAD and FE models accommodating the effects of loading (Section 4.4).
Cost‐tolerance modelling based on exponential functions (Section 5.5.2).
Yield quantification with Process Capability Indices (PCI) (Section 5.5.2).
Optimization of tolerance cost and yield with multi‐objective Genetic Algorithm (GA)
(Section 5.8.5).
The developed PIDO based tolerance synthesis platform was validated using two industry
based case studies. The case studies include:
An automotive seat rail assembly consisting of compliant components subject to loading
by internal forces (Section 5.8).
An automotive switch assembly in which functionality is defined by external forces and
multi‐body dynamics (Section 5.9).
In both case studies optimal tolerances were identified which satisfied desired yield and
tolerance cost objectives. The addition of PCE to the tolerance synthesis platform resulted in
considerable computational cost reductions without compromising accuracy compared to
traditional MC methods (Sections 5.8.7 and 5.9.5). With traditional MC sampling UQ the
required computational expense is impractically high.
Key outcomes of this research are as follows:
The tolerance synthesis platform has been shown to overcome the impractical
computational expense limitations associated with the tolerance synthesis of
assemblies under loading.
PCE has been demonstrated to be effective in significantly reducing the cost of UQ in
tolerance analysis and synthesis.
The developed platform enables tolerance analysis and synthesis integration within
native CAD and FE modelling tools, thereby imposing low implementation demands.
The PIDO based integration of multi‐objective GA optimization offers effective
tolerance synthesis capability in the presence of competing objectives and constraints.
In contrast, the optimization capabilities of CAT tools can be limited (Section 2.7).
Due to the use of dedicated CAE modelling tools (such as ANSYS or ABAQUS FE
modellers) the platform allows sophisticated abilities in modelling the effect of various
loads on mechanical assemblies. As such, the platform could be applied to
accommodate tolerance analysis of assemblies subject to a more general class of
loading (such as fluid flow or thermo‐mechanics).
Despite the enabling capability of the proposed platform, there are some limitations
associated with the approach:
PCE truncation error may be difficult to predict (Section 5.7.7).
Parametric CAD and FE models may have fundamental limitations in: accommodating all possible variation types defined in GD&T standards; and in representing realistic part interactions for a given set of assembly constraints.
Accuracy errors are associated with the physical models used in CAE simulations (for example, due to modelling simplifications).
6 CONCLUSION
6.1 Chapter Summary
Achieving high manufacturing efficiency requires effective management of the effects of
stochastic manufacturing variation on the performance of mechanical assemblies,
particularly in the early design process where the potential to enact change is high.
However, predicting and addressing the effects of manufacturing variation in mechanical
assembly design poses significant challenges due to influences such as: complex part
interactions, multiple variation types, loading effects, as well as competing manufacturing
process cost and capability constraints. Tolerance analysis and synthesis methods for the
management of manufacturing variation have seen extensive research and development
efforts; however specific gaps in domain knowledge and limitations in existing methods
were identified in this research. This dissertation sought to address these limitations, and
develop methods for enhancing the engineering design of mechanical assemblies involving
uncertainty or variation in design parameters. The research strategy for achieving this
objective was directed at exploiting the potential of the emerging design analysis and
refinement capabilities of Process Integration and Design Optimization (PIDO) tools. The
main research objective was the development of a computationally efficient, PIDO based
approach for tolerance analysis and synthesis of assemblies subject to loading, within the
modelling environment of existing standalone CAD/E tools. The objective was successfully
achieved through contributions in three research themes:
Design analysis and refinement accommodating uncertainty in early design;
Tolerancing of assemblies subject to loading; and,
Efficient Uncertainty Quantification (UQ) in tolerance analysis and synthesis.
This chapter presents a summary of the outcomes of this research program and identifies
associated novel contributions according to each research theme. Recommendations for
potential areas warranting further research and development are also presented.
6.2 Contributions
The contributions of this work are presented below according to the associated research
theme.
6.2.1 Design analysis and refinement accommodating uncertainty in early design
(Chapter 3)
This research identified that a number of specific difficulties are encountered by designers
when accommodating uncertainty and variation in early design stages. These difficulties
include:
The identification of Key Product Characteristics (KPCs) in mechanical assemblies (which
are required for measuring functional performance) without imposing significant
additional modelling and expertise demands;
Accommodating the high computational cost of traditional statistical tolerance analysis
in early design where analysis budgets are limited; and,
Identifying feasible regions and optimum performance in early design stages within the
associated large design space.
To address these difficulties a number of novel contributions were developed in Chapter 3
of this research. These contributions are categorically outlined below.
A PIDO tool based visualization method to aid designers in identifying assembly KPCs at the
concept embodiment design stage (Section 3.3) (Mazur et al. 2010)
The developed method integrates the functionality of commercial CAD software with the
process integration, UQ, data logging and statistical analysis capabilities of PIDO tools, to
simulate manufacturing variation effects on the part parameters of an assembly and
visualise assembly clearances, contacts or interferences. Visualizing variation within the
assembly assists the designer in specifying critical assembly dimensions as KPCs for
monitoring.
The method is implemented using a scripting interface between a CAD assembly model and
a PIDO tool workflow. Automated monitoring of assembly parameters potentially relevant
to the functionality of the assembly is established with the definition of measurements of
assembly dimensions such as clearances (this is facilitated by measurement tools embedded
in CAD software). Model parameters are subsequently subjected to expected manufacturing
variation using UQ techniques such as MC sampling. For each assembly instance, assembly
clash and interference analysis capabilities common in CAD software are executed using a
user script to automatically identify any unexpected part interferences. Images of the
assembly instances including detailed views of any interference scenarios are automatically
recorded for review by the designer. The evaluated CAD assembly model instance is stored
for reference. Utilization of embedded measurement and interference analysis capabilities
in CAD assembly environments offers rapid implementation. Visualization is carried out
using native CAD models, which are often available at the concept embodiment design
stage, thereby requiring low additional modelling effort.
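A minimal sketch of this visualization loop; the `cad` object and its methods are hypothetical stand‐ins for the scripting interface of a CAD package as driven from the PIDO workflow, not an actual product API:

```python
import random

def interference_study(cad, dims, n_samples=500, seed=1):
    """MC-sample dimensional variation, run a clash analysis per assembly
    instance, and record images of interferences for designer review."""
    rng = random.Random(seed)
    log = []
    for i in range(n_samples):
        # Perturb each nominal dimension by its expected variation.
        sample = {name: rng.gauss(mu, sigma)
                  for name, (mu, sigma) in dims.items()}
        cad.update_parameters(sample)         # hypothetical scripting call
        clashes = cad.detect_interferences()  # hypothetical scripting call
        if clashes:
            cad.save_image(f"instance_{i:04d}.png")  # hypothetical call
        log.append((sample, clashes))
    return log
```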
The benefit of the proposed method has been validated using an industry based case study.
The method enabled the automated identification of unintended component interactions in
the concept design embodiment of an actuator used for automated folding of automotive
side view mirrors (Section 3.3.2). Key outcomes are:
The application of the developed PIDO visualization method identified a number of
undesirable part clash and contact interactions in the actuator assembly model, after it
was subjected to the expected manufacturing variation (Section 3.3.2.3). The frequency
of interactions between associated part pairs was determined, along with the sensitivity of the number of occurring interaction scenarios to variation in part parameters (Figure 3.8).
Six specific assembly regions particularly prone to unwanted part interference were
subsequently identified (Figure 3.9).
These interactions, which had not been anticipated by the designers despite their
experience with designs of this type, resulted in the specification of assembly KPCs that
would otherwise have been overlooked.
The method developed in this research can be effectively applied to aid in the identification
of KPCs without imposing significant additional modelling and expertise demands.
Estimating the sensitivity of a design to manufacturing variation in the early design stages with statistical tolerance analysis can reduce the costs of managing poor quality later in the manufacturing stage, when the ability to enact change is limited. However, the
computational cost of statistical tolerance analysis can be prohibitively high, especially when
many evaluations of computationally demanding models (such as FE simulations) are
necessary to model physical effects such as compliance. Additionally, the ability to carry out
tolerance analysis efficiently may be limited by the available tools and expertise required for
formulating tolerance models and interpreting tolerance analysis results.
In this research an efficient method for estimating sensitivity to manufacturing variation was
developed which significantly reduces computational cost, and imposes low implementation
demands, for linear‐compliant assemblies under loading. Reductions in computational cost are achieved by utilising linear‐compliant assembly stiffness measures, reuse of CAD models
created in the conceptual and embodiment design stages, and PIDO tool based tolerance
analysis. The associated increase in computational efficiency, allows an estimate of
sensitivity to manufacturing variation to be made earlier in the design process with low
additional effort.
This method was developed as part of a benchmarking study of alternative automotive seat
rail assembly concept embodiments aimed at quantifying their sensitivity to manufacturing
variation (Section 3.4.1). The seat rail assemblies consist of two interlocking rail sections
separated by a series of rolling elements. An interference fit upon assembly elastically
preloads the rail sections which results in compliance effects due to the associated internal
loading. Estimating functionality of the rail assemblies requires an FE simulation of the
contact force between rail sections and rolling elements. Estimating the variation in
functionality with FE models and traditional statistical tolerance analysis imposes significant
computational costs, as a large number of FE model evaluations are required to provide
sufficient accuracy.
An alternative approach was developed in this research which increases computational
efficiency by taking advantage of the linear‐elastic behaviour of the rail assembly (Section
3.4.3.1). Due to the linear‐elastic condition, a measure of assembly stiffness can be used to
estimate sensitivity to manufacturing variation. Estimating the stiffness requires only 3
evaluations of the FE model, significantly reducing overall computational expense. The
benchmarking study identified significant differences in sensitivity to manufacturing
variation between alternative designs (Section 3.4.7). Rail section design characteristics found to influence sensitivity to manufacturing variation include: spacing
between upper and lower rolling elements; the number of section folds leading to the
rolling element location; and, the distance between a section fold and a rolling element.
Identifying the sensitivity to manufacturing variation allowed the designers to proceed into
the detail design stage with higher certainty of performance and with low additional
analysis expense.
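A minimal sketch of this stiffness‐based estimate, assuming a linear force‐displacement response; `fe_contact_force` is a placeholder for a run of the FE model at a given interference value:

```python
def stiffness_sensitivity(fe_contact_force, x0, dx, sigma_x):
    """Estimate contact-force variation from only 3 FE evaluations.

    For a linear-elastic assembly F(x) ~ F(x0) + k*(x - x0), so under
    input variation sigma_x the force standard deviation is |k|*sigma_x.
    """
    f_lo = fe_contact_force(x0 - dx)
    f_mid = fe_contact_force(x0)
    f_hi = fe_contact_force(x0 + dx)
    k = (f_hi - f_lo) / (2.0 * dx)  # central-difference stiffness
    # Curvature check: near zero if the linear-elastic assumption holds.
    nonlinearity = abs(f_hi - 2.0 * f_mid + f_lo) / max(abs(f_hi - f_lo), 1e-12)
    return abs(k) * sigma_x, nonlinearity
```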
Due to the high associated efficiency, the method may be applied at the conceptual and
design embodiment stages; thereby increasing knowledge early in the design process where
analysis time budgets for individual concepts are limited. By measuring assembly stiffness in
a similar manner, the developed method can be generalised and applied to assess the
sensitivity to manufacturing variation in other linear‐compliant assemblies whose
functionality is dependent on applied loads. The outcome is a significant reduction in the
computational cost of tolerance analysis.
For scenarios in which such a linear‐compliant approach is not valid, the contributions in
Chapter 5 present an alternative approach to reducing the computation cost of tolerance
analysis based on more efficient uncertainty quantification. Associated contributions are
summarised in Section 6.2.3.
Refinement of concept design embodiments through PIDO based DOE analysis and
optimization (Section 3.5) (Leary, Mazur et al. 2010; Leary, Mazur et al. 2011)
The conceptual and embodiment stages of the design process can be associated with a vast
design space in which regions of desirable performance are difficult to identify. In this
research a design analysis and refinement approach, based on DOE analysis and
optimization with PIDO tools, was presented which allows effective exploration of the
design space and identification of optimum regions in the presence of complex constraints
and competing objectives. The resulting increase in understanding of the design space early in the design process allows design improvements to be made when overall project cost
commitments are low and design flexibility is high.
The conceptual design of automotive seat kinematics was presented as a highlighting case
study. Automotive seat kinematics systems are associated with a large design space due to
the number of design parameters (link lengths and frame position) and associated
permutations (Section 3.5.1). Furthermore, multiple constraints and objectives hinder
systematic optimization efforts. Historically, such problems were solved by inspection or with graphical or numeric aids. This research has presented an approach for resolving these
conflicting design requirements at the conceptual design stage by mapping the feasible
design space and rapidly identifying regions of high performance.
The capabilities of PIDO tools were utilised to allow CAE tool integration, and efficient reuse
of models created in the conceptual and embodiment design stages, to rapidly identify
optimal regions in the design space (Section 3.5.2). The design refinement and analysis
approach consisted of two phases: initial analysis and design refinement. The initial phase
involved a high resolution DOE analysis which identified regions of infeasible performance in
the design space. Subsequent refinement was carried out with a multi‐objective GA
optimization analysis initialised with the identified feasible design space. The optimization
problem involved a complex set of four competing objectives and seven constraints. The
total simulation size involved 122,880 model evaluations in which 20,204 feasible designs
were identified with 304 designs being Pareto‐optimal (Figure 3.23). Designer preference
objective weighting was applied and the Pareto‐optimal design space was reduced to three
designs of interest which offered a balanced compromise between competing objectives.
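A minimal sketch of the two‐phase structure, with `evaluate` and `feasible` as placeholders for the kinematic model and its constraint checks; the GA phase is indicated only as an initialisation hand‐off:

```python
import itertools

def phase1_doe(levels_per_param, evaluate, feasible):
    """Phase 1: full-factorial DOE mapping the feasible design space."""
    feasible_designs = []
    for design in itertools.product(*levels_per_param):
        result = evaluate(design)          # kinematic model (placeholder)
        if feasible(result):               # constraint checks (placeholder)
            feasible_designs.append((design, result))
    return feasible_designs

# Phase 2: a multi-objective GA is then seeded with the feasible designs
# from phase 1, rather than a random population, concentrating the search
# on regions already identified as viable.
```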
An identified Pareto‐optimal concept was selected for detail design and manufacture by the
industry partner. The selected design was found to offer the best performance in achieving
a vertical seat travel objective with the least number of manual actuations (these are
actuations required to lift the seat for a given fixed lift effort) (Table 3.10). The superior
performance against competitors in seat actuation demands was a determining factor for
the selection of the design in the seat assembly of the Tesla Motors Model S full‐sized
electric sedan currently on sale in the United States.
The presented PIDO based DOE analysis and optimization approach to exploring the design
space in the conceptual and embodiment design stages can be broadly applied. An example
of conceptual wheelchair design was used to demonstrate the benefits of the developed
approach (Section 3.5.3) (Burton et al. 2010; Leary et al. 2012).
This contribution has highlighted the benefits of investigating the conceptual and
embodiment design space through DOE analysis and optimization, with a practical case
study. The outcome has resulted in the successful commercialisation of an automotive seat
kinematics design.
The novel contributions presented in Chapter 3 have been demonstrated to enhance the
design of mechanical assemblies involving uncertainty or variation in design parameters, in
the early stages of design. The contributions have addressed the domain knowledge
limitations identified in research associated with: the identification of KPCs; accommodation
of the high computational expense of tolerance analysis; and, identification of optimal
regions in broad design spaces. Practical conceptual and embodiment design problems were
considered, and effective solutions developed for a number of industry relevant scenarios.
The use of PIDO tools and native CAD/E models developed as part of established design modelling procedures enables these contributions to be applied with low additional
modelling effort. The outcomes allow designers to make informed decisions which positively
influence the design early in the design process while cost commitments are low.
6.2.2 Tolerancing of assemblies subject to loading (Chapter 4)
Despite the extensive capability of existing analytical and numerical methods in addressing
tolerance analysis problems in complex mechanical assemblies, this research identified that
key limitations remain. Particular limitations are associated with the ability to
comprehensively accommodate tolerance analysis problems in which assembly functionality
is dependent on the effects of loading (such as compliance or multi‐body dynamics). Current
methods are limited by: the ability to accommodate only specific loading effects; reliance on
custom simulation codes with limited implementation in practical and accessible software
tools; and, the need for additional expertise in formulating specific assembly tolerance
models and interpreting results.
Computer Aided Tolerancing (CAT) tools were identified that accommodate a limited subset
of loading effects such as deformation of sheet assemblies (Section 2.7). However, in
general the abstracted geometric tolerance models employed in current CAT systems are
incompatible with tolerance analysis involving a general class of problem requiring numeric
simulation of assemblies under loading. As such, this research identified that there is a lack
of an accessible approach to tolerance analysis of assemblies subject to a general class of
loading effects, which integrates effectively into the established CAD/E design framework.
These limitations were addressed in Chapter 4 of this work through the development of a
novel tolerance analysis platform which integrates CAD/E and statistical analysis tools, using
PIDO tool capabilities, to facilitate tolerance analysis of assemblies subject to loading
(Section 4.4). Integration was achieved by developing script based links between standalone
CAD/E software, through commonly embedded scripting capabilities, and the process
integration facilities of PIDO tools. This integration allows for tolerance analysis through:
Definition of a parametric CAD assembly model, including tolerance types and datums,
for features of interest as well as part relationships such as assembly sequence and
mating conditions (Section 4.4.2).
Numerical simulation of the effect of loads on assembly functionality by integration of
CAD models with CAE tools such as FE modellers (Section 4.4.3).
Uncertainty quantification (Section 4.4.4) and yield estimation (Section 4.4.6) with the
statistical simulation capabilities of PIDO tools.
Storage of simulation inputs and results in a variation database (Section 4.4.5).
Key contributions of this research are:
The proposed platform extends the capabilities of traditional CAT tools and methods by
enabling tolerance analysis of assemblies which are dependent on the effects of loads.
The ability to accommodate the effects of loading in tolerance analysis allows for an
increased level of capability in estimating the effects of variation on functionality.
The interdisciplinary integration capabilities of the PIDO based platform allow for CAD/E
models created as part of the standard design process to be used for tolerance analysis.
The need for additional modelling tools and expertise is subsequently reduced.
The use of specialised CAE modelling tools such as FE modellers (for example ABAQUS or
ANSYS) or multi‐body dynamics simulation codes (such as MSC ADAMS) allows
sophisticated abilities in modelling the effect of various loads on mechanical assemblies.
As such, the application of the platform can be extended to accommodate tolerance
analysis of assemblies subject to a more general class of load (for example thermo‐
mechanics or fluid flow) as well as challenging scenarios involving transient or non‐linear
response.
To demonstrate the capabilities of the developed platform, industry based case study
tolerance analysis problems involving assemblies subject to loading were presented. The
specific outcomes of those case studies are summarised below.
Case study 4.1: An automotive actuator assembly in which functionality is defined by
compliance of part geometry due to external loading (Section 4.5).
This case study addressed a tolerance analysis problem involving an automotive actuator
assembly consisting of a rigid spigot and compliant spring undergoing compression due
to external loading. Functional characteristics required clearance between the spigot
wall and the spring at all times (while minimising overall packaging space) and that
manufactured assemblies achieve this requirement with a yield of Cpm = 1 (i.e. 99.7%). The objective was to specify the nominal spigot diametral dimension (Figure 4.3) such that the required assembly yield is achieved when the assembly parameters are subjected to the expected manufacturing variation.
No analytic solution directly applicable to the dilation of the squared and ground
compression spring used in the actuator was found in the literature (Section 4.5.1). This
problem was therefore a suitable case study for the proposed platform as numerical
models are required. The platform proposed in this work was applied to overcome these
limitations.
To estimate the expected manufacturing variation associated with the spigot and spring
components, existing manufacturing process data for similar components was analysed
and used as input into subsequent tolerance analysis simulations (Section 4.5.3).
Parametric CAD and FE models of the spigot and compliant spring were developed
(Figure 4.4) and interfaced using a PIDO tool according to the proposed platform (Figure
4.5). The resulting model was used to estimate expected assembly yield with a Monte
Carlo UQ simulation based on a sample size of 1000.
The tolerance analysis platform was applied to identify that, for the initially specified tolerances, the assembly yield was unacceptable at Cpm = 0.68 (97%). The nominal diametral pocket dimension required to achieve the desired assembly yield was subsequently identified.
Case study 4.2: An automotive switch assembly in which functionality is defined by external
loading and internal multi‐body dynamics (Section 4.6).
The second case study involved an automotive rotary switch in which a resistive
actuation torque is provided by a spring loaded radial detent acting on the perimeter of
the switch body. Functional characteristics required that the peak resistive torque for
switch actuation be within an ergonomically desirable range. As such, assembly
functionality is defined by both frictional and multi‐body dynamics loading effects. The
case study objective was to specify detent and spring parameter tolerances, such that a peak resistive torque specification limit of 75 ± 7 Nmm is achieved, with a yield requirement of Cpm = 1 (i.e. 99.7%).
A parametric numerical model of the switch assembly was constructed in MSC ADAMS
multi‐body dynamics modelling software (Figure 4.12) accommodating the possible
variation within geometric and physical assembly parameters (such as spring pre‐load,
spring stiffness and friction coefficients). No directly comparable algebraic model for the
switch assembly was identified, however, an approximate model was used to confirm
that the numerically predicted results were of a similar magnitude (Section 4.6.4).
Existing manufacturing process data for similar components was used as input into
subsequent tolerance analysis simulations (Section 4.6.3).
The numerical model was interfaced with a PIDO tool according to the developed
platform (Figure 4.14) and subjected to Monte Carlo UQ simulation with a sample size of
1000. The simulation results showed that the required assembly yield was not met (achieved Cpm = 0.62, required Cpm = 1) with the initially specified part parameter tolerances (Table 4.4).
Based on the outcome of the initial MC simulation, the required process capabilities for
the most influential parameters were increased (Table 4.4). A subsequent simulation
successfully identified the part parameter tolerances required to achieve the required
assembly yield (Section 4.6.5.2).
The presented case studies demonstrated that the PIDO based tolerance analysis platform
developed in this research has proven benefit in solving practical tolerance analysis
problems, involving assemblies subject to various loading effects. The platform offers an
accessible tolerance analysis approach with low implementation demands due to
integration with the established CAD/E modelling design framework. It provides capability
and flexibility which is not otherwise available with existing CAT tools. However an
associated limitation is the potentially high computational cost of traditional uncertainty
quantification with tolerance models which are computationally demanding to evaluate
(such as FE models of the effects of loads on mechanical assemblies). The contributions of
Chapter 5 respond to this limitation.
6.2.3 Efficient uncertainty quantification in tolerance analysis and synthesis (Chapter 5)
The cost of tolerance synthesis involving demanding assembly models (particularly
assemblies under loading) can often be computationally impractical. The high
computational cost is mainly associated with traditional statistical tolerancing Uncertainty
Quantification (UQ) methods reliant on low‐efficiency Monte Carlo (MC) sampling.
Chapter 5 responded to the hypothesis that Polynomial Chaos Expansion (PCE) based UQ is
feasible in tolerance analysis, and that the significant reduction in computational costs
associated with PCE can enable the PIDO based tolerance analysis platform developed in
Chapter 4 to be extended to allow multi‐objective, tolerance synthesis in assemblies subject
to loading. This hypothesis was assessed and the feasibility of using PCE for UQ in the
developed PIDO tolerance analysis and synthesis platform was subsequently established.
This was achieved by:
A theoretical analysis of the PCE method identifying working principles, implementation
requirements, advantages and limitations (Section 5.7 to 5.7.5). This analysis identified
that the PCE method meets the requirements for implementation in tolerance analysis
in the context of this research, namely: non‐intrusive nature that is applicable to
integration with existing CAD/E and PIDO tools; high efficiency and accuracy; flexibility in
accommodating various input parameter distributions; and ability to efficiently
accommodate problems with high dimensionality.
Identification of a preferred method for determining PCE expansion coefficients in
tolerance analysis. A number of methods were assessed (Section 5.7.5). It was found
that the effective applicability of each PCE coefficient determination method depends
on the dimensionality of the problem, as well as the order of the PCE expansion; both of
which are dictated by the UQ simulation requirements. It was concluded that a sparse
grid based stochastic projection technique for determining PCE coefficients be adopted
in tolerance analysis and synthesis (Section 5.7.6). This is due to superior efficiency (the
least number of required model evaluations) for a broad range of problem
dimensionality (as demonstrated in Table 5.2 and Table 5.4).
Formulation of an approach for the validation of PCE moment estimates (Section 5.7.7).
It was found that a reference MC moment estimate is currently the most broadly applicable and robust means of validating PCE moment estimates for implicit systems. It is recommended that MC estimates be used to validate the initial and final designs evaluated
as part of tolerance synthesis. This approach was successfully adopted in the case
studies considered in this work and showed that PCE moment estimates closely match
those obtained with traditional MC simulation.
PCE based UQ was subsequently implemented in a PIDO based tolerance synthesis platform
for assemblies subject to loading (Section 5.4). This was achieved by extending the tolerance
analysis platform developed in Chapter 4, with the additional development of the script
based links between standalone CAD/E software to include additional PCE implementation
codes and cost‐tolerance models. The resultant PIDO based tolerance synthesis platform
(Section 5.4) integrates: Highly efficient sparse grid based PCE UQ (Section 5.7); Parametric
CAD and FE models accommodating the effects of loading (Section 4.4); Cost‐tolerance
modelling (Section 5.5.2); Yield quantification with Process Capability Indices (PCI) (Section
5.5.2); and, Optimization of tolerance cost and yield with multi‐objective Genetic Algorithm
(GA) (Section 5.8.5).
The PIDO tolerance synthesis platform incorporating PCE UQ was validated using two
industry relevant case studies. The specific outcomes of those case studies are summarised
below.
Case study 5.1: An automotive seat rail assembly consisting of compliant components
subject to internal loading (Section 5.8).
This case study significantly extended the work of Section 3.4 and addressed a tolerance
synthesis problem involving an automotive seat rails assembly consisting of two
interlocking rail sections separated by a series of rolling elements. The rail sections are
preloaded elastically by an interference fit upon assembly and are subject to compliance
effects as a result of the associated internal loading. The objectives of tolerance
synthesis were to identify optimal rail section bend angle and bend radii tolerances,
which: maximize the number of assemblies meeting a rail rolling effort force
requirement of 35 ± 8 N with a corresponding minimum required assembly yield of
= 1 (i.e. 99.7%); and, minimize the associated total tolerance cost with a maximum
allowable cost of 8000 cost units.
A rail assembly tolerance model was constructed consisting of: a CATIA based CAD
model defining rolling element position; and, an ABAQUS FE model estimating bearing
contact force due to internal loading (Section 5.8.3). Tolerance cost was modelled using
cost‐tolerance curves developed in conjunction with an industry partner (Figure 5.7 and
Figure 5.8). This was based on CMM metrological measurements conducted on 24 rail
assembly sets currently produced by the industry partner (Section 5.8.2).
The tolerance models were interfaced according to the developed PIDO tolerance
synthesis platform (Figure 5.10). UQ was conducted using a sparse grid based PCE
approach (Section 5.8.4).
Initial tolerance analysis revealed that the yield, which would be achieved if the case
study rail assembly were to be manufactured with the same process standards as the
measured rails, was unsatisfactorily low at Cpm = 0.12. Tolerance synthesis was
subsequently conducted and a number of Pareto optimal designs with higher yield were
identified. From the identified Pareto optimal designs (Figure 5.11) the lowest cost design (Design #114) was selected as the preferred candidate. The selected design offers the lowest tolerance cost while exceeding the yield requirement of Cpm = 1 by 11% (achieved Cpm = 1.11). The higher yield allows for conservatism in the estimated
results.
The associated PCE based moment estimates for the initial and final designs were
compared against a reference MC estimate of 3000 samples. The differences for the
initial design between PCE and MC estimates of mean and standard deviation were approximately εµ = 3% and εσ = 5%, respectively (Section 5.8.4). For the final design (Design ID #114) the differences between the PCE and MC estimates of mean and standard deviation were approximately εµ = 3% and εσ = 4%, respectively (Section 5.8.7). These differences are considered negligible.
The implementation of sparse grid based PCE for UQ reduced the computational cost of
tolerance synthesis by a factor of 22, compared to a traditional MC based approach
(Section 5.8.7). The computational cost associated with the tolerance synthesis would be
impractically high if conducted using traditional MC based UQ, whereas the sparse grid
based PCE method provides a feasible alternative.
Case study 5.2: An automotive switch assembly in which functionality is defined by external
forces and internal multi‐body dynamics (Section 5.9).
The case study focused on the allocation of tolerances in a rotary switch assembly subject to loading. This presented a significant extension of the tolerance analysis problem of case study 4.2 by addressing a tolerance synthesis problem in which an optimal set of part tolerances is identified, minimising manufacturing cost while
maximising assembly yield. The objectives were to specify detent and spring parameter
tolerances, which: maximize the number of assemblies conforming to the peak resistive
torque specification requirements of 75 ± 7 Nmm (corresponding to Cpm = 1); and,
minimise the total cost of required part tolerances with a maximum allowable tolerance
cost of 515 cost units. Tolerance cost was modelled using cost‐tolerance curves
developed in conjunction with an industry partner (Figure 5.13).
A parametric numerical model of the switch assembly (Section 4.6.4) was interfaced
with the developed PIDO tolerance synthesis platform (Section 5.4). Tolerance synthesis
was subsequently conducted and a number of Pareto optimal designs were identified.
The tolerance synthesis platform was able to identify a design (design #540 in Table 5.10
and Figure 5.15) with significantly superior performance to the previous best identified
in tolerance analysis case study 4.2 (Section 4.6.5.2). Compared to the previous best,
design #540 achieves a cost reduction of 40% and an increase in Cpm of 59%.
The associated PCE based moment estimates for the initial and final designs were
compared against a reference MC estimate of 5000 samples. The differences for the
initial design between PCE and MC estimates of mean and standard deviation were approximately εµ = 2% and εσ = 4%, respectively (Section 5.9.3). For the final design (Design ID #540) the differences between the PCE and MC estimates of mean and standard deviation were approximately εµ = 1% and εσ = 5%, respectively (Section 5.9.5). These differences are considered negligible.
The implementation of sparse grid based PCE for UQ reduced the computational cost of
tolerance synthesis by a factor of 59, compared to a traditional MC based approach
(Section 5.9.5). The computational cost associated with the tolerance synthesis would be impractically high if conducted using traditional MC based UQ.
In both case studies optimal tolerances were identified which satisfied yield and tolerance
cost objectives. The addition of PCE to the tolerance synthesis platform resulted in large
computational cost reductions without compromising the accuracy achieved with traditional
MC methods.
This research has contributed to addressing the high cost of UQ with the novel integration
of highly efficient Polynomial Chaos Expansion (PCE) based UQ with tolerance analysis and
synthesis. The resulting significant reduction in computational cost enables the PIDO based tolerance analysis platform developed in Chapter 4 to be further extended to allow multi‐objective tolerance synthesis in assemblies subject to loading. The resulting tolerance
synthesis platform can be applied to tolerance analysis and synthesis with significantly
reduced computation time while maintaining accuracy.
Key contributions of Chapter 5 are:
The tolerance synthesis platform has been shown to overcome the impractical
computational expense limitations associated with traditional, low‐efficiency Monte
Carlo based UQ tolerance synthesis of assemblies under loading.
PCE has been demonstrated to be effective in significantly reducing the cost of UQ in
tolerance analysis and synthesis.
The developed platform enables tolerance analysis and synthesis integration within
native CAD and FE modelling tools, thereby imposing low implementation demands.
Due to the use of standalone CAE modelling tools, the effects of various loads on mechanical assemblies can be accommodated, thereby broadening the application of the platform.
The PIDO based integration of multi‐objective GA optimization offers effective
tolerance synthesis capability in the presence of competing objectives and constraints.
In contrast, the optimization capabilities of CAT tools can be limited (Section 2.7).
6.3 Future work
This research developed novel PIDO based methods that have the capacity to improve the
design of mechanical assemblies involving uncertainty or variation in design parameters.
The main contribution focused on efficient tolerance analysis and synthesis of assemblies
subject to loads, within the native CAD/E modelling environment, using a PIDO tool based
platform. The effectiveness of the research outcomes of this work is validated with a
number of practical, industry related case studies. Although the contributions are well
established, further research prospects exist. This section provides a list of potential
avenues for future work associated with the general research objectives of this project.
Significant additional reductions in the computational cost of UQ associated with
tolerance analysis can be achieved if the assembly response function exhibits pronounced anisotropic sensitivity to the associated part parameters. Such system
anisotropy can be exploited with anisotropic sparse grid quadrature PCE techniques.
These techniques use a different univariate quadrature rule level for different sparse
grid dimensions, effectively reducing the total number of points used in the sparse grid
(and subsequently reducing the number of system model evaluations) (Section 5.7.5.5).
However, in complex systems (such as mechanical assemblies with many parts and features) the nature of the anisotropy is often not known a priori; as such, an active research focus of PCE based UQ has been the development of effective anisotropic approaches with automated adaptivity. Adaptive techniques aim to discern the system anisotropy automatically by progressively increasing quadrature rule levels in selected dimensions and monitoring the associated influence on the system outputs, as sketched below. Recent advances in the effectiveness of these techniques offer promising prospects for further UQ cost reduction in tolerance analysis and should be investigated further (Jakeman, Archibald et al. 2011; Jakeman and Roberts 2011; Liu et al. 2011).
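To make the adaptive idea concrete, the following is a minimal illustrative sketch in Python (not part of the developed platform). It assumes independent standard-normal input parameters and, for brevity, uses an anisotropic tensor-product Gauss-Hermite rule rather than a true Smolyak sparse grid; the function names (tensor_mean, adaptive_levels) and the greedy refinement criterion are illustrative assumptions only.

import itertools
import numpy as np

def tensor_mean(f, levels):
    """Estimate E[f(X)] for independent standard-normal inputs using an
    anisotropic tensor-product Gauss-Hermite rule: dimension d gets a
    univariate rule with levels[d] points."""
    rules = []
    for lev in levels:
        pts, wts = np.polynomial.hermite_e.hermegauss(lev)
        rules.append((pts, wts / wts.sum()))  # normalise to a probability measure
    mean = 0.0
    for idx in itertools.product(*(range(len(r[0])) for r in rules)):
        point = np.array([rules[d][0][i] for d, i in enumerate(idx)])
        weight = np.prod([rules[d][1][i] for d, i in enumerate(idx)])
        mean += weight * f(point)
    return mean

def adaptive_levels(f, dims, refinements=4):
    """Greedy dimension-adaptive refinement: repeatedly raise the quadrature
    level in the dimension whose refinement changes the mean estimate most."""
    levels = [1] * dims
    current = tensor_mean(f, levels)
    for _ in range(refinements):
        trial_means = []
        for d in range(dims):
            trial = list(levels)
            trial[d] += 1
            trial_means.append(tensor_mean(f, trial))
        d_best = int(np.argmax([abs(m - current) for m in trial_means]))
        levels[d_best] += 1
        current = trial_means[d_best]
    return levels, current

# Hypothetical assembly response: strongly nonlinear in the first parameter,
# nearly linear (hence 'easy') in the other two.
response = lambda x: np.exp(0.9 * x[0]) + 0.05 * x[1] + 0.01 * x[2]
print(adaptive_levels(response, dims=3))  # most refinement lands in dimension 0

In a tolerance analysis setting the model f would be the CAD/FE assembly response; a production implementation would use genuine sparse grid combinations (e.g. Gerstner and Griebel 2003) to avoid the exponential growth of tensor grids.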
Recent advances have been made in the development of tools for facilitating
grid/distributed computing capabilities with standalone CAD/E software. Grid
computing has the capability to accelerate simulation times by enabling sharing of
computational resources over multiple computers networked through conventional
interfaces such as Ethernet (Rawat et al. 2009; Mo et al. 2010; Pan et al. 2010). These capabilities allow the computer workstation infrastructure available in typical engineering design office environments to be utilised for simulation during times when it is otherwise idle (such as overnight). A central distributed scheduling tool installed on one main
workstation coordinates multiple simulation instances over a number of workstations in
the grid network; simulation outputs are subsequently compiled on the main
workstation. The CAD/E software tools required for simulation need to be installed
locally on each workstation, along with a grid communication module. As multiple
CAD/E workstation software licences are a common scenario in engineering design
environments, this approach can increase utilization of available resources with low
implementation demands. The PIDO tolerance analysis and synthesis platform developed in this research is compatible with grid computing tools; this capability offers an interesting opportunity for further research. The scheduling pattern is sketched below.
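The following minimal Python sketch illustrates the scheduling pattern only, using worker processes on a single machine as stand-ins for networked workstations; evaluate_assembly is a hypothetical placeholder for a CAD/E simulation, and no actual grid middleware is shown.

import math
from concurrent.futures import ProcessPoolExecutor

def evaluate_assembly(sample):
    """Hypothetical stand-in for a CAD/E assembly evaluation; in practice each
    call would invoke locally installed CAD/FE software on a grid workstation."""
    dim_a, dim_b = sample
    return math.hypot(dim_a, dim_b)  # e.g. some clearance measure

if __name__ == "__main__":
    # Sampled part dimensions to be evaluated (e.g. a Monte Carlo or PCE design).
    samples = [(10.0 + 0.01 * i, 5.0 + 0.005 * i) for i in range(100)]
    # Worker processes stand in for networked workstations; a real grid tool
    # dispatches jobs over Ethernet and compiles results on the main workstation.
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(evaluate_assembly, samples))
    print(f"{len(results)} evaluations, min={min(results):.3f}, max={max(results):.3f}")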
A number of mesh morphing methods have been developed for directly modifying FE model dimensions without requiring geometric changes to the associated underlying CAD models (Owen et al. 2010; Franciosa et al. 2011). Changes in geometry are achieved by directly reshaping the mesh elements at the nodal coordinate level. Reducing the need for CAD model updates can decrease the overall computational cost of tolerance analysis of assemblies requiring FE models to quantify the effects of loading. The feasibility of incorporating mesh morphing into the tolerance analysis and synthesis platform developed in this work offers interesting prospects for future research; a minimal sketch of the nodal reshaping idea follows.
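A minimal Python sketch of the underlying idea, assuming a simple linear blend of nodal x-coordinates (actual morphing tools use more sophisticated schemes such as radial basis functions); morph_length and its arguments are illustrative assumptions.

import numpy as np

def morph_length(nodes, x0, x1, delta):
    """Stretch the mesh region between x0 and x1 by delta along x: nodes at
    x <= x0 stay fixed, nodes at x >= x1 translate rigidly, nodes in between
    blend linearly. No CAD rebuild or re-meshing is required."""
    morphed = nodes.copy()
    blend = np.clip((morphed[:, 0] - x0) / (x1 - x0), 0.0, 1.0)
    morphed[:, 0] += blend * delta
    return morphed

# Example: perturb a nominal 100 mm dimension by +0.2 mm on a small 2D mesh.
nodes = np.array([[0.0, 0.0], [50.0, 0.0], [100.0, 0.0], [100.0, 20.0]])
print(morph_length(nodes, x0=0.0, x1=100.0, delta=0.2))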
APPENDICES
A. TOLERANCING SCHEMES
A.1 Dimensional tolerancing
Dimensional tolerancing specifies the acceptable size of a part feature and any associated fit
between features of mating parts. The size of a feature is specified by a nominal (basic) size,
and an associated Lower Specification Limit and Upper Specification Limit (LSL and USL⁴,
respectively). The tolerances that can be specified are limited to an allowable plus or minus
variation on the linear dimensions of a feature. The dimensional tolerancing method is
straightforward and used extensively, but is subject to a series of deficiencies (Voelcker et
al. 1993; Voelcker 1998; Cogorno 2006):
1. Dimensional tolerancing methods typically result in rectangular tolerance zones which
may not capture the intended assembly functionality. For example, rectangular
tolerance zones are not well suited for tolerancing cylindrical features.
2. The interaction between tolerances is not accommodated. This may result in scenarios
where a tolerance is unnecessarily stringent.
3. Dimensional tolerancing does not accommodate the relative importance of datums.
A.2 Geometric Dimensioning and Tolerancing (GD&T)
To overcome the deficiencies of dimensional tolerancing, a more robust tolerancing scheme
was developed based on allowable geometric volumes. Geometric tolerancing allows for the
definition of tolerance types which are not limited to linear dimensions of a part feature,
but also accommodate geometric characteristics such as variation in surface flatness. This
tolerancing scheme, known as Geometric Dimensioning and Tolerancing (GD&T), was developed and standardised based on conventions derived from an accumulation of
⁴ The terminology Maximum Material Condition (MMC) and Least Material Condition (LMC) is also applied (ASME 2009). The MMC is associated with the largest allowable component volume, and is therefore dependent on the dimension type. For example, holes remove material and the MMC is equivalent to the LSL hole diameter; cylindrical features such as shafts add material and the MMC occurs at the USL diameter. USL and LSL will be used in this work as they are associated with the dimension magnitude and are independent of whether the dimension increases or decreases part volume.
empirical knowledge and engineering practice within industry over many decades (Voelcker
et al. 1993; Voelcker 1998; Shah et al. 2007).
The GD&T approach defines tolerance zones within which a feature is allowed to vary. In
addition to the capabilities of dimensional tolerancing, a geometric tolerancing approach
accommodates specific types of geometric variations in feature parameters such as
orientation, location, form, profile and runout. The specific types of geometric tolerances
are classified according to their functionality:
Orientation – specifies the permitted rotation of a feature relative to a datum.
Location – specifies the allowable deviation in the location of a feature from a desired
nominal location specified by a datum.
Form – specifies the amount a surface of a feature is allowed to deviate from the desired
nominal. The nominal surface is taken as the datum.
Profile – comparable to form tolerances but are specified to a datum external to the
feature.
Runout – defined in terms of the radial variation from a true datum circle measured at one axial location of a cylindrical feature (circular runout), or by the circular runout measured along the entire axis of the cylindrical feature (total runout).
The variation types are represented with standardised symbols and notations (Table A.1).
Additional concepts such as virtual and resultant condition boundaries and material
condition modifiers are also defined in GD&T tolerancing schemes. These are formalised in
numerous drawing standards, such as:
ASME Y14.5 ‐ 2009 Dimensioning and Tolerancing (ASME 2009)
ISO 1101:2005 Geometrical Product Specifications (GPS). Tolerancing of form,
orientation, location and run‐out (ISO 2005).
ISO 5458:1998 Geometric Product Specifications (GPS). Positional tolerancing (ISO 1998).
ISO 286‐1:1988 ISO system of limits and fits (ISO 1988)
GD&T standards for three‐dimensional CAD include:
ASME Y14.41‐2003 Digital Product Definition Data Practices (ASME 2003)
ISO 16792:2006 Technical product documentation – Digital product definition data
practices (ISO 2006)
Table A.1 ‐ GD&T variation types and standardised symbols (after ANSI Y14.5).

Geometric variation category    Geometric variation type    GD&T symbol (ANSI Y14.5)
Orientation                     Angularity                  ∠
                                Parallelism                 //
                                Perpendicularity            ⊥
Location                        Concentricity               ◎
                                Positional                  ⌖
                                Symmetry                    ⌯
Form                            Circularity                 ○
                                Cylindricity                ⌭
                                Flatness                    ⏥
                                Straightness                ⎯
Profile                         Profile of a line           ⌒
                                Profile of a surface        ⌓
Runout                          Circular runout             ↗
                                Total runout                ⌰
The most widely adopted standards are ASME Y14.5 and ISO 1101 which offer a
comprehensive specification of GD&T principles. These principles are implemented by the
use of a Feature Control Frame (Figure A.1). The feature control frame is a standardised
graphical framework which conveys information concerning geometric tolerancing symbols,
tolerance values, and datum references. Feature control frames are associated with
dimensions, datum references, or features in engineering drawings. A feature control frame typically has a standardised GD&T variation type symbol in the first cell, and a tolerance value with a zone descriptor and material modifier (if any) in the second cell, followed by cells containing datum references which depend on the design specification that the tolerance definition is intended to convey. For example, the feature control frame in Figure A.1 is to be interpreted as: "The position of the axis of a feature when produced within allowable size limits can be off-centre within a diametral tolerance zone of 0.15 when produced at Maximum Material Condition. The feature is primarily located on Datum A, secondarily on Datum Feature B when produced at Maximum Material Condition, and on Datum C as the tertiary reference."
Figure A.1 ‐ GD&T tolerance control frame
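The cells of a feature control frame map naturally onto a simple data structure. The following Python sketch is a hypothetical representation (not drawn from any CAT tool); it also illustrates the standard 'bonus tolerance' rule of GD&T, whereby the positional tolerance grows as a feature of size departs from MMC.

from dataclasses import dataclass

@dataclass
class FeatureControlFrame:
    """Hypothetical container mirroring the cells of a feature control frame."""
    characteristic: str                   # first cell, e.g. "position"
    tolerance: float                      # second cell: zone size
    diametral_zone: bool = True           # zone descriptor (diameter symbol)
    at_mmc: bool = True                   # material condition modifier (circle M)
    datums: tuple = ("A", "B", "C")       # remaining cells: datum precedence

    def effective_tolerance(self, actual_size, mmc_size):
        """Positional tolerance plus the 'bonus' gained as a feature of size
        departs from MMC (applies when the MMC modifier is specified)."""
        bonus = abs(actual_size - mmc_size) if self.at_mmc else 0.0
        return self.tolerance + bonus

# The frame of Figure A.1: position, diametral zone of 0.15 at MMC, datums A|B|C.
fcf = FeatureControlFrame("position", 0.15)
print(fcf.effective_tolerance(actual_size=10.10, mmc_size=10.00))  # 0.25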
GD&T standards overcome the deficiencies of dimensional tolerancing as follows (Roy et al. 1991; Juster 1992; Yu et al. 1994; Hong et al. 2002; ASME 2009):
1. Tolerance zones are not restricted to rectangular regions; for example, a diametral (cylindrical) tolerance zone can be specified for the position of a hole axis, better reflecting the intended functionality.
2. Where an interaction between tolerance zones exists, it is explicitly defined. For example, Figure A.2 shows a plate with a semi‐circular hole feature with centre position
and diameter tolerances. As the hole diameter increases towards its USL (the LMC), the
tolerance associated with the centre position may be increased without compromising
function. Conversely, when the diameter is at the LSL (the MMC) the centre position
tolerance zone is smallest (at 0.1 mm). This relationship can be accommodated using the
GD&T material condition modifier (circle M).
3. Datum precedence is accommodated. In Figure A.2 the semi-circular hole feature is located relative to both the lower plate edge and the side plate edge. When dimensioned with traditional tolerancing methods (Figure A.2 (i)), the priority of dimension locations is ambiguous when a part is subject to manufacturing variation (Figure A.2 (ii)). GD&T principles allow the relative importance of datums to be specified to eliminate this ambiguity (Figure A.2 (iii)). The feature control frame associated with the hole centre position states that the order of datum precedence is alphabetical, i.e. datum A takes precedence over datum B.
Despite overcoming many of the deficiencies of dimensional tolerancing, geometric
tolerancing has its own shortcomings. As GD&T is a codification of accumulated empirical
knowledge and industrial engineering practice, it is not founded on explicitly defined
mathematical principles, but rather defined through graphics and example scenarios (Shah
et al. 2007). This lack of mathematical formality can lead to ambiguity in interpretation of
GD&T standards and difficulty in integration with CAD modelling tools (Voelcker et al. 1993;
Voelcker 1998).
Figure A.2 ‐ Traditional dimensional tolerancing (i); datum ambiguity in manufactured parts under traditional tolerancing (ii); GD&T tolerancing including alphabetical datum precedence specification (iii), which eliminates the ambiguity.
A.3 Vectorial tolerancing
An alternative which aims to address the deficiencies of both dimensional and GD&T based
tolerancing schemes is vectorial tolerancing (Wirtz 1993). The vectorial tolerancing scheme aims to define all variation in part geometry and assembly parameters in terms of vector models. The vector models are constructed in a manner which is intended to be applicable
not only to tolerancing schemes but also to other dimensional control activities such as CAD
modelling, CNC machining and metrological measurements with Coordinate Measurement
Machinery (CMM) tools (Wirtz et al. 1993). The objective is to avoid the ambiguity and integration problems encountered with GD&T standards by basing all dimensional control activities on a single comprehensive fundamental model.
Despite the potential advantages of the vectorial tolerancing approach, a significant
drawback is the lack of compatibility with the syntax and semantics of dimensional and
GD&T tolerancing schemes which are broadly established both institutionally and
industrially. The substitution of the currently accepted tolerancing schemes, with their long historical tradition, by an alternative model has proven difficult; this is the main reason for the limited adoption of methods such as vectorial tolerancing (Britten et al. 1999).
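As an illustration of the concept, the following Python sketch shows one hypothetical way a vectorial feature might be represented, with location and orientation as explicit vectors carrying their own tolerances; the class and its fields are illustrative assumptions, not the scheme of Wirtz (1993).

from dataclasses import dataclass
import numpy as np

@dataclass
class VectorialFeature:
    """Hypothetical vectorial representation of a planar face: location and
    orientation are explicit vectors, each carrying its own tolerance."""
    location: np.ndarray      # nominal position vector
    normal: np.ndarray        # nominal unit orientation vector
    t_location: float         # allowable deviation of the position vector
    t_orientation: float      # allowable angular deviation (rad)

    def conforms(self, actual_location, actual_normal):
        d_loc = np.linalg.norm(actual_location - self.location)
        d_ang = np.arccos(np.clip(actual_normal @ self.normal, -1.0, 1.0))
        return d_loc <= self.t_location and d_ang <= self.t_orientation

face = VectorialFeature(np.array([0.0, 0.0, 10.0]), np.array([0.0, 0.0, 1.0]), 0.1, 0.01)
measured_normal = np.array([0.0, 0.005, 1.0])
measured_normal /= np.linalg.norm(measured_normal)
print(face.conforms(np.array([0.02, 0.0, 10.05]), measured_normal))  # True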
B. PROCESS CAPABILITY
B.1 Process capability index – Cp
The Cp index measures the potential of a process to produce outputs within the specification limits (Equation (A.57)). Cp increases with a decreasing standard deviation of the process output. Although it is readily calculated, Cp does not measure whether the process is centred on the specified nominal output value. An output distribution that has a low standard deviation but is skewed towards a specification limit would result in the same Cp index as a distribution that is centred (Figure A.4). Therefore Cp overestimates process capability if the process mean is non-centred, i.e. the nominal and mean values differ (Figure A.3 and Figure A.4). Furthermore, the Cp index assumes that the nominal target value is at the midpoint of the specification limits; different target values are not accommodated.

$$C_p = \frac{USL - LSL}{6\sigma} \qquad (A.57)$$

Figure A.3 ‐ Process output distributions exhibiting low Cp and high Cp.
B.2 Process capability index – Cpk
The Cpk index measures both the ability of a process to produce an output that is centred and within the specification limits. Cpk reduces the index value relative to Cp if the process is not centred (Equation (A.58)). If the process output mean is skewed away from the nominal target value, Cpk becomes smaller than Cp. The Cpk index, however, does not accommodate a process in which the nominal is not at the mid-point of the specification limits, and in such a scenario underestimates process capability. For example, all the distributions in Figure A.4 have the same Cp index despite differing conformance to the specification limits; the Cpk index, however, accommodates the difference due to the mean shift.

$$C_{pk} = \min\left(\frac{USL - \mu}{3\sigma},\; \frac{\mu - LSL}{3\sigma}\right) \qquad (A.58)$$
Figure A.4 ‐ Process output distributions of equal standard deviation and increasing centring.
Cp is equal for all distributions. Cpk increases from left to right with increasing centring.
B.3 Process capability index – Cpm
Cpm is similar to Cpk yet includes the capability to accommodate an arbitrary nominal value, i.e. asymmetric specification limits. The Cpm index measures both the ability of a process to achieve an arbitrary target nominal value, T, and to remain within the specification limits (Chan et al. 1988) (Equation (A.59)). Cpm addresses the drawbacks associated with the Cp and Cpk indices, however it incurs a comparatively higher computational cost. Cpm increases as the process mean approaches the target nominal value (i.e. becomes more centred) and as the standard deviation decreases. Cpm is particularly useful when the target nominal value is not at the mid-point of the USL and LSL, i.e. non-symmetric specification limits (e.g. Figure A.5).

$$C_{pm} = \frac{USL - LSL}{6\sqrt{\sigma^2 + (\mu - T)^2}} \qquad (A.59)$$

Figure A.5 ‐ Process output distribution with the target nominal value offset from the mid-point of the specification limits.
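The three indices are straightforward to estimate from sample data. The following is a minimal Python sketch of Equations (A.57)-(A.59); the function name and the example values are illustrative.

import numpy as np

def capability_indices(data, lsl, usl, target=None):
    """Sample estimates of Cp, Cpk and Cpm per Equations (A.57)-(A.59)."""
    mu, sigma = np.mean(data), np.std(data, ddof=1)
    t = 0.5 * (lsl + usl) if target is None else target
    cp = (usl - lsl) / (6 * sigma)
    cpk = min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))
    cpm = (usl - lsl) / (6 * np.sqrt(sigma**2 + (mu - t) ** 2))
    return cp, cpk, cpm

# A slightly off-centre process: Cp overestimates capability, Cpk and Cpm do not.
rng = np.random.default_rng(0)
sample = rng.normal(loc=10.05, scale=0.02, size=500)
print(capability_indices(sample, lsl=9.94, usl=10.06, target=10.00))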
B.4 Process capability indices – Non‐normal distributions
The above process capability indices are based on the assumption of a normally distributed
manufacturing process. Consequently, non‐normal distributions can result in an incorrect
estimate of the process capability. Non‐normal distributions can be correctly
accommodated by using distribution transformations to transform the process data into a
normal distribution (Section 2.3.3) or applying another set of process indices applicable to
non‐normal distributions (Somerville et al. 1996).
The transformation approach needs to be carried out iteratively. Following a transformation, the distribution of the transformed data needs to be tested for normality. If the transformed data is not indicative of a normal distribution, the transformation parameters are modified and the process repeated. Once the data is sufficiently close to normal, the standard process indices applicable to normal distributions can be applied (English et al. 1993). A minimal sketch of this procedure follows.
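A minimal Python sketch of one pass of this procedure, assuming SciPy's Box-Cox transformation (which selects the transformation parameter by maximum likelihood) and the Shapiro-Wilk normality test; the example data are illustrative.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.lognormal(mean=0.0, sigma=0.4, size=400)  # a skewed process output

transformed, lam = stats.boxcox(data)  # transformation parameter by max. likelihood
stat, p = stats.shapiro(transformed)   # re-test the transformed data for normality
print(f"lambda = {lam:.3f}, Shapiro-Wilk p-value = {p:.3f}")
# If p were small, the transformation parameters would be revisited (the
# iterative step); otherwise the standard indices (A.57)-(A.59) apply to the
# transformed data.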
Process indices applicable to non‐normal distributions are typically based on finding the
equivalent normal distribution which would give the same yield as the non‐normal
distribution under analysis. An example of a process index applicable to non‐normal
distributions is the non-parametric capability index Cnpk, given by (McCormack Jr et al. 2000):

$$C_{npk} = \min\left(\frac{USL - M}{P_{0.995} - M},\; \frac{M - LSL}{M - P_{0.005}}\right) \qquad (A.60)$$

where:
$M$ is the median of the distribution
$P_{0.995}$ is the 99.5th percentile of the process data
$P_{0.005}$ is the 0.5th percentile of the process data
The non‐parametric capability index is based on analysis of empirical non‐normal
distributions which indicates that an interval of acceptance between the 99.5th and 0.5th
percentile of the non‐normal process data is often equivalent to a normal process with the
standard 3σ interval of acceptance (corresponding to 99.7% yield). The approach is
founded on certain assumptions which need to be considered before application
(McCormack Jr et al. 2000).
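A minimal Python sketch of Equation (A.60) as reconstructed above, estimating the percentiles directly from sample data; the function name is illustrative.

import numpy as np

def c_npk(data, lsl, usl):
    """Non-parametric capability index per Equation (A.60) as reconstructed above."""
    m = np.median(data)
    p_hi = np.percentile(data, 99.5)  # 99.5th percentile of the process data
    p_lo = np.percentile(data, 0.5)   # 0.5th percentile of the process data
    return min((usl - m) / (p_hi - m), (m - lsl) / (m - p_lo))

rng = np.random.default_rng(2)
print(c_npk(rng.lognormal(0.0, 0.3, 2000), lsl=0.3, usl=3.0))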
Application of transformation techniques is considered the preferred approach for
accommodating non‐normal distributions (Chandra 2001).
REFERENCES
ASTM (2007). ASTM A228 / A228M ‐ 07 Standard Specification for Steel Wire, Music Spring Quality. Pennsylvania, American Society for Testing and Materials (ASTM).
DIN (1982). DIN 16 901 ‐ Plastics Mouldings ‐ Tolerances and acceptance conditions for linear dimensions. Berlin, Deutsches Institut für Normung (DIN).
Adams, B. M., Bohnhoff, W.J., Dalbey, K.R., Eddy, J.P., Eldred, M.S., Gay, D.M., Haskell, K., Hough,
P.D., and Swiler, L.P., (2011). DAKOTA, A Multilevel Parallel Object‐Oriented Framework for
Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity
Analysis, Sandia Technical Report SAND2010‐2183, December 2009. Updated December
2010 (Version 5.1) Updated November 2011 (Version 5.2).
Alt, W. (1990). "The Lagrange‐Newton method for infinite‐dimensional optimization problems."
Numerical Functional Analysis and Optimization 11(3‐4): 201‐224.
Anwarul, M. and M. Liu (1995). Optimal manufacturing tolerance: The modified Taguchi approach.
Archibald, R. K., R. Deiterding, C. Hauck, J. Jakeman and D. Xiu (2012). Approximation and error estimation in high dimensional space for stochastic collocation methods on arbitrary sparse samples, Oak Ridge National Laboratory (ORNL); Center for Computational Sciences.
Der Kiureghian, A. and P. L. Liu (1986). "Structural reliability under incomplete probability information." Journal of Engineering Mechanics 112: 85‐104.
Artto, K. A., J. M. Lehtonen and J. Saranen (2001). "Managing projects front‐end: incorporating a
strategic early view to project management with simulation." International Journal of Project
Management 19(5): 255‐264.
Askin, R. G. and J. B. Goldberg (1988). "Economic optimization in product design." Engineering
Optimization 14(2): 139‐152.
ASME (2003). Y14.41‐2003 Digital Product Definition Data Practices
ASME (2009). Y14.5‐2009 ‐ Dimensioning and Tolerancing. New York, American Society of
Mechanical Engineers.
Atkinson, K. E. (2009). An Introduction to Numerical Analysis, 2nd Ed, Wiley India Pvt. Ltd.
Audet, C. and J. E. Dennis Jr (2006). Analysis of generalized pattern searches, DTIC Document.
Avriel, M. (2003). Nonlinear Programming: Analysis and Methods, Dover Publications.
Bäck, T. (1996). Evolutionary algorithms in theory and practice: evolution strategies, evolutionary
programming, genetic algorithms, Oxford University Press, USA.
Barker, C. R. (1985). "A complete classification of planar four‐bar linkages." Mechanism and Machine
Theory 20(6): 535‐554.
Bastow, D. (1976). Kinematic and Dynamic Data for Crank‐Rocker and Slider‐Crank Linkages. London, Engineering Sciences Data Unit.
Batavia, R. (2001). Front‐End Loading for Life Cycle Success.
Baumgartner, G. (1995). Potential Failure Modes and Effects Analysis (FMEA) Reference Manual.
Michigan, Automotive Industry Action Group.
Bedford, A. and W. Fowler (2008). Engineering Mechanics Statics & Dynamics, Prentice Hall.
Bergman, B., J. De Mare, T. Svensson and S. Loren (2009). Robust Design Methodology for Reliability:
Exploring the Effects of Variation and Uncertainty, John Wiley & Sons.
Berveiller, M., B. Sudret and M. Lemaire (2006). "Stochastic finite element: a non intrusive approach
by regression." Revue Européenne de Mécanique Numérique‐Volume 15(1‐2).
Beyer, H. G. and H. P. Schwefel (2002). "Evolution strategies–A comprehensive introduction."
Natural computing 1(1): 3‐52.
Bihlmaier, B. F. (1999). Tolerance analysis of flexible assemblies using finite element and spectral
analysis, Brigham Young University. Department of Mechanical Engineering.
Bijker, W. (1997). Of Bicycles, Bakelites, and Bulbs: Toward a Theory of Sociotechnical Change.
Cambridge, MIT.
Björck, Å. (1996). Numerical Methods for Least Squares Problems, Siam.
Box, G. E. P. and D. R. Cox (1964). "An analysis of transformations." Journal of the Royal Statistical
Society. Series B (Methodological): 211‐252.
Britten, W. and C. Weber (1999). Transforming ISO 1101 tolerances into vectorial tolerance representations ‐ a CAD‐based approach. Global Consistency of Tolerances. F. van Houten and H. Kals (eds), University of Twente, Enschede, The Netherlands: 93‐100.
Bungartz, H. J. and M. Griebel (2004). "Sparse grids." Acta Numerica 13(1): 147‐269.
Burkardt, J. (2010). 1D quadrature rules for sparse grids, Tech. rep., Interdisciplinary Center for
Applied Mathematics and Information Technology Department, Virginia Tech.
Burkardt, J. (2010). The Combining Coefficient for Anisotropic Sparse Grids. Virginia Tech,
Interdisciplinary Center for Applied Mathematics & Information Technology Department.
Burkardt, J. (2010). Counting Abscissas in Sparse Grids. Virginia Tech, Interdisciplinary Center for
Applied Mathematics and Information Technology Department.
Burton, M., A. Subic, M. Mazur and M. Leary (2010). Systematic design customization of sport
wheelchairs using the Taguchi method. 8th Conference of the International Sports
Engineering Association.
Camelio, J., S. J. Hu and D. Ceglarek (2003). "Modeling variation propagation of multi‐station
assembly systems with compliant parts." Journal of Mechanical Design 125: 673.
Camelio, J. A., S. J. Hu and S. P. Marin (2004). "Compliant assembly variation analysis using
component geometric covariance." Journal of Manufacturing Science and Engineering 126:
355.
Canick, L. (1959). Serrated Clutches and Detents. Product Engineering Design Manual,. D. C.
Greenwood. New York, McGraw‐Hill Book Company Inc.
Carpinetti, L. and D. Chetwynd (1995). "Genetic search methods for assessing geometric tolerances."
Computer Methods in Applied Mechanics and Engineering 122(1): 193‐204.
Chan, L., S. Cheng and F. Spiring (1988). " A New Measure of Process Capability: Cpm." Journal of
Quality Technology 20(3): 162‐175.
Chandra, J. (2001). Statistical Quality Control, CRC Press.
Chase, K. W., and Greenwood, W. H. (1988). "Design Issues in Mechanical Tolerance Analysis." ASME
Manufacturing Review 1(1): 50‐59.
Chase, K. W., W. H. Greenwood, B. G. Loosli and L. F. Hauglund (1990). "Least cost tolerance
allocation for mechanical assemblies with automated process selection." Manufacturing
Review 3: 49‐59.
Chase, K. W., G. Jinsong and S. P. Magleby (1995). "General 2‐D tolerance analysis of mechanical
assemblies with small kinematic adjustments." Journal of Design and Manufacturing
5: 263‐274.
Chase, K. W. and A. R. Parkinson (1991). "A survey of research in the application of tolerance analysis
to the design of mechanical assemblies." Research in Engineering Design 3(1): 23‐37.
Chiesi, F. and L. Governi (2003). "Tolerance analysis with eM‐TolMate." Transactions of the ASME.
Journal of Computing and Information Science in Engineering 3: 100‐105.
Cho, B. R., Y. J. Kim, D. L. Kimbler and M. D. Phillips (2000). "An integrated joint optimization
procedure for robust and tolerance design." International Journal of Production Research
38: 2309‐2325.
Cho, B. R. and M. S. Leonard (1997). "Identification and extensions of quasiconvex quality loss
functions." International Journal of Reliability, Quality and Safety Engineering. 4(2): 191‐204.
Choi, H.‐G. R., M.‐H. Park and E. Salisbury (2000). "Optimal Tolerance Allocation With Loss
Functions." Journal of Manufacturing Science and Engineering 122(3): 529‐535.
Choi, S. K., R. V. Grandhi and R. A. Canfield (2004). "Structural reliability under non‐Gaussian
stochastic behavior." Computers and Structures 82(13‐14): 1113‐1121.
Clément, A., A. Desrochers and A. Riviere (1991). Theory and practice of 3‐D tolerancing for
assembly, École de technologie supérieure.
Clement, A. and A. Riviere (1993). Tolerancing versus nominal modeling in next generation CAD/CAM
system.
Cogorno, G. R. (2006). Geometric dimensioning and tolerancing for mechanical design. New York,
McGraw‐Hill Professional.
Congedo, P. M., R. Abgrall and G. Geraci (2011). "On the use of the Sparse Grid techniques coupled
with Polynomial Chaos."
Connor, T., D. Macklin and J. Offutt (2003). "Nontraditional Phillips‐Fluor business relationship
delivers performance." Pipeline & gas journal 230(5): 16‐18.
Crestaux, T. (2009). "Polynomial chaos expansion for sensitivity analysis." Reliability Engineering &
System Safety 94(7): 1161‐1172.
Cvetko, R., K. Chase and S. Magleby (1998). New Metrics for Evaluating Monte Carlo Tolerance
Analysis of Assemblies. Proceedings of the ASME International Mechanical Engineering
Conference and Exposition. Anaheim, CA, ASME.
D'Errico, J. R. and N. A. Zaino Jr (1988). "Statistical tolerancing using a modification of Taguchi's method." Technometrics 30: 397‐405.
D., Z., G. Chen and Y. Gong (2009). "Research of multidisciplinary optimization based on iSIGHT."
Mechanical & Electrical Engineering Magazine 26(12): 78‐81.
Dahl, D. W., A. Chattopadhyay and G. J. Gorn (2001). "The importance of visualisation in concept
design." Design Studies 22(1): 5‐26.
Darlington, M. and S. Culley (2002). "Current research in the engineering design requirement."
Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering
Manufacture 216(3): 375.
Davidson, J., A. MUIEZINOVIC and J. Shah (2002). "A new mathematical model for geometric
tolerances as applied to round faces." Journal of Mechanical Design 124(4): 609‐622.
Deb, K. (2004). Optimization for engineering design: algorithms and examples, Prentice‐Hall of India.
Deb, K., A. Pratap, S. Agarwal and T. Meyarivan (2002). "A fast and elitist multiobjective genetic
algorithm: NSGA‐II." Evolutionary Computation, IEEE Transactions on 6(2): 182‐197.
Debusschere, B. J., H. N. Najm, P. P. Pébay, O. M. Knio, R. G. Ghanem and O. P. L. Maître (2005).
"Numerical challenges in the use of polynomial chaos representations for stochastic
processes." SIAM Journal on Scientific Computing 26(2): 698‐719.
DeFord, R. (2003). Tolerancing helical springs. Cleveland, OH, Penton Media.
Delves, L. M. and J. L. Mohamed (1988). Computational Methods for Integral Equations, Cambridge
University Press.
Desrochers, A. and A. Clément (1994). "A dimensioning and tolerancing assistance model for
CAD/CAM systems." The International Journal of Advanced Manufacturing Technology 9(6):
352‐361.
Dong, Z. and W. Hu (1991). "Optimal process sequence identification and optimal process tolerance
assignment in computer‐aided process planning." Computers in Industry 17(1): 19‐32.
Dong, Z., W. Hu and D. Xue (1994). "New production cost‐tolerance models for tolerance synthesis.
." Journal of Engineering for Industry (116): 199‐205.
Dong, Z. and A. Soom (1989). Optimal tolerance design with automatic incorporation of
manufacturing knowledge.
Dong, Z. and A. Soom (1990). "Automatic optimal tolerance design for related dimension chains."
Manufacturing Review 3: 262‐271.
Dorigo, M., M. Birattari and T. Stutzle (2006). "Ant colony optimization." Computational Intelligence
Magazine, IEEE 1(4): 28‐39.
Dupinet, E., M. Balazinski and E. Czogala (1996). "Tolerance allocation based on fuzzy logic and
simulated annealing." Journal of Intelligent Manufacturing 7(6): 487‐497.
Ealey, L. A. (1988). Quality by design: Taguchi Methods and U.S. industry, ASI Press.
Earl, C. F., C. M. Eckert and J. Johnson (2001). Complexity in planning design processes. Proceedings
of the 13th International Conference on Engineering Design: Design Research – Theories,
Methodologies and Product Modelling (ICED'01). Glasgow, UK: 149‐156.
Earl, D. J. and M. W. Deem (2005). "Parallel tempering: Theory, applications, and new perspectives."
Physical Chemistry Chemical Physics 7(23): 3910‐3916.
Ebro, M., T. J. Howard and J. J. Rasmussen (2012). The foundation for robust design: enabling
robustness through kinematic design and design clarity.
Edel, D. H., and T. B. Auer (1964). "Determine the Least Cost Combination for Tolerance
Accumulations in a Drive Shaft Seal Assembly." General Motors Engineering Journal 4: 37‐38.
Eiteljorg, H., K. Fernie and J. Huggett (2003). CAD: a guide to good practice, Oxbow.
Elders (2011). Elders Limited Annual Report 2011. Australia: p. 18.
Eldred, M. and J. Burkardt (2009). Comparison of non‐intrusive polynomial chaos and stochastic
collocation methods for uncertainty quantification, American Institute of Aeronautics and
Astronautics, 1801 Alexander Bell Dr., Suite 500 Reston VA 20191‐4344 USA.
Eldred, M., C. Webster and P. Constantine (2008). Evaluation of non‐intrusive approaches for
wiener‐askey generalized polynomial chaos.
Elishakoff, I. and Y. Ren (1999). "The bird's eye view on finite element method for structures with
large stochastic variations." Computer Methods in Applied Mechanics and Engineering
168(1): 51‐61.
English, J. and G. Taylor (1993). "Process capability analysis—a robustness study." The International Journal of Production Research 31(7): 1621‐1635.
Epperson, J. F. (2007). An introduction to numerical methods and analysis, Wiley‐Interscience.
Ertan, B. (1998). Analysis of key characteristic methods and enablers used in variation risk
management, Massachusetts Institute of Technology.
Evans, D. H. (1975). "Statistical Tolerancing: The State of the Art. Part II: Methods for estimating
moments." Journal of Quality Technology 7(1): 1‐12.
Feigenbaum, A. V. (2012). Total Quality Control, 4th Ed.: Achieving Productivity, Market Penetration,
and Advantage in the Global Economy, McGraw‐Hill.
Felgen, L., F. Deubzer and U. Lindemann (2005). Complexity management during the analysis of
mechatronic systems. Proceedings of the 15th International Conference on Engineering
Design (ICED05). W. L. A. Samuel: 409‐410.
Feng, C.‐X. and R. Balusu (1999). "Robust tolerance design considering process capability and quality
loss." American Society of Mechanical Engineers, Design Engineering Division (Publication)
DE 103: 1‐13.
Feng, C.‐X., J. Wang and J.‐S. Wang (2001). "An optimization model for concurrent selection of
tolerances and suppliers." Computers and Industrial Engineering 40: 15‐33.
Feng, C. X. and A. Kusiak (1997). "Robust tolerance design with the integer programming approach."
Transactions of the ASME. Journal of Manufacturing Science and Engineering 119: 603‐610.
Fiessler, B., H.‐J. Neumann and R. Rackwitz (1979). "Quadratic limit states in structural reliability."
105: 661‐676.
Fischer, A. (1992). "A special Newton‐type optimization method." Optimization 24(3‐4): 269‐284.
Flager, F., B. Welle, P. Bansal, G. Soremekun and J. Haymaker (2009). "Multidisciplinary process
integration and design optimization of a classroom building." Electronic Journal of
Information Technology in Construction 14: 595‐612.
Fonseca, C. M. and P. J. Fleming (1995). "An overview of evolutionary algorithms in multiobjective
optimization." Evolutionary computation 3(1): 1‐16.
Foo, J., X. Wan and G. E. Karniadakis (2008). "The multi‐element probabilistic collocation method
(ME‐PCM): Error analysis and applications." Journal of Computational Physics 227(22): 9572‐
9595.
Forouraghi, B. (2002). "Worst‐case tolerance design and quality assurance via genetic algorithms."
Journal of Optimization Theory and Applications 113(2): 251‐268.
Fortini, E. T. (1967). Dimensioning for Interchangeable Manufacture. New York, Industrial Press.
Franciosa, P., S. Gerbino and S. Patalano (2011). "Simulation of variational compliant assemblies with
shape errors based on morphing mesh approach." The International Journal of Advanced
Manufacturing Technology 53(1): 47‐61.
Gao, J., K. W. Chase and S. P. Magleby (1995). Comparison of assembly tolerance analysis by Direct
Linearization and modified Monte Carlo simulation methods. Proceedings of the 1995 ASME
Design Engineering Technical Conference, September 17, 1995 ‐ September 20, 1995,
Boston, MA, USA.
Gao, J., K. W. Chase and S. P. Magleby (1998). "Generalized 3‐D tolerance analysis of mechanical
assemblies with small kinematic adjustments." IIE Transactions (Institute of Industrial
Engineers) 30: 367‐377.
Gen, M. and R. Cheng (2000). Genetic algorithms and engineering optimization, Wiley‐interscience.
Gerstner, T. and M. Griebel (1998). "Numerical integration using sparse grids." Numerical algorithms
18(3): 209‐232.
Gerstner, T. and M. Griebel (2003). "Dimension–adaptive tensor–product quadrature." Computing
71(1): 65‐87.
Ghanem, R. G. and P. D. Spanos (2003). Stochastic finite elements: a spectral approach, Dover
Publications.
Gordis, J. H. and W. G. Flannelly (1994). "Analysis of stress due to fastener tolerance in assembled
components." AIAA journal 32: 2440‐2446.
Haldar, A. and S. Mahadevan (2000). Probability, reliability, and statistical methods in engineering
design, John Wiley.
Hammersley, J. (1975). Monte Carlo Methods. Norwich, England, Fletcher.
Haupt, R. L., S. E. Haupt and J. Wiley (2004). Practical genetic algorithms, Wiley Online Library.
Hindhede, U. (1983). Machine Design Fundamentals: A practical Approach. New York, John Wiley &
Sons.
Hiriyannaiah, S. and G. M. Mocko (2008). "Information Management Capabilities of MDO
Frameworks." ASME Conference Proceedings 2008(43277): 635‐645.
Hohenbichler, M., S. Gollwitzer, W. Kruse and R. Rackwitz (1987). "New light on first‐ and second‐
order reliability methods." Structural Safety 4(4): 267‐284.
Holland, J. H. (1992). "Genetic algorithms." Scientific american 267(1): 66‐72.
Hong, Y. S. and T. C. Chang (2002). "A comprehensive review of tolerancing research." International
Journal of Production Research 40: 2425‐2459.
Hosder, S., R. W. Walters and M. Balch (2007). Efficient sampling for non‐intrusive polynomial chaos
applications with multiple uncertain input variables.
Houten, F., H. Kals and M. Giordano (1999). Mathematical Representation of Tolerance Zones.
Global consistency of tolerances: proceedings of the 6th CIRP International Seminar on
Computer‐Aided Tolerancing, University of Twente, Enschede, The Netherlands, 22‐24
March, 1999, Kluwer Academic.
Hu, M., Z. Lin, X. Lai and J. Ni (2001). "Simulation and analysis of assembly processes considering
compliant, non‐ideal parts and tooling variations." International Journal of Machine Tools
and Manufacture 41(15): 2233‐2243.
Huntington, D. E. and C. S. Lyrintzis (1998). "Improvements to and limitations of Latin hypercube
sampling." Probabilistic Engineering Mechanics 13: 245‐253.
Hyun Seok, S. and K. Byung Man (2002). "Efficient statistical tolerance analysis for general
distributions using three‐point information." International Journal of Production Research 40: 931‐944.
Iannuzzi, M. and E. Sandgren (1995). Tolerance optimization using genetic algorithms: Benchmarking
with manual analysis.
Imani, B. M. and M. Pour (2009). "Tolerance analysis of flexible kinematic mechanism using DLM
method." Mechanism and Machine Theory 44(2): 445‐456.
ISO (1988). 286‐1:1988 ISO system of limits and fits
ISO (1998). 5458:1998 Geometric Product Specifications (GPS). Positional tolerancing
ISO (2002). ISO 10303: Automation systems and integration ‐ Product data representation and
exchange (STEP), ISO.
ISO (2005). 1101:2005 Geometrical Product Specifications (GPS). Tolerancing of form, orientation,
location and run‐out
ISO (2005). ISO 9000:2005, TC 176/SC Quality management systems ‐‐ Fundamentals and
vocabulary., International Organization for Standardization.
ISO (2006). 16792:2006 Technical product documentation ‐‐ Digital product definition data practices
Jackson, S. L. (2011). Research Methods and Statistics: A Critical Thinking Approach, Cengage
Learning.
Jakeman, J., M. Eldred and D. Xiu (2010). "Numerical approach for quantification of epistemic
uncertainty." Journal of Computational Physics 229(12): 4648‐4663.
Jakeman, J. D., R. Archibald and D. Xiu (2011). "Characterization of discontinuities in high‐
dimensional stochastic problems on adaptive sparse grids." Journal of Computational
Physics.
Jakeman, J. D. and S. G. Roberts (2011). "Local and Dimension Adaptive Sparse Grid Interpolation
and Quadrature." Arxiv preprint arXiv:1110.0010.
Jeang, A. (1994). "Tolerance design: choosing optimal tolerance specifications in the design of
machined parts." Quality and reliability engineering international 10(1): 27‐35.
Jeang, A. (1999). "Optimal tolerance design by response surface methodology." International Journal
of Production Research 37: 3275‐3288.
Ji, S., X. Li, Y. Ma and H. Cai (2000). "Optimal tolerance allocation based on fuzzy comprehensive
evaluation and genetic algorithm." The International Journal of Advanced Manufacturing
Technology 16(7): 461‐468.
Johnson, O. and O. T. Johnson (2004). Information Theory And The Central Limit Theorem, Imperial
College Press.
Juran, J. M. (1992). Juran on Quality by Design. New York, The free Press.
Juster, N. P. (1992). "Modelling and representation of dimensions and tolerances: a survey."
Computer‐Aided Design 24(1): 3‐17.
Kalos, M. H. and P. A. Whitlock (2009). Monte Carlo Methods, John Wiley & Sons.
Kanai, S., M. Onozuka and H. Takahashi (1995). Optimal Tolerance Synthesis by Genetic Algorithm
under the Machining and Assembling Constraint.
Kennedy, J. and R. Eberhart (1995). Particle swarm optimization, IEEE.
Keramat, M. and R. Kielbasa (1997). "Latin Hypercube Sampling Monte Carlo estimation of average
quality index for integrated circuits." Analog Integrated Circuits and Signal Processing 14: 131‐142.
Kharoufeh, J. and M. Chandra (2002). "Statistical tolerance analysis for non‐normal or correlated
normal component characteristics." International Journal of Production Research 40(2): 337‐
352.
Kirkpatrick, S., C. D. Gelatt Jr and M. P. Vecchi (1983). "Optimization by simulated annealing." science
220(4598): 671‐680.
Kiureghian, A. D. and O. Ditlevsen (2009). "Aleatory or epistemic? Does it matter?" Structural Safety
31(2): 105‐112.
Kodiyalam, S. (1998). "Evaluation of methods for multidisciplinary design optimization (MDO), Phase
I." NASA Contractor Report, NASA/CR‐1998‐208716.
Kodiyalam, S. and C. Yuan (2000). "Evaluation of Methods for Multidisciplinary Design Optimization
(MDO), Part II." NASA Contractor Report.
Kovvali, N. (2011). Theory and Applications of Gaussian Quadrature Methods, Morgan & Claypool.
Krishnan, V. and K. T. Ulrich (2001). "Product development decisions: A review of the literature."
Management Science 47(1): 1‐21.
Kumar, M. S. and S. Kannan (2007). "Optimum manufacturing tolerance to selective assembly
technique for different assembly specifications by using genetic algorithm." The
International Journal of Advanced Manufacturing Technology 32(5): 591‐598.
Kuo, F. Y. and I. H. Sloan (2005). "Lifting the curse of dimensionality." Notices of the AMS 52: 1320–1328.
Lawson, C. L. and R. J. Hanson (1974). Solving Least Squares Problems, SIAM.
Leary, M., J. Gruijters, M. Mazur, A. Subic, M. Burton and F. Fuss (2012). "A fundamental model of
quasi‐static wheelchair biomechanics." Journal of Medical Engineering & Physics 34(9):
1278‐1286.
Leary, M., M. Mazur, J. Gruijters and A. Subic (2010). Benchmarking and optimisation of automotive
seat structures. Sustainable Automotive Technologies 2010: Proceedings of the 2nd
International Conference. J. Wellnitz, Springer: 63‐70.
Leary, M., M. Mazur, J. Gruijters and A. Subic (2011). "Benchmarking and optimisation of automotive
seat structures."
Leary, M., M. Mazur, T. Mild and A. Subic (2011). Optimisation of automotive seat kinematics.
Sustainable Automotive Technologies 2010: Proceedings of the 2nd International
Conference. S. Hung. Greenville, South Carolina, USA, Springer: 139‐144.
Lee, D., K. E. Kwon, J. Lee, H. Jee, H. Yim, S. W. Cho, J. G. Shin and G. Lee (2009). "Tolerance Analysis
Considering Weld Distortion by Use of Pregenerated Database." Journal of Manufacturing
Science and Engineering 131: 041012.
Lee, D. J. and A. C. Thornton (1996). The identification and use of key characteristics in the product
development process. The 1996 ASME Design Engineering Technical Conferences and
Computers in Engineering Conference August 18‐22, 1996,. Irvine, California, ASME.
Lee, S. and W. Chen (2009). "A comparative study of uncertainty propagation methods for black‐box‐
type problems." Structural and Multidisciplinary Optimization 37: 239‐253.
Lehtihet, E., E. Gunasena and I. Ham (1991). An update on statistical tolerance control methods and
computations.
Lesser, M. (2000). Analysis of Complex Nonlinear Mechanical Systems: A Computer Algebra Assisted
Approach, World Scientific Pub Co Inc.
Levy, S. (1953). "Structural analysis and influence coefficients for delta wings." J. Aero. Sci 20(7): 449‐
454.
Li, M. and P. M. B. Vitányi (2008). An introduction to Kolmogorov complexity and its applications,
Springer.
Lin, B.‐W. and J.‐S. Chen (2005). "Corporate technology portfolios and R&D performance measures: a
study of technology intensive firms." R&D Management 35(2): 157‐170.
Liu, M., Z. Gao and J. S. Hesthaven (2011). "Adaptive sparse grid algorithms with applications to
electromagnetic scattering under uncertainty." Applied numerical mathematics 61(1): 24‐37.
Liu, S., S. Hu and T. Woo (1996). "Tolerance analysis for sheet metal assemblies." Journal of
Mechanical Design 118: 62.
Liu, S. C. and S. J. Hu (1997). "Variation simulation for deformable sheet metal assemblies using finite
element methods." Journal of Manufacturing Science and Engineering 119: 368.
Liu, S. C., H. W. Lee and S. J. Hu (1995). "Variation simulation for deformable sheet metal assemblies
using mechanistic models." TRANSACTIONS‐NORTH AMERICAN MANUFACTURING
RESEARCH INSTITUTION OF SME: 235‐240.
Lovasz, E. C. (2012). Mechanisms, Transmissions and Applications, Springer London, Limited.
Lovett, T. E., F. Ponci and A. Monti (2006). "A polynomial chaos approach to measurement
uncertainty." IEEE Transactions on Instrumentation and Measurement 55: 729‐736.
Makelainen, E., Y. Ramseier, S. Salmensuu, J. Heilala, P. Voho and O. Vaatainen (2001). Assembly
process level tolerance analysis for electromechanical products. 2001 IEEE International
Symposium on Assembly and Task Planning (ISATP2001), May 28, 2001 ‐ May 29, 2001,
Fukuoka, Japan, Institute of Electrical and Electronics Engineers Computer Society.
Malone, B. (2001). "Building Automated Processes Using ModelCenter." Integrated Enterprise 2(1):
25‐27.
Malone, B. and M. Papay (1999). ModelCenter: an integration environment for simulation based
design. Simulation Interoperability Workshop.
Mansoor, E. M. (1963). "The application of probability to tolerances used in engineering design."
Proceedings of the Institution of Mechanical Engineers 178: 29‐44.
Mazur, M., M. Leary, S. Huang, T. Baxter and A. Subic (2011). Benchmarking study of automotive
seat track sensitivity to manufacturing variation. Proceedings of the 18th International
Conference on Engineering Design (ICED11). S. J. H. Culley, B.J.; McAloone, T.C.; Howard, T.J.
& Dong, A. Copenhagen, Denmark, The Design Society. Vol. 10: 456‐465.
Mazur, M., M. Leary and A. Subic (2010). Automated simulation of stochastic part variation to
identify key performance characteristics of assemblies. 6th Innovative Production Machines
and Systems 2010 (IPROMS 2010) Conference.
Mazur, M., M. Leary and A. Subic (2011). "Computer Aided Tolerancing (CAT) platform for the design
of assemblies under external and internal forces." Computer‐Aided Design 43(6): 707‐719.
McAfee, R. P. and J. McMillan (1995). "Organizational Diseconomies of scale." Journal of Economics
& Management Strategy 4(3): 399‐426.
McCormack Jr, D., I. R. Harris, A. M. Hurwitz and P. D. Spagon (2000). "Capability indices for non‐
normal data." Quality Engineering 12(4): 489‐495.
McKay, M., R. J. Beckman and W. J. Conover (1979). "Comparison of three methods for selecting
values of input variables in the analysis of output from a computer code." Technometrics
21(2): 239‐245.
McRae, G. J. and M. A. Tatang (1995). Direct incorporation of uncertainty in chemical and
environmental engineering systems, Massachusetts Institute of Technology.
Merkley, K., K. Chase and E. Perry (1996). An introduction to tolerance analysis of flexible
assemblies.
Merkley, K. G. (1998). Tolerance analysis of compliant assemblies, Brigham Young University.
Michael, W., and Siddall, J. N. (1981). "The optimization problem with optimal tolerance
assignment." J. of Mechanical Design, ASME 103(Oct. 1981): 842‐848.
Miller, F. P., A. F. Vandome and J. McBrewster (2010). Central Limit Theorem, Alphascript Publishing.
Mitra, A. (1998). Fundamentals of quality control and improvement, 2nd Edition Prentice‐Hall.
Mo, Z., Y. He, G. Wu and J. Wu (2010). The Application of BP Neural Network in GridGain Grid
Computing Environment. Measuring Technology and Mechatronics Automation (ICMTMA),
2010 International Conference on, IEEE.
Montgomery, D. C. (2001). Introduction to Statistical Quality Control 4th edition. New York, John
Wiley & Sons.
Morokoff, W. J. and R. E. Caflisch (1994). "Quasi‐random sequences and their discrepancies." SIAM
Journal on Scientific Computing 15(6): 1251‐1279.
Morse, E. P. and X. You (2005). "Implementation of GapSpace Analysis." ASME Conference
Proceedings 2005(42150): 329‐333.
Mujezinovic, A., J. K. Davidson and J. J. Shah (2004). "A New Mathematical Model for Geometric
Tolerances as Applied to Polygonal Faces." Journal of Mechanical Design 126(3): 504‐518.
Niederreiter, H. (1992). Quasi‐Monte Carlo Methods, Wiley Online Library.
Nigam, S. D. and J. U. Turner (1995). "Review of statistical approaches to tolerance analysis." Computer‐Aided Design 27: 6‐15.
Nocedal, J. and S. J. Wright (1999). Numerical optimization, Springer verlag.
Norton, R. L. (2003). Design of Machinery: An Introduction to the Synthesis and Analysis of
Mechanisms and Machines, McGraw‐Hill Higher Education.
Oakland, J. S. (2007). Statistical process control, Elsevier.
Ostwald, P. and J. Huang (1977). "A method for optimal tolerance selection." Journal of Engineering
for Industry 99(3): 558‐565.
Owen, S. J. and M. L. Staten (2010). A Comparison of Mesh Morphing Methods for Shape
Optimization. Albuquerque, NM, U.S.A, Sandia National Laboratories.
Padula, S., J. Korte, H. Dunn and A. Salas (1999). Multidisciplinary optimization branch experience
using iSIGHT software, Citeseer.
Pahl, G., K. Wallace and L. Blessing (2007). Engineering Design: A Systematic Approach, Springer.
Pan, J., Y. Le Biannic and F. Magoules (2010). Parallelizing multiple group‐by query in share‐nothing
environment: a mapreduce study case. Proceedings of the 19th ACM International
Symposium on High Performance Distributed Computing, ACM.
Parashar, S. S. and N. Fateh (2007). Multi‐objective MDO solution strategy for multidisciplinary
design using modefrontier. Inverse Problems, Design and Optimization Symposium Miami,
Florida, U.S.A.
Park, S. (1996). Robust Design and Analysis for Quality Engineering, Springer.
Parkinson, D. (1982). "The application of reliability methods to tolerancing." Journal of Mechanical
Design: Transactions of the ASME 104: 612‐618.
Parkinson, D. (1985). "Assessment and optimization of dimensional tolerances." Computer‐Aided
Design 17(4): 191‐199.
Pascoe, N. (2011). Reliability Technology: Principles and Practice of Failure Prevention in Electronic
Systems, John Wiley & Sons.
Patel, A. M. (1980). Computer‐Aided Assignment of Manufacturing Tolerances. Proc. of the 17th
Design Automation Conf, Minneapolis, Minn.
Pearn, W. and S. Kotz (2006). Encyclopedia and handbook of process capability indices: a comprehensive exposition of quality control measures. New Jersey, World Scientific.
Phadke, M. S. (1989). Quality engineering using robust design, Prentice Hall.
Pierre, L., D. Teissandier and J. P. Nadeau (2009). "Integration of thermomechanical strains into
tolerancing analysis." International Journal on Interactive Design and Manufacturing 3(4):
247‐263.
Piperni, P., M. Abdo and F. Kafyeke (2004). The application of multi‐disciplinary optimization
technologies to the design of a business jet. Proceeding of the 10th AIAA/ISSMO
Multidisciplinary Analysis and Optimization Conference.
Polini, W. (2011). To Carry Out Tolerance Analysis of an Aeronautic Assembly Involving Free Form
Surfaces in Composite Material. Advances in Composite Materials ‐ Ecodesign and Analysis.
Prisco, U. and G. Giorleo (2002). "Overview of current CAT systems." Integrated Computer‐Aided
Engineering 9(Copyright 2003, IEE): 373‐387.
Pugh, S. (1991). Total design: integrated methods for successful product engineering, Addison‐
Wesley Pub. Co.
Rahman, S. and D. Wei (2006). "A univariate approximation at most probable point for higher‐order
reliability analysis." International Journal of Solids and Structures 43: 2820‐2839.
Rahman, S. and H. Xu (2004). "A univariate dimension‐reduction method for multi‐dimensional
integration in stochastic mechanics." Probabilistic Engineering Mechanics 19: 393‐408.
Rao, P. N. (2004). CAD/CAM: Principles and Applications, McGraw‐Hill Education (India) Pvt Limited.
Rao, Y., C. Rao, G. R. Janardhana and P. R. Vundavilli (2011). "Simultaneous Tolerance Synthesis for
Manufacturing and Quality using Evolutionary Algorithms." International Journal of Applied
Evolutionary Computation (IJAEC) 2(2): 1‐20.
Rawat, S. and L. Rajamani (2009). Experiments with CPU Scheduling Algorithm on a Computational
Grid. Advance Computing Conference, 2009. IACC 2009. IEEE International, IEEE.
Rosato, D. V. and M. G. Rosato (2000). Injection molding handbook, Kluwer Academic.
Roweis, S. (1996). "Levenberg‐marquardt optimization." Notes, University Of Toronto.
Roy, U. and Y. C. Fang (1997). "Optimal tolerance re‐allocation for the generative process sequence."
IIE transactions 29(1): 37‐44.
Roy, U. and B. Li (1998). "Representation and interpretation of geometric tolerances for polyhedral
objects‐‐I. Form tolerances." Computer‐Aided Design 30(2): 151‐161.
Roy, U., C. R. Liu and T. C. Woo (1991). "Review of dimensioning and tolerancing: representation and
processing." Computer‐Aided Design 23(7): 466‐483.
Rubinstein, R. (1981). Simulation and the Monte Carlo Method, John Wiley & Sons, Inc.
SAE (1997). SAE HS‐795 ‐ manual on design and application of helical and spiral springs, Society of
Automotive Engineers.
Salas Gonzalez, D., J. Górriz, J. Ramírez, A. Lassl and C. Puntonet (2008). "Improved Gauss‐Newton
optimisation methods in affine registration of SPECT brain images." Electronics Letters
44(22): 1291‐1292.
Salomons, O., H. Jonge Poerink, F. Slooten, F. Houten and H. Kals (1995). A computer aided
tolerancing tool based on kinematic analogies, Citeseer.
Salomons, O. W., F. van Houten and H. Kals (1998). "Current status of CAT systems." Geometric
design tolerancing: theories, standards and applications: 438‐452.
Schittkowski, K. (1983). "On the convergence of a sequential quadratic programming method with an
augmented lagrangian line search function 2." Optimization 14(2): 197‐216.
Schoutens, W. (2000). Stochastic processes and orthogonal polynomials, Springer.
Shah, J. J., G. Ameta, Z. Shen and J. Davidson (2007). "Navigating the tolerance analysis maze."
Computer‐Aided Design and Applications 4: 705‐718.
Shah, J. J. and M. Mäntylä (1995). Parametric and Feature‐Based Cad/Cam: Concepts, Techniques,
and Applications, Wiley.
Shah, J. J. and B. C. Zhang (1992). Attributed graph model for geometric tolerancing.
Shen, Z. (2005). Development of a framework for a set of computer‐aided tools for tolerance
analysis. PhD Thesis, Arizona State University.
Shen, Z., G. Ameta, J. J. Shah and J. K. Davidson (2005). "A comparative study of tolerance analysis
methods." Journal of Computing and Information Science in Engineering 5: 247.
Shigley, J. E., C. R. Mischke and R. G. Budynas (2004). Mechanical Engineering Design, McGraw‐Hill.
Shiu, B., D. Apley, D. Ceglarek and J. Shi (2003). "Tolerance allocation for compliant beam structure
assemblies." IIE transactions 35(4): 329‐342.
Sigmetrix (2012). CETOL. Cybernet Systems Co.: CAT software tool.
Simpson, T. W., V. Toropov, V. Balabanov and F. A. C. Viana (2008). Design and analysis of computer
experiments in multidisciplinary design optimization: a review of how far we have come or
not.
Singh, J. (2003). Key characteristic coupling and resolving key characteristic conflict, Massachusetts
Institute of Technology.
Singh, P., P. Jain and S. Jain (2004). "A genetic algorithm‐based solution to optimal tolerance
synthesis of mechanical assemblies with alternative manufacturing processes: focus on
complex tolerancing problems." International Journal of Production Research 42(24): 5185‐
5215.
Singh, P. K., P. K. Jain and S. C. Jain (2009). "Important issues in tolerance design of mechanical
assemblies. Part 2: Tolerance synthesis." Proceedings of the Institution of Mechanical
Engineers, Part B: Journal of Engineering Manufacture 223(10): 1249‐1287.
Skowronski, V. J. and J. U. Turner (1997). "Using Monte‐Carlo variance reduction in statistical
tolerance synthesis." Computer‐Aided Design 29: 63‐69.
Smith, R. P. and S. D. Eppinger (1997). "Identifying controlling features of engineering design
iteration." Management Science: 276‐293.
Smolyak, S. A. (1963). "Quadrature and interpolation formulas for tensor products of certain classes of functions." Soviet Mathematics, Doklady 4: 240‐243.
Sobieszczanski‐Sobieski, J. and R. T. Haftka (1997). "Multidisciplinary aerospace design optimization:
survey of recent developments." Structural and Multidisciplinary Optimization 14(1): 1‐23.
Soderberg, R. (1993). Tolerance allocation considering customer and manufacturer objectives. 14th
Biennial Conference on Mechanical Vibration and Noise, September 19, 1993 ‐ September
22, 1993, Albuquerque, NM, USA, Publ by ASME.
Soderberg, R. (1994). "Robust design by tolerance allocation considering quality and manufacturing
cost." Advances in Design Automation 69: 219‐226.
Soderberg, R. and L. Lindkvist (1999). "Computer aided assembly robustness evaluation." Journal of
Engineering Design 10(2): 165‐181.
Soderberg, R. and L. Lindkvist (1999). Two‐step procedure for robust design using CAT technology,
Kluwer Academic Publishers.
Somerville, S. E. and D. C. Montgomery (1996). "Process capability indices and non‐normal
distributions." Quality Engineering 9(2): 305‐316.
Speckhart, F. H. (1972). "Calculation of Tolerance Based on a Minimum Cost Approach." Journal of
Engineering for Industry, ASME 94(May 1972): 447‐453.
Spotts, M. F. (1973). "Allocation of Tolerances to Minimize Cost of Assembly." Journal of Engineering
for Industry, ASME 95(Aug. 1973): 762‐764.
Stapelberg, R. F. (2009). Handbook of Reliability, Availability, Maintainability and Safety in
Engineering Design, Springer.
Steiner, G. and D. Watzenig (2003). Particle swarm optimization for worst case tolerance design,
IEEE.
Stonier, R. J. and X. H. Yu (1994). Complex systems: mechanism of adaptation, IOS Press.
Stroud, I. and H. Nagy (2011). Solid Modelling and CAD Systems: How to Survive a CAD System,
Springer.
Suh, N. P. (1990). The Principles of Design. New York, Oxford University Press.
Summers, J. D. and J. J. Shah (2010). "Mechanical Engineering Design Complexity Metrics: Size,
Coupling, and Solvability." Journal of Mechanical Design 132(2): 021004.
Sutherland, G. H. and B. Roth (1975). "Mechanism Design: Accounting for Manufacturing
Tolerances and Costs in Function Generating Problems." Journal of Engineering for Industry,
ASME 97(Feb. 1975): 283‐286.
Taguchi, G. (1978). "Performance Analysis Design." International Journal of Production Research 16:
512‐530.
Taguchi, G. (1989). Introduction to Quality Engineering. New York, Asian Productivity Organization,
Unipub.
Taguchi, G. (1993). Taguchi on robust technology development: bringing quality engineering
upstream, ASME Press.
Taguchi, G. and S. Chowdhury (1999). Robust Engineering: Learn How to Boost Quality While
Reducing Costs & Time to Market, McGraw‐Hill.
Takezawa, N. (1980). "An improved method for establishing the process‐wise quality standard." Rep.
Stat. Appl. Res., JUSE 27(3): 63‐75.
Terejanu, G., P. Singla, T. Singh and P. D. Scott (2010). Approximate propagation of both epistemic
and aleatory uncertainty through dynamic systems. 13th Conference on Information Fusion
(FUSION), 2010.
Thompson, A., P. Layzell and R. S. Zebulum (1999). "Explorations in design space: Unconventional
electronics design through artificial evolution." IEEE Transactions on Evolutionary
Computation 3(3): 167‐196.
Thornton, A. C. (1999). "A Mathematical Framework for the Key Characteristic Process." Research in
Engineering Design 11(3): 145‐157.
Tomiyama, T., P. Gu, Y. Jin, D. Lutters, C. Kind and F. Kimura (2009). "Design methodologies:
Industrial and educational applications." CIRP Annals ‐ Manufacturing Technology 58(2): 543‐
565.
Turner, J. and A. Gangoiti (1991). "Tolerance analysis approaches in commercial software."
Concurrent Engineering 1(2): 11‐23.
Twigge‐Molecey, C. (2003). Knowledge, technology and profit.
Ullman, D. G. (2003). The mechanical design process, McGraw‐Hill.
Van Vinckenroy, G. and W. P. De Wilde (1995). "The use of Monte Carlo techniques in statistical finite
element methods for the determination of the structural behaviour of composite materials
structural components." Composite Structures 32(1): 247‐254.
Voelcker, H. B. (1998). "The current state of affairs in dimensional tolerancing: 1997." Integrated
Manufacturing Systems 9(4): 205‐217.
Voelcker, H. B. (1993). A Current Perspective on Tolerancing and Metrology. Sibley School of
Mechanical and Aerospace Engineering, Cornell University.
Wade, O. R. (1967). Tolerance Control in Design and Manufacturing. New York, Industrial Press.
Wahl, A. (1963). Mechanical Springs. New York, McGraw‐Hill.
Wan, X. and G. E. Karniadakis (2007). "Multi‐element generalized polynomial chaos for arbitrary
probability measures." SIAM Journal on Scientific Computing 28(3): 901‐928.
Wang, H., G. He, M. Xia, F. Ke and Y. Bai (2004). "Multiscale coupling in complex mechanical
systems." Chemical Engineering Science 59(8): 1677‐1686.
Weirs, V. G., J. R. Kamm, L. P. Swiler, S. Tarantola, M. Ratto, B. M. Adams, W. J. Rider and M. S.
Eldred (2012). "Sensitivity analysis techniques applied to a system of hyperbolic conservation
laws." Reliability Engineering & System Safety 107: 157‐170.
Wiener, N. (1938). "The homogeneous chaos." American Journal of Mathematics 60(4): 897‐936.
Huang, W., T. Phoomboplab and D. Ceglarek (2009). "Process capability surrogate model‐based
tolerance synthesis for multi‐station manufacturing systems." IIE Transactions 41(4): 309‐
322.
Wilde, D. and E. Prentice (1975). "Minimum exponential cost allocation of sure‐fit tolerances."
Journal of Engineering for Industry 97(4): 1395‐1398.
Williams, J. A. (1994). Engineering Tribology, Cambridge University Press.
Wirtz, A. (1993). Vectorial tolerancing: a basic element for quality control.
Wirtz, A., C. Gächter and D. Wipf (1993). "From unambiguously defined geometry to the perfect
quality control loop." CIRP Annals‐Manufacturing Technology 42(1): 615‐618.
Wojtkiewicz, S., M. Eldred, R. Field, A. Urbina and J. Red‐Horse (2001). "Uncertainty quantification in
large computational engineering models." American Institute of Aeronautics and
Astronautics.
Wood, D. A. (2006). "Making better springs using aspects of chaos theory." Proceedings of the
Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science
220(3): 253‐259.
Wright, P. A. (1995). "A process capability index sensitive to skewness." Journal of Statistical
Computation and Simulation 52(3): 195‐203.
Wu, F., J.‐Y. Dantan, A. Etienne, A. Siadat and P. Martin (2009). "Improved algorithm for tolerance
allocation based on Monte Carlo simulation and discrete optimization." Computers &
Industrial Engineering 56(4): 1402‐1413.
Wu, Y. and A. Wu (2000). Taguchi methods for robust design, ASME Press.
Wu, Z., W. H. El Maraghy and H. A. El Maraghy (1988). "Evaluation of Cost‐Tolerance Algorithms for
Design Tolerance Analysis and Synthesis." Manufacturing Review, ASME 1(3): 168‐179.
Xiu, D. (2007). "Efficient collocational approach for parametric uncertainty analysis."
Communications in Computational Physics 2(2): 293‐309.
Xiu, D. (2010). Numerical Methods for Stochastic Computations: A Spectral Method Approach,
Princeton University Press.
Xiu, D. and G. Karniadakis (2003). "The Wiener‐Askey polynomial chaos for stochastic differential
equations." SIAM Journal on Scientific Computing 24(2): 619‐644.
Xiu, D. and G. Karniadakis (2003). "Modeling uncertainty in flow simulations via generalized
polynomial chaos." Journal of Computational Physics 187: 137‐167.
Ye, B. and F. A. Salustri (2003). "Simultaneous tolerance synthesis for manufacturing and quality."
Research in Engineering Design 14(2): 98‐106.
You, X. (2008). GapSpace multi‐dimensional assembly analysis. PhD Thesis, The University of North
Carolina.
Yu, K. M., S. T. Tan and M. F. Yuen (1994). "A review of automatic dimensioning and tolerancing
schemes." Engineering with Computers 10(2): 63‐80.
Zhang, C., J. Luo and B. Wang (1999). "Statistical tolerance synthesis using distribution function
zones." International Journal of Production Research 37(17): 3995‐4006.
Zhang, C. and H. P. Wang (1993). "The discrete tolerance optimization problem." Manufacturing
review 6: 60‐60.
Zhang, C. and H. P. Wang (1993). "Integrated tolerance optimisation with simulated annealing."
International Journal of Advanced Manufacturing Technology 8(Copyright 1993, IEE): 167‐
174.
Zhang, C. and H. P. Wang (1997). "Robust design of assembly and machining tolerance allocations."
IIE transactions 30(1): 17‐29.
Zheng, L. Y., C. A. McMahon, L. Li, L. Ding and J. Jamshidi (2008). "Key characteristics management in
product lifecycle management: a survey of methodologies and practices." Proceedings of the
Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 222: 989‐
1008.
Zhengshu, S. (2003). "Tolerance analysis with EDS/VisVSA." Journal of Computing and Information
Science in Engineering 3: 95‐99.
Zhou, C., L. Gao, H. B. Gao and K. Zan (2006). "Particle swarm optimization for simultaneous
optimization of design and machining tolerances." Simulated Evolution and Learning: 873‐
880.
Zhou, S. and D. Ceglarek (2008). "Variation source identification in manufacturing processes based
on relational measurements of key product characteristics." Journal of Manufacturing
Science and Engineering 130(3): 031007.
Zienkiewicz, O. C., R. L. Taylor and J. Z. Zhu (2005). The Finite Element Method: Its Basis And
Fundamentals, Elsevier Butterworth‐Heinemann.
Zitzler, E. and L. Thiele (1999). "Multiobjective evolutionary algorithms: A comparative case study
and the strength Pareto approach." IEEE Transactions on Evolutionary Computation 3(4):
257‐271.
Zou, Z. and E. P. Morse (2004). "A gap‐based approach to capture fitting conditions for mechanical
assembly." Computer‐Aided Design 36(8): 691‐700.