1. Zero-Inflated Poisson Regression, With an Application to Defects in Manufacturing
Technometrics, Volume 34, Issue 1, 1992, Pages 1-14
Diane Lambert
Abstract:
Zero-inflated Poisson (ZIP) regression is a model for count data with excess zeros. It assumes that with probability p the only possible observation is 0, and with probability 1 − p a Poisson(λ) random variable is observed. For example, when manufacturing equipment is properly aligned, defects may be nearly impossible. But when it is misaligned, defects may occur according to a Poisson(λ) distribution. Both the probability p of the perfect, zero-defect state and the mean number of defects λ in the imperfect state may depend on covariates. Sometimes p and λ are unrelated; other times p is a simple function of λ, such as p = 1/(1 + λ^τ) for an unknown constant τ. In either case, ZIP regression models are easy to fit. The maximum likelihood estimates (MLEs) are approximately normal in large samples, and confidence intervals can be constructed by inverting likelihood ratio tests or using the approximate normality of the MLEs. Simulations suggest, however, that the confidence intervals based on likelihood ratio tests are better. Finally, ZIP regression models are not only easy to interpret, but they can also lead to more refined data analyses. For example, in an experiment concerning soldering defects on printed wiring boards, two sets of conditions gave about the same mean number of defects, but the perfect state was more likely under one set of conditions and the mean number of defects in the imperfect state was smaller under the other; that is, ZIP regression can show not only which conditions give a lower mean number of defects but also why the means are lower.
ISSN: 0040-1706
DOI: 10.1080/00401706.1992.10485228
Publisher: Taylor & Francis Group
Year: 1992
Data source: Taylor
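The two-part mixture in the abstract can be sketched in a few lines. This is a minimal illustration of the ZIP probability function only (not the paper's regression fit); the values p = 0.3 and λ = 2.0 are made up for the example.

```python
import math

def zip_pmf(k, p, lam):
    """P(Y = k) under a zero-inflated Poisson: with probability p the unit
    is in the perfect state (always 0); otherwise Y ~ Poisson(lam)."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return p + (1 - p) * poisson if k == 0 else (1 - p) * poisson

# Excess zeros: the ZIP zero probability versus the plain Poisson one.
print(zip_pmf(0, 0.3, 2.0))   # 0.3 + 0.7 * exp(-2), about 0.395
print(math.exp(-2.0))         # plain Poisson(2) zero probability, about 0.135
```

In the regression setting the paper describes, p and λ would each depend on covariates (e.g. via logit and log links) and be estimated by maximum likelihood.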
2. Screening, Predicting, and Computer Experiments
Technometrics, Volume 34, Issue 1, 1992, Pages 15-25
William J. Welch, Robert J. Buck, Jerome Sacks, Henry P. Wynn, Toby J. Mitchell, Max D. Morris
Abstract:
Many scientific phenomena are now investigated by complex computer models or codes. Given the input values, the code produces one or more outputs via a complex mathematical model. Often the code is expensive to run, and it may be necessary to build a computationally cheaper predictor to enable, for example, optimization of the inputs. If there are many input factors, an initial step in building a predictor is identifying (screening) the active factors. We model the output of the computer code as the realization of a stochastic process. This model has a number of advantages. First, it provides a statistical basis, via the likelihood, for a stepwise algorithm to determine the important factors. Second, it is very flexible, allowing nonlinear and interaction effects to emerge without explicitly modeling such effects. Third, the same data are used for screening and building the predictor, so expensive runs are efficiently used. We illustrate the methodology with two examples, both having 20 input variables. In these examples, we identify the important variables, detect curvature and interactions, and produce a useful predictor with 30–50 runs of the computer code.
ISSN: 0040-1706
DOI: 10.1080/00401706.1992.10485229
Publisher: Taylor & Francis Group
Year: 1992
Data source: Taylor
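The stochastic-process predictor the abstract describes is, in its simplest form, a kriging interpolator. The sketch below makes strong simplifying assumptions not in the paper: a known constant mean, a Gaussian correlation with a fixed θ, one input dimension, and only two design points so the 2×2 correlation matrix can be inverted in closed form. The paper's method handles 20 factors and estimates such quantities by likelihood.

```python
import math

def corr(x1, x2, theta=5.0):
    """Gaussian (power-exponential, power 2) correlation between two inputs."""
    return math.exp(-theta * (x1 - x2) ** 2)

def krige(x, xs, ys, mu, theta=5.0):
    """Predictor yhat(x) = mu + r(x)' R^{-1} (y - mu 1) for two design
    points xs with responses ys and known mean mu."""
    r12 = corr(xs[0], xs[1], theta)
    det = 1.0 - r12 * r12
    rinv = [[1.0 / det, -r12 / det],     # closed-form inverse of
            [-r12 / det, 1.0 / det]]     # R = [[1, r12], [r12, 1]]
    r = [corr(x, xs[0], theta), corr(x, xs[1], theta)]
    resid = [ys[0] - mu, ys[1] - mu]
    w = [rinv[0][0] * resid[0] + rinv[0][1] * resid[1],
         rinv[1][0] * resid[0] + rinv[1][1] * resid[1]]
    return mu + r[0] * w[0] + r[1] * w[1]

# The predictor interpolates the code output at the design points and
# shrinks back toward the mean far from them.
print(krige(0.0, [0.0, 1.0], [1.0, 3.0], mu=2.0))   # reproduces y = 1.0
print(krige(10.0, [0.0, 1.0], [1.0, 3.0], mu=2.0))  # near the mean 2.0
```

Interpolation is what makes such a surrogate attractive for deterministic computer codes: re-running the code at a design point would give exactly the stored output.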
3. Response Surface Models With Random Block Effects
Technometrics, Volume 34, Issue 1, 1992, Pages 26-37
A. I. Khuri
Abstract:
In many experimental situations, a response surface design is divided into several blocks to control an extraneous source of variation. The traditional approach in most response surface applications is to treat the block effect as fixed in the assumed model. There are, however, situations in which it is more appropriate to consider the block effect as random. This article is concerned with inference about a response surface model in the presence of a random block effect. Since this model also contains fixed polynomial effects, it is considered to be a mixed-effects model. The main emphasis of the proposed analysis is on estimation and testing of the fixed effects. A two-stage mixed-model procedure is developed for this purpose. The variance components due to the random block effect and the experimental error are first estimated and then used to obtain the generalized least squares estimator of the fixed effects. This procedure produces the so-called Yates combined intra- and inter-block estimator. By contrast, the Yates intra-block estimator is the one obtained when the block effect is treated as fixed. In particular, if the response surface design blocks orthogonally, then the two estimators are shown to be identical. An experiment on bonding galvanized steel bars is used to motivate the problem and illustrate the results.
ISSN: 0040-1706
DOI: 10.1080/00401706.1992.10485230
Publisher: Taylor & Francis Group
Year: 1992
Data source: Taylor
4. Case-Deletion Diagnostics for Mixed Models
Technometrics, Volume 34, Issue 1, 1992, Pages 38-45
Ronald Christensen, Larry M. Pearson, Wesley Johnson
Abstract:
Mixed linear models arise in many areas of application. Standard estimation methods for mixed models are sensitive to bizarre observations. Such influential observations can completely distort an analysis and lead to inappropriate actions and conclusions. We develop case-deletion diagnostics for detecting influential observations in mixed linear models. Diagnostics for both fixed effects and variance components are proposed. Computational formulas are given that make the procedures feasible. The methods are illustrated using examples.
ISSN: 0040-1706
DOI: 10.1080/00401706.1992.10485231
Publisher: Taylor & Francis Group
Year: 1992
Data source: Taylor
5. A Multivariate Exponentially Weighted Moving Average Control Chart
Technometrics, Volume 34, Issue 1, 1992, Pages 46-53
Cynthia A. Lowry, William H. Woodall, Charles W. Champ, Steven E. Rigdon
Abstract:
A multivariate extension of the exponentially weighted moving average (EWMA) control chart is presented, and guidelines are given for designing this easy-to-implement multivariate procedure. A comparison shows that the average run length (ARL) performance of this chart is similar to that of multivariate cumulative sum (CUSUM) control charts in detecting a shift in the mean vector of a multivariate normal distribution. As with the Hotelling's χ² and multivariate CUSUM charts, the ARL performance of the multivariate EWMA chart depends on the underlying mean vector and covariance matrix only through the value of the noncentrality parameter. Worst-case scenarios show that Hotelling's χ² charts should always be used in conjunction with multivariate CUSUM and EWMA charts to avoid potential inertia problems. Examples are given to illustrate the use of the proposed procedure.
ISSN: 0040-1706
DOI: 10.1080/00401706.1992.10485232
Publisher: Taylor & Francis Group
Year: 1992
Data source: Taylor
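The chart's recursion can be sketched under simplifying assumptions: the code below takes a zero in-control target and a known identity covariance matrix, so the variance of the EWMA vector Z_i reduces to the scalar r(1 − (1 − r)^{2i})/(2 − r) and the plotting statistic is a scaled squared length of Z_i. The general chart in the paper uses an arbitrary covariance matrix; the smoothing constant r = 0.2 here is only an example.

```python
def mewma_stats(xs, r=0.2):
    """T^2-type statistics for a multivariate EWMA chart, assuming the
    in-control mean is 0 and the covariance matrix is the identity.
    xs is a list of p-dimensional observations (lists of floats)."""
    p = len(xs[0])
    z = [0.0] * p                       # EWMA vector, started at the target
    stats = []
    for i, x in enumerate(xs, start=1):
        z = [r * xj + (1 - r) * zj for xj, zj in zip(x, z)]
        # Var(Z_i) = r (1 - (1-r)^{2i}) / (2-r) times the identity
        var = r * (1 - (1 - r) ** (2 * i)) / (2 - r)
        stats.append(sum(zj * zj for zj in z) / var)
    return stats

# A sustained shift in the mean vector drives the statistic upward;
# a signal would be declared once it exceeds a control limit h.
print(mewma_stats([[0.1, -0.2], [2.0, 2.0], [2.0, 2.0]]))
```

With r = 1 the chart reduces to the Hotelling-type chart on individual observations, which is the no-memory limiting case.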
6. Analysis of CUSUM and Other Markov-type Control Schemes by Using Empirical Distributions
Technometrics, Volume 34, Issue 1, 1992, Pages 54-63
Emmanuel Yashchin
Abstract:
The run-length distribution of a cumulative sum control scheme, when the underlying distribution of the incoming observations is unknown, is discussed. Given a sample of size n from this distribution, the estimators related to various characteristics of the run length can be obtained by using the empirical cdf instead of the true cdf in the standard analysis procedure. The article discusses the properties of the resulting point estimators, as well as interval estimators obtained by using resampling techniques. Applications of the technique for the purpose of design and analysis of control schemes are also discussed. The proposed methodology can be easily adapted for other Markov-type control schemes.
ISSN: 0040-1706
DOI: 10.1080/00401706.1992.10485233
Publisher: Taylor & Francis Group
Year: 1992
Data source: Taylor
7. An SPC Model for Short Production Runs: Minimizing Expected Cost
Technometrics, Volume 34, Issue 1, 1992, Pages 64-73
Stephen V. Crowder
Abstract:
In some manufacturing situations, the assumption of a long production run may not be appropriate. For example, job shops typically will not have the benefit of large production runs. Much of the literature on the economic design of control charts, however, assumes an effectively infinite production run. A finite-horizon or short-production-run version of an economic-process-control model of Bather and Box and Jenkins is considered here. An algorithm is derived that allows implementation of this model and adjustment strategy for the short-production-run case. The solution to the control problem is consistent with traditional statistical process control philosophy in that process adjustment is called for only when the process mean is substantially off target. The control or adjustment limits for this model are time-varying and depend on the break-even points between quadratic cost for being off target and fixed adjustment cost. It is shown that the length of the production run can greatly influence the control or adjustment strategy. Use of control limits based on the assumption of an infinite-run process can significantly increase total expected cost.
ISSN: 0040-1706
DOI: 10.1080/00401706.1992.10485234
Publisher: Taylor & Francis Group
Year: 1992
Data source: Taylor
8. Models for Variable-Stress Accelerated Life Testing Experiments Based on Wiener Processes and the Inverse Gaussian Distribution
Technometrics, Volume 34, Issue 1, 1992, Pages 74-82
Kjell A. Doksum, Arnljot Høyland
Abstract:
Variable-stress accelerated life testing trials are experiments in which each of the units in a random sample of units of a product is run under increasingly severe conditions to get information quickly on its life distribution. We consider a fatigue failure model in which accumulated decay is governed by a continuous Gaussian process W(y) whose distribution changes at certain stress change points t1 < t2 < … < tk. Continuously increasing stress is also considered. Failure occurs the first time W(y) crosses a critical boundary ω. The distribution of time to failure for the models can be represented in terms of time-transformed inverse Gaussian distribution functions, and the parameters in models for experiments with censored data can be estimated using maximum likelihood methods. A common approach to the modeling of failure times for experimental units subject to increased stress at certain stress change points is to assume that the failure times follow a distribution that consists of segments of Weibull distributions with the same shape parameter. Our Wiener-process approach gives an alternative flexible class of time-transformed inverse Gaussian models in which time to failure is modeled in terms of accumulated decay reaching a critical level and in which parametric functions are used to express how higher stresses accelerate the rate of decay and the time to failure. Key parameters such as mean life under normal stress, quantiles of the normal stress distribution, and decay rate under normal and accelerated stress appear naturally in the model. A variety of possible parameterizations of the decay rate leads to flexible modeling. Model fit can be checked by percentage-percentage plots.
ISSN: 0040-1706
DOI: 10.1080/00401706.1992.10485235
Publisher: Taylor & Francis Group
Year: 1992
Data source: Taylor
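The first-passage connection in the abstract can be illustrated directly: for a Wiener process with drift ν > 0 and diffusion coefficient σ, the time to first cross a level ω > 0 follows an inverse Gaussian law with the standard closed-form cdf below. This is the textbook building block, not the paper's full model with stress change points.

```python
import math

def std_normal_cdf(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def ig_cdf(t, omega, nu, sigma=1.0):
    """P(T <= t), where T is the first time a Wiener process with drift nu
    and diffusion sigma crosses the critical boundary omega."""
    if t <= 0:
        return 0.0
    s = sigma * math.sqrt(t)
    return (std_normal_cdf((nu * t - omega) / s)
            + math.exp(2 * nu * omega / sigma ** 2)
            * std_normal_cdf(-(nu * t + omega) / s))

# Decay drifting upward at rate 1 toward a boundary at 1: the failure-time
# distribution concentrates around the mean crossing time omega / nu = 1.
print(ig_cdf(1.0, omega=1.0, nu=1.0))
print(ig_cdf(4.0, omega=1.0, nu=1.0))
```

In the paper's variable-stress setting, higher stress after each change point would raise the decay rate, which shows up as a time transformation of this baseline distribution.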
9. Testing Reliability in a Stress-Strength Model When X and Y Are Normally Distributed
Technometrics, Volume 34, Issue 1, 1992, Pages 83-91
Samaradasa Weerahandi, Richard A. Johnson
Abstract:
We consider the stress-strength problem in which a unit of strength X is subjected to environmental stress Y. An important problem in stress-strength reliability concerns testing hypotheses about the reliability parameter R = P[X > Y]. In this article, we consider situations in which X and Y are independent and have normal distributions or can be transformed to normality. We do not require the two population variances to be equal. Our approach leads to test statistics which are exact p values that are represented as one-dimensional integrals. On the basis of the p value, one can also construct approximate confidence intervals for the parameter of interest. We also present an extension of the testing procedure to the case in which both strength and stress depend on covariates. For comparative purposes, the Bayesian solution to the problem is also presented. We use data from a rocket-motor experiment to illustrate the procedure.
ISSN: 0040-1706
DOI: 10.1080/00401706.1992.10485236
Publisher: Taylor & Francis Group
Year: 1992
Data source: Taylor
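When the four normal parameters are treated as known, R = P[X > Y] has a closed form, since X − Y is again normal; the sketch below computes that quantity. The paper's actual contribution, exact p values when the parameters are unknown and the variances unequal, involves a one-dimensional integral not reproduced here.

```python
import math

def stress_strength_reliability(mu_x, sd_x, mu_y, sd_y):
    """R = P(X > Y) for independent normal strength X and stress Y.
    X - Y is normal with mean mu_x - mu_y and variance sd_x^2 + sd_y^2,
    so R = Phi((mu_x - mu_y) / sqrt(sd_x^2 + sd_y^2))."""
    d = (mu_x - mu_y) / math.hypot(sd_x, sd_y)
    return 0.5 * (1 + math.erf(d / math.sqrt(2)))

# Equal strength and stress distributions give R = 1/2; raising mean
# strength relative to mean stress raises the reliability.
print(stress_strength_reliability(0.0, 1.0, 0.0, 1.0))   # 0.5
print(stress_strength_reliability(1.0, 1.0, 0.0, 1.0))   # about 0.76
```

In practice the parameters are estimated from samples, which is exactly why the inferential machinery of the paper (exact p values and confidence intervals for R) is needed.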
10. Shorter Communication: A Note on the Determination and Construction of Minimal Orthogonal Main-Effect Plans
Technometrics, Volume 34, Issue 1, 1992, Pages 92-96
Mike Jacroux
Abstract:
In this article, I derive sufficient conditions for an orthogonal main-effect plan having k factors at s_i levels, i = 1, …, k, to have a minimal number of observations. These sufficient conditions are then used to show that many of the orthogonal main-effect plans given previously in the literature have minimal numbers of observations.
ISSN: 0040-1706
DOI: 10.1080/00401706.1992.10485237
Publisher: Taylor & Francis Group
Year: 1992
Data source: Taylor