1.
Fitting the additive model by recursion on dimension
Communications in Statistics - Simulation and Computation,
Volume 29,
Issue 3,
2000,
Pages 689-701
Joan G. Staniswalis,
Thomas A. Severini,
Abstract:
We consider estimation for the homoscedastic additive model for multiple regression. A recursion is proposed in Opsomer (1999), and independently by the authors, for obtaining the estimators that solve the normal equations given by Hastie and Tibshirani (1990). The recursion can be exploited to obtain the asymptotic bias and variance expressions of the estimators for any p > 2 (Opsomer 1999) using repeated application of Opsomer and Ruppert (1997). Opsomer and Ruppert (1997) provide asymptotic bias and variance for the estimators when p = 2. Opsomer (1999) also uses the recursion to provide sufficient conditions for convergence of the backfitting algorithm to a unique solution of the normal equations. However, since explicit expressions for the solution to the normal equations are not given, he states, “The lemma does not provide a practical way of evaluating the existence and uniqueness of the backfitting estimators …”. In this paper, explicit expressions for the estimators are derived. The explicit solution requires inverses of n × n matrices to solve the np × np system of normal equations. These matrix inverses are feasible to compute for moderate sample sizes and can be used in place of the backfitting algorithm.
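As a concrete point of reference, below is a minimal sketch of the backfitting iteration for these normal equations, i.e. the algorithm that the paper's explicit solution is meant to replace. It is not the authors' recursion or explicit formula; the running-mean smoother, the toy data, and all names are illustrative assumptions.

```python
import numpy as np

def backfit(y, smoothers, tol=1e-8, max_iter=500):
    """Backfitting for an additive model y = alpha + sum_j f_j(x_j) + error.

    `smoothers` is a list of n x n linear smoother matrices S_j (one per
    covariate).  Returns the intercept and the fitted component vectors f_j
    evaluated at the observed design points."""
    n = len(y)
    alpha = y.mean()
    f = [np.zeros(n) for _ in smoothers]          # component fits
    for _ in range(max_iter):
        change = 0.0
        for j, S in enumerate(smoothers):
            partial = y - alpha - sum(f[k] for k in range(len(f)) if k != j)
            new_fj = S @ partial
            new_fj -= new_fj.mean()               # identifiability: centre each fit
            change = max(change, np.max(np.abs(new_fj - f[j])))
            f[j] = new_fj
        if change < tol:
            break
    return alpha, f

# Toy example with two covariates and simple running-mean smoothers.
rng = np.random.default_rng(0)
n = 100
x1, x2 = np.sort(rng.uniform(size=n)), rng.uniform(size=n)
y = 1 + np.sin(2 * np.pi * x1) + x2 ** 2 + rng.normal(scale=0.1, size=n)

def running_mean_smoother(x, bandwidth=0.1):
    # Row i averages the observations whose x-value lies within `bandwidth` of x[i].
    W = (np.abs(x[:, None] - x[None, :]) < bandwidth).astype(float)
    return W / W.sum(axis=1, keepdims=True)

alpha, (f1, f2) = backfit(y, [running_mean_smoother(x1), running_mean_smoother(x2)])
```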
ISSN:0361-0918
DOI:10.1080/03610910008813635
Publisher: Marcel Dekker, Inc.
Year: 2000
Data source: Taylor

2.
A hybrid method for improved critical points for multiple comparisons
Communications in Statistics - Simulation and Computation,
Volume 29,
Issue 3,
2000,
Pages 703-722
Melinda McCann,
Don Edwards,
Abstract:
We present new methods for computing conservative critical points for simultaneous confidence intervals and bounds in the normal-theoretic fixed-effects general linear model. The methods use a representation of the joint error probability attributed to Uusipaikka (unpublished manuscript, 1984) and popularized by Naiman (Annals of Statistics, 14, 1986, 896–906). This representation expresses the joint error probability of the confidence bounds as an expectation of surface areas of unions of disks with random radii on the sphere. The representation can be used to compute critical points by most well-known conservative methods; these can be regarded as using different upper bounds for the surface area of the union of disks. Our new hybrid methods compute the critical points using the minimum of several of these existing upper bounds, as well as some new bounds, and so improve upon all such methods included in the minimum expression. The new methods are a substantial improvement over the Hunter-Worsley and Šidák methods. Over 216 test cases, the sample size savings of the new methods relative to the Hunter-Worsley method ranged from a worst-case loss of 3% to savings as large as 65%. Compared to the Šidák method, sample size savings ranged from a worst-case loss of 2% to savings as large as 95%. Savings of 5–10% were typical against both existing methods. One of the new methods, the “capped tubes” critical point, can be computed relatively cheaply without a large sacrifice in relative efficiency.
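For orientation, the sketch below computes only the classical Šidák (and, for contrast, Bonferroni) critical points that serve as baselines in the abstract; the tube-based hybrid computation itself is not reproduced. The function names, degrees of freedom, and example values are assumptions.

```python
from scipy import stats

def sidak_critical_point(k, alpha=0.05, df=30):
    """Two-sided Sidak critical point for k simultaneous t-intervals.

    Each interval gets per-comparison level (1 - alpha)**(1/k); the joint level
    is exactly 1 - alpha when the statistics are independent, and the point is
    used conservatively in the usual two-sided normal-theory settings."""
    per_comparison = 1.0 - (1.0 - alpha) ** (1.0 / k)
    return stats.t.ppf(1.0 - per_comparison / 2.0, df)

def bonferroni_critical_point(k, alpha=0.05, df=30):
    """Two-sided Bonferroni critical point, shown for comparison."""
    return stats.t.ppf(1.0 - alpha / (2.0 * k), df)

for k in (3, 10, 50):
    print(k, round(sidak_critical_point(k), 3), round(bonferroni_critical_point(k), 3))
```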
ISSN:0361-0918
DOI:10.1080/03610910008813636
Publisher: Marcel Dekker, Inc.
Year: 2000
Data source: Taylor

3.
Estimating the number of factors to include in a high-dimensional multivariate bilinear model
Communications in Statistics - Simulation and Computation,
Volume 29,
Issue 3,
2000,
Pages 723-746
Eun Sug Park,
Ronald C. Henry,
Clifford H. Spiegelman,
Abstract:
We present two new statistics for estimating the number of factors underlying a multivariate system. One of the two new methods, the original NUMFACT, has been used in high-profile environmental studies. The two new methods are first explained from a geometrical viewpoint. We then present an algebraic development and asymptotic cutoff points. Next we present a simulation study showing that for skewed data the new methods are typically superior to traditional methods, and for normally distributed data the new methods are competitive with the best of the traditional methods. Finally, we show how the methods compare using two environmental data sets.
ISSN:0361-0918
DOI:10.1080/03610910008813637
Publisher: Marcel Dekker, Inc.
Year: 2000
Data source: Taylor

4.
The Spokane Heart Study: Weibull regression and coronary artery disease
Communications in Statistics - Simulation and Computation,
Volume 29,
Issue 3,
2000,
Pages 747-761
Nairanjana Dasgupta,
Peijin Xie,
Monte O. Cheney,
Lyle Broemeling,
C. Harold Mielke,
Abstract:
Coronary artery calcium is a marker of coronary artery disease and measures the progression of atherosclerosis. It is measured by electron beam computed tomography, and the measured amount of coronary artery calcium is highly right-skewed and left-censored. The distribution of coronary artery calcium appears to be Weibull. We therefore propose a Weibull regression model and analyze the data using it. Our analysis is based on data from the Spokane Heart Study, a cohort of about a thousand subjects who are assessed every two years for coronary artery calcium and risk factors of coronary artery disease. The major focus of the heart study is to determine the natural history of atherosclerosis in its early phase, and we analyze the data as a cross-sectional study with 859 subjects. We also wish to highlight the use of Weibull regression techniques in situations like this, where the data are extremely right-skewed. Our main emphasis is on examining the effect on coronary artery calcium of the traditional risk factors of age, gender, lipid profile (cholesterol and HDL), patient history of lipid abnormality, hypertension, and smoking, and other family history risks. We found that the most important factors influencing the disease were age, sex, and patient history of smoking and lipid abnormality.
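As an illustration of the general technique, the sketch below fits a Weibull regression by maximum likelihood with a left-censoring (detection) limit, using a log-link for the Weibull scale. The parameterization, the covariate, and the simulated data are assumptions, not the authors' model for the Spokane Heart Study data.

```python
import numpy as np
from scipy import optimize, stats

def neg_log_lik(params, y, X, limit):
    """Weibull regression with scale exp(X @ beta) and a common shape k.

    Observations at or below `limit` are treated as left-censored: their
    contribution is the Weibull CDF at the limit rather than the density."""
    k = np.exp(params[0])                  # shape > 0 via log parameterisation
    beta = params[1:]
    scale = np.exp(X @ beta)               # log-link for the Weibull scale
    censored = y <= limit
    ll = np.where(
        censored,
        stats.weibull_min.logcdf(limit, k, scale=scale),
        stats.weibull_min.logpdf(y, k, scale=scale),
    )
    return -ll.sum()

# Simulated stand-in for calcium scores: an intercept plus an age covariate.
rng = np.random.default_rng(1)
n = 300
age = rng.uniform(40, 70, n)
X = np.column_stack([np.ones(n), (age - 55) / 10])
true_scale = np.exp(2.0 + 0.8 * X[:, 1])
y = true_scale * rng.weibull(0.7, n)       # shape < 1 gives strong right skew
limit = 1.0                                # detection limit -> left censoring
y = np.maximum(y, limit)                   # censored values recorded at the limit

start = np.zeros(1 + X.shape[1])
fit = optimize.minimize(neg_log_lik, start, args=(y, X, limit), method="Nelder-Mead")
shape_hat, beta_hat = np.exp(fit.x[0]), fit.x[1:]
```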
ISSN:0361-0918
DOI:10.1080/03610910008813638
Publisher: Marcel Dekker, Inc.
Year: 2000
Data source: Taylor

5.
A comparison of two approaches for power and sample size calculations in logistic regression models
Communications in Statistics - Simulation and Computation,
Volume 29,
Issue 3,
2000,
Pages 763-791
Gwowen Shieh,
Abstract:
Whittemore (1981) proposed an approach for calculating the sample size needed to test hypotheses with specified significance and power against a given alternative for logistic regression with small response probability. Based on the distribution of the covariate, which may be either discrete or continuous, this approach first provides a simple closed-form approximation to the asymptotic covariance matrix of the maximum likelihood estimates, and then uses it to calculate the sample size needed to test a hypothesis about the parameter. Self et al. (1992) described a general approach for power and sample size calculations within the framework of generalized linear models, which include logistic regression as a special case. Their approach is based on an approximation to the distribution of the likelihood ratio statistic. Unlike the Whittemore approach, their approach is not limited to situations of small response probability; however, it is restricted to models with a finite number of covariate configurations. This study compares the two approaches to see how accurately they calculate power and sample size in logistic regression models with various response probabilities and covariate distributions. The results indicate that the Whittemore approach has a slight advantage in achieving the nominal power only in one case with small response probability; it is outperformed in all other cases with larger response probabilities. In general, the approach of Self et al. (1992) is recommended for all values of the response probability. However, its extension to logistic regression models with an infinite number of covariate configurations involves an arbitrary decision about categorization and leads to a discrete approximation. As shown in this paper, the examined discrete approximations appear to be sufficiently accurate for practical purposes.
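Neither analytic approach is reproduced here; the sketch below is a brute-force Monte Carlo power check of the kind one can use to verify either calculation for a given design. The single standard-normal covariate, the Wald test, and all parameter values are assumptions.

```python
import numpy as np
import statsmodels.api as sm

def simulated_power(n, beta0, beta1, n_sims=2000, alpha=0.05, seed=0):
    """Monte Carlo power of the Wald test of H0: beta1 = 0 in a logistic model
    with a single standard-normal covariate."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        x = rng.standard_normal(n)
        p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))
        y = rng.binomial(1, p)
        X = sm.add_constant(x)
        try:
            res = sm.Logit(y, X).fit(disp=0)
        except Exception:        # occasional separation / non-convergence
            continue
        if res.pvalues[1] < alpha:
            rejections += 1
    return rejections / n_sims

# e.g. power at n = 200 for a modest effect with a low baseline response probability
print(simulated_power(200, beta0=-2.0, beta1=0.5))
```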
ISSN:0361-0918
DOI:10.1080/03610910008813639
Publisher: Marcel Dekker, Inc.
Year: 2000
Data source: Taylor

6.
Finding Bounds Applied to Serial Dilution Experiments
Communications in Statistics - Simulation and Computation,
Volume 29,
Issue 3,
2000,
Pages 793-799
Robert J. Blodgett,
Abstract:
A method of finding bounds for a parameter in a sum of similar functions is introduced. Such bounds can help an iteration procedure to estimate the parameter. The method applies to the equations for finding maximum likelihood estimates of concentration in a serial dilution experiment, and these bounds are calculated for serial dilution experiments as an example of the method.
ISSN:0361-0918
DOI:10.1080/03610910008813640
Publisher: Marcel Dekker, Inc.
Year: 2000
Data source: Taylor

7.
A Markov regression model for the analysis of the postpartum lactational amenorrhea
Communications in Statistics - Simulation and Computation,
Volume 29,
Issue 3,
2000,
Pages 801-828
Y. Le Strat,
G. Thomas,
J.-C. Thalabard,
Abstract:
Failure time data represent a particular case of binary longitudinal data. The corresponding analysis of the effect of explanatory covariates, repeatedly collected over time, on the failure rate has been largely facilitated by the Cox semi-parametric regression model. However, the interpretation of the estimated parameters associated with time-dependent covariates is not straightforward, nor does this model fully account for the dynamics of the effect of a covariate over time. Markovian regression models appear as complementary tools to address these specific issues from the predictive point of view. We illustrate these aspects using data from the WHO multicenter study, which was designed to analyze the relation between the duration of postpartum lactational amenorrhea and the breastfeeding pattern. One of the main advantages of this approach in the field of reproductive epidemiology is that it provides a flexible tool, easily and directly understood by clinicians and fieldworkers, for simulating situations that have not yet been observed and predicting their effects on the duration of amenorrhea.
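To make the modeling idea concrete, the sketch below fits a two-state, discrete-time Markov regression in which the probability of remaining in the amenorrhea state each month follows a logistic function of covariates. It is a generic illustration, not the WHO study model; the absorbing-state simplification, the covariate, and the simulated data are all assumptions.

```python
import numpy as np
from scipy import optimize, special

def neg_log_lik(beta, sequences):
    """Two-state discrete-time Markov regression.

    Each element of `sequences` is (states, X): states is a 0/1 array over
    months (1 = amenorrhea persists) and X holds the covariates for each
    month.  The probability of remaining in state 1 at month t is
    expit(X[t] @ beta); state 0 is treated as absorbing here."""
    ll = 0.0
    for states, X in sequences:
        for t in range(1, len(states)):
            if states[t - 1] == 0:
                continue                       # absorbing: nothing to model
            p_stay = np.clip(special.expit(X[t] @ beta), 1e-12, 1 - 1e-12)
            ll += np.log(p_stay) if states[t] == 1 else np.log(1.0 - p_stay)
    return -ll

# Simulated data: intercept plus a time-varying breastfeeding-frequency covariate.
rng = np.random.default_rng(2)
true_beta = np.array([1.0, 0.8])
sequences = []
for _ in range(200):
    T = 24
    feeds = rng.uniform(0, 2, T)               # standardised feeding frequency
    X = np.column_stack([np.ones(T), feeds])
    states = np.ones(T, dtype=int)
    for t in range(1, T):
        if states[t - 1] == 1 and rng.random() > special.expit(X[t] @ true_beta):
            states[t:] = 0
            break
    sequences.append((states, X))

fit = optimize.minimize(neg_log_lik, np.zeros(2), args=(sequences,), method="BFGS")
```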
ISSN:0361-0918
DOI:10.1080/03610910008813641
Publisher: Marcel Dekker, Inc.
Year: 2000
Data source: Taylor

8.
A continuous approximation for evaluating reliability of complex systems under the stress-strength model
Communications in Statistics - Simulation and Computation,
Volume 29,
Issue 3,
2000,
Pages 829-844
Dilip Roy,
Tanmoy Dasgupta,
Abstract:
Evaluation of system reliability for complex systems based on Taylor's approximation becomes intractable as system complexity grows. Taguchi's concept of random experimentation was exploited by English et al. (1996) to discretize complex systems and determine reliability values. We indicate a few demerits of the discretization method and propose to retain the continuous character of the original problem by evaluating system reliability with a range approximation method. The proposed method works better than the discretization approach in all three engineering problems considered for the purpose of demonstration.
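For contrast with both the discretization and the range approximation, the sketch below gives the plain Monte Carlo estimate of the stress-strength reliability R = P(strength > stress) against which such approximations are typically judged. The stress and strength distributions in the toy example are assumptions.

```python
import numpy as np

def monte_carlo_reliability(draw_stress, draw_strength, n_draws=200_000, seed=3):
    """Plain Monte Carlo estimate of R = P(strength > stress).

    `draw_stress` and `draw_strength` take (rng, size) and return samples of
    the induced stress and the strength; for a complex system these would be
    functions of several component random variables."""
    rng = np.random.default_rng(seed)
    stress = draw_stress(rng, n_draws)
    strength = draw_strength(rng, n_draws)
    r_hat = np.mean(strength > stress)
    se = np.sqrt(r_hat * (1.0 - r_hat) / n_draws)   # binomial standard error
    return r_hat, se

# Toy example: stress is the product of a load and a geometry factor,
# strength is lognormal, so the comparison is nonlinear in the inputs.
stress_fn = lambda rng, m: rng.normal(100, 10, m) * rng.uniform(0.9, 1.1, m)
strength_fn = lambda rng, m: rng.lognormal(np.log(130), 0.1, m)
print(monte_carlo_reliability(stress_fn, strength_fn))
```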
ISSN:0361-0918
DOI:10.1080/03610910008813642
Publisher: Marcel Dekker, Inc.
Year: 2000
Data source: Taylor

9.
Economic statistical design for X̄ and S² control charts: A Markov chain approach
Communications in Statistics - Simulation and Computation,
Volume 29,
Issue 3,
2000,
Pages 845-873
Su-Fen Yang,
M.A. Rahim,
Abstract:
Over-adjustment of a process may result in shifts in the process mean, the process variance, or both, ultimately affecting the quality of products. A statistically constrained model is developed for the joint economic statistical design of X̄ and S² control charts to control both the process mean and variance. The objective is to determine the design parameters of the control charts that minimize the total quality control cost. A Markov chain approach is used to derive the model. Application of the model is demonstrated through a numerical example.
ISSN:0361-0918
DOI:10.1080/03610910008813643
Publisher: Marcel Dekker, Inc.
Year: 2000
Data source: Taylor

10.
Tests for mean equality that do not require homogeneity of variances: do they really work?
Communications in Statistics - Simulation and Computation,
Volume 29,
Issue 3,
2000,
Pages 875-895
H. J. Keselman,
Rand R. Wilcox,
Jason Taylor,
Rhonda K. Kowalchuk,
Abstract:
Tests for mean equality proposed by Weerahandi (1995) and Chen and Chen (1998), tests that do not require equality of population variances, were examined when data were not only heterogeneous but also nonnormal in unbalanced completely randomized designs. These tests were also compared to a test examined by Lix and Keselman (1998), which uses a heteroscedastic statistic (i.e., Welch, 1951) with robust estimators (20% trimmed means and Winsorized variances). Our findings confirmed previously published results that the tests are indeed robust to variance heterogeneity when the data are obtained from normal populations. However, the Weerahandi (1995) and Chen and Chen (1998) tests were not robust when data were obtained from nonnormal populations: rates of Type I error were typically in excess of 10% and at times exceeded 50%. In contrast, the statistic presented by Lix and Keselman (1998) was generally robust to variance heterogeneity and nonnormality.
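As a pointer to the kind of robust statistic involved, the sketch below implements the two-sample Welch-type test on 20% trimmed means with Winsorized variances (Yuen, 1974); the omnibus multi-group statistic used by Lix and Keselman (1998) is not reproduced here. The example data and names are assumptions.

```python
import numpy as np
from scipy import stats

def yuen_welch(x, y, trim=0.2):
    """Two-sample Welch-type test on trimmed means with Winsorized variances
    (Yuen, 1974).  Returns the test statistic, approximate df, and p-value."""
    def trimmed_pieces(a):
        a = np.sort(np.asarray(a, dtype=float))
        n = a.size
        g = int(np.floor(trim * n))
        tmean = a[g:n - g].mean()                     # trimmed mean
        w = a.copy()
        w[:g], w[n - g:] = a[g], a[n - g - 1]         # Winsorize the tails
        swin2 = w.var(ddof=1)                         # Winsorized variance
        h = n - 2 * g                                 # effective sample size
        d = (n - 1) * swin2 / (h * (h - 1))           # squared-SE contribution
        return tmean, d, h
    t1, d1, h1 = trimmed_pieces(x)
    t2, d2, h2 = trimmed_pieces(y)
    t_stat = (t1 - t2) / np.sqrt(d1 + d2)
    df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
    p = 2 * stats.t.sf(abs(t_stat), df)
    return t_stat, df, p

# Skewed, heteroscedastic example where the ordinary t-test misbehaves.
rng = np.random.default_rng(4)
g1 = rng.lognormal(0.0, 1.0, 20)
g2 = rng.lognormal(0.0, 2.0, 40)
print(yuen_welch(g1, g2))
```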
ISSN:0361-0918
DOI:10.1080/03610910008813644
Publisher: Marcel Dekker, Inc.
Year: 2000
Data source: Taylor