1.
Solving large-scale linear programs by interior-point methods under the MATLAB environment

Optimization Methods and Software,
Volume 10,
Issue 1,
1998,
Pages 1-31
Yin Zhang
Abstract:
In this paper, we describe our implementation of a primal-dual infeasible-interior-point algorithm for large-scale linear programming under the MATLAB environment. The resulting software is called LIPSOL (Linear-programming Interior-Point SOLvers). LIPSOL is designed to take advantage of MATLAB's sparse-matrix functions and external interface facilities, and of existing Fortran sparse Cholesky codes. Under the MATLAB environment, LIPSOL inherits a high degree of simplicity and versatility in comparison to its counterparts in Fortran or C. More importantly, our extensive computational results demonstrate that LIPSOL attains performance comparable to that of efficient Fortran or C codes in solving large-scale problems. In addition, we discuss in detail a technique for overcoming numerical instability in Cholesky factorization at the end-stage of iterations in interior-point algorithms.
ISSN: 1055-6788
DOI: 10.1080/10556789808805699
Publisher: Gordon and Breach Science Publishers
Year: 1998
Source: Taylor
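The abstract's closing point, numerical instability of the Cholesky factorization in late interior-point iterations, admits a generic remedy worth sketching: retry the factorization with a growing diagonal shift until it succeeds. This illustrates the failure mode and a standard safeguard only, not necessarily LIPSOL's own technique; the function name `robust_cholesky` is hypothetical.

```python
import numpy as np

def robust_cholesky(A, max_shift=1e-4):
    """Cholesky-factor A, retrying with a growing diagonal shift.

    Late in interior-point iterations the normal-equations matrix
    becomes extremely ill-conditioned, and a bare factorization can
    fail on a matrix that is positive definite in exact arithmetic.
    A common safeguard (illustrative here, not LIPSOL's actual code)
    is to factor A + shift*I for the smallest shift that works.
    """
    n = A.shape[0]
    shift = 0.0
    while True:
        try:
            L = np.linalg.cholesky(A + shift * np.eye(n))
            return L, shift
        except np.linalg.LinAlgError:
            # Start tiny, then grow geometrically; give up past max_shift.
            shift = 1e-12 if shift == 0.0 else 10.0 * shift
            if shift > max_shift:
                raise
```

On a well-conditioned matrix the shift stays zero; on a slightly indefinite one (rounding pushed an eigenvalue just below zero) a shift on the order of the perturbation restores the factorization.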

2.
Computing a sparse Jacobian matrix by rows and columns

Optimization Methods and Software,
Volume 10,
Issue 1,
1998,
Pages 33-48
A. K. M. Shahadat Hossain,
Trond Steihaug
Abstract:
Efficient estimation of large sparse Jacobian matrices has been studied extensively over the last couple of years. It has been observed that the estimation of a Jacobian matrix can be posed as a graph coloring problem. Elements of the matrix are estimated by taking divided differences in several directions corresponding to groups of structurally independent columns. Another possibility is to obtain the nonzero elements by means of so-called automatic differentiation, which gives estimates free of the truncation error one encounters in a divided-difference scheme. In this paper we show that it is possible to exploit sparsity in both columns and rows by employing the forward and reverse modes of automatic differentiation. A graph-theoretic characterization of the problem is given.
ISSN: 1055-6788
DOI: 10.1080/10556789808805700
Publisher: Gordon and Breach Science Publishers
Year: 1998
Source: Taylor
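The column-grouping idea in the abstract, estimating all structurally independent columns of the Jacobian with a single divided difference, can be sketched as follows. The greedy coloring and the function names are illustrative assumptions, not the authors' algorithm (which additionally exploits row sparsity via reverse-mode automatic differentiation):

```python
import numpy as np

def color_columns(sparsity):
    """Greedy grouping of structurally independent columns.

    Two columns may share a group when their nonzero patterns touch
    disjoint sets of rows; this is a greedy graph coloring of the
    column-intersection graph.
    """
    n = sparsity.shape[1]
    groups = []  # list of (column indices, union of their row patterns)
    for j in range(n):
        rows = sparsity[:, j]
        for cols, used in groups:
            if not np.any(used & rows):   # no shared nonzero row
                cols.append(j)
                used |= rows
                break
        else:
            groups.append(([j], rows.copy()))
    return [cols for cols, _ in groups]

def estimate_jacobian(f, x, sparsity, h=1e-6):
    """Estimate J with one forward difference per column group."""
    fx = f(x)
    J = np.zeros(sparsity.shape)
    for cols in color_columns(sparsity):
        d = np.zeros_like(x)
        d[cols] = h                       # perturb the whole group at once
        df = (f(x + d) - fx) / h
        for j in cols:
            rows = sparsity[:, j]
            J[rows, j] = df[rows]         # disjoint rows => unambiguous
    return J
```

Two columns land in the same group exactly when their nonzero patterns hit disjoint rows, so one extra function evaluation recovers every entry of the group.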

3.
Regularization tools for training large feed-forward neural networks using automatic differentiation

Optimization Methods and Software,
Volume 10,
Issue 1,
1998,
Pages 49-69
Jerry Eriksson,
Mårten Gulliksson,
Per Lindström,
Per-Åke Wedin
Abstract:
We describe regularization tools for training large-scale artificial feed-forward neural networks. We propose algorithms that explicitly use a sequence of Tikhonov-regularized nonlinear least-squares problems. For large-scale problems, new special-purpose automatic differentiation is used within a conjugate gradient method to compute a truncated Gauss-Newton search direction. The algorithms developed exploit the structure of the problem in different ways and perform much better than a Polak-Ribière-based method. All algorithms are tested using the benchmark problems and guidelines of Lutz Prechelt's Proben1 package. All software is written in MATLAB and gathered in a toolbox.
ISSN: 1055-6788
DOI: 10.1080/10556789808805701
Publisher: Gordon and Breach Science Publishers
Year: 1998
Source: Taylor
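Each Tikhonov-regularized nonlinear least-squares subproblem the abstract refers to reduces, per Gauss-Newton iteration, to one damped linear least-squares solve. A toy sketch under stated assumptions (fixed regularization parameter, dense solve, no line search; the names are hypothetical):

```python
import numpy as np

def tikhonov_gn_step(J, r, lam):
    """One Tikhonov-regularized Gauss-Newton step.

    Minimizes ||J p + r||^2 + lam * ||p||^2 over p, solved as the
    stacked linear least-squares problem [J; sqrt(lam) I] p = [-r; 0].
    """
    n = J.shape[1]
    A = np.vstack([J, np.sqrt(lam) * np.eye(n)])
    b = np.concatenate([-r, np.zeros(n)])
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

def fit(residual, jacobian, x0, lam=1e-3, iters=50):
    """Plain damped Gauss-Newton loop on a residual function."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x + tikhonov_gn_step(jacobian(x), residual(x), lam)
    return x
```

Per the abstract, the large-scale variant instead applies a conjugate gradient method to this system, yielding a truncated Gauss-Newton direction without forming or factoring the stacked matrix.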

4.
A new nonlinear ABS-type algorithm and its efficiency analysis

Optimization Methods and Software,
Volume 10,
Issue 1,
1998,
Pages 71-85
N. Deng,
Z. Chen
Abstract:
Continuing the work of [4] and [5], a new ABS-type algorithm for nonlinear systems of equations is proposed. A major iteration of this algorithm requires n component evaluations and only one gradient evaluation. We prove that the algorithm is superlinearly convergent with R-order at least τ_n, where τ_n is the unique positive root of τ^n - τ^(n-1) - 1 = 0. It is shown that the new algorithm is usually more efficient than the methods of Newton, Brown and Brent, and the ABS-type algorithms in [1], [4] and [5], in the sense of some standard efficiency measures.
ISSN: 1055-6788
DOI: 10.1080/10556789808805702
Publisher: Gordon and Breach Science Publishers
Year: 1998
Source: Taylor
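The R-order bound τ_n quoted in the abstract is easy to evaluate numerically: g(τ) = τ^n - τ^(n-1) - 1 satisfies g(1) = -1 < 0 and g(2) = 2^(n-1) - 1 ≥ 0, so bisection on [1, 2] finds the unique positive root. A standalone numeric illustration, not the paper's code:

```python
def efficiency_root(n, tol=1e-12):
    """Unique positive root of tau**n - tau**(n-1) - 1 = 0 by bisection.

    g(1) = -1 < 0 and g(2) = 2**(n-1) - 1 >= 0, so the root lies in
    [1, 2]; for n = 2 it is the golden ratio, and it decreases toward
    1 as n grows.
    """
    g = lambda t: t ** n - t ** (n - 1) - 1.0
    lo, hi = 1.0, 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) <= 0.0:
            lo = mid      # root is to the right of mid
        else:
            hi = mid      # root is to the left of mid
    return 0.5 * (lo + hi)
```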

5.
A multiplier adjustment technique for the capacitated concentrator location problem

Optimization Methods and Software,
Volume 10,
Issue 1,
1998,
Pages 87-102
M. Celani,
R. Cerulli,
M. Gaudioso,
Ya. D. Sergeyev
Abstract:
We describe a new dual descent method for a pure 0-1 location problem known as the capacitated concentrator location problem.
ISSN: 1055-6788
DOI: 10.1080/10556789808805703
Publisher: Gordon and Breach Science Publishers
Year: 1998
Source: Taylor

6.
Book Review

Optimization Methods and Software,
Volume 10,
Issue 1,
1998,
Pages 103-105
Michael Doumpos
ISSN: 1055-6788
DOI: 10.1080/10556789808805704
Publisher: Gordon and Breach Science Publishers
Year: 1998
Source: Taylor

7.
Editorial board

Optimization Methods and Software,
Volume 10,
Issue 1,
1998
ISSN: 1055-6788
DOI: 10.1080/10556789808805698
Publisher: Gordon and Breach Science Publishers
Year: 1998
Source: Taylor