Your comments on our work are very welcome. These files may
differ slightly from the published versions because of
formatting issues and error fixes made after publication.
-
H.-Z. Lin, C.-H. Liu, and C.-J. Lin.
Exploring space efficiency in a tree-based linear model for
extreme multi-label classification.
Proceedings of the
Conference on Empirical Methods in Natural Language Processing
(EMNLP), 2024
(supplementary materials included in the paper pdf file; experimental code).
-
S.-W. Chen and C.-J. Lin.
One-class matrix factorization:
point-wise regression-based or pair-wise ranking-based?
Proceedings of the
18th ACM Recommender Systems Conference, 2024.
(supplementary materials included in the paper pdf file; experimental code).
-
Y.-H. Fang, H.-Z. Lin, J.-J. Liu, and C.-J. Lin.
A step-by-step introduction to the implementation of automatic differentiation. Technical report, 2024.
(slides and code).
-
Y.-J. Lin and C.-J. Lin.
On the thresholding strategy for infrequent labels in multi-label classification.
Proceedings of the
32nd ACM International Conference on Information and Knowledge Management, 2023.
(supplementary materials included in the paper pdf file; experimental code).
-
Y.-C. Lin, S.-A. Chen, J.-J. Liu, and C.-J. Lin.
Linear classifier: an often-forgotten baseline for text classification.
Proceedings of the 61st Annual Meeting of the
Association of Computational Linguistics (ACL), 2023 (short paper).
(Outstanding paper award.) Code is available here.
-
Z. Que and C.-J. Lin.
One-class SVM probabilistic outputs.
To appear in IEEE Transactions on Neural Networks and Learning Systems, 2024.
(experimental code).
Implementation available in
LIBSVM (after version 3.3).
- Y. Liu, J.-N. Yen, B. Yuan, R. Shi, P. Yan, and C.-J. Lin.
Practical counterfactual policy learning for top-K recommendations.
ACM KDD 2022.
Supplementary materials and experimental code are available
here.
- S.-A. Chen, J.-J. Liu, T.-H. Yang, H.-T. Lin, and C.-J. Lin.
Even the simplest baseline needs careful re-investigation: a case study on XML-CNN.
NAACL 2022.
(supplementary materials included in the paper pdf file; experimental code; data).
- L.-C. Lin, C.-H. Liu, C.-M. Chen, K.-C. Hsu, I.-F. Wu, M.-F. Tsai, and C.-J. Lin.
On the use of unrealistic predictions in hundreds of papers evaluating graph representations.
AAAI 2022.
Supplementary materials and experimental code are available
here.
- J.-N. Yen and C.-J. Lin.
Limited-memory Common-directions Method With Subsampled Newton Directions for Large-scale Linear Classification.
ICDM 2021.
Supplementary materials and experimental code are available
here.
-
B. Yuan, Y.-S. Li, P. Quan and C.-J. Lin.
Efficient optimization methods for extreme similarity learning with nonlinear embeddings.
ACM KDD 2021.
Supplementary materials and experimental code are available
here. Note that the Netflix data
are not available, as we are not allowed to redistribute the set.
-
J.-J. Liu, T.-H. Yang, S.-A. Chen and C.-J. Lin.
Parameter selection: why we should pay more attention to it.
Proceedings of the 59th Annual Meeting of the
Association of Computational Linguistics (ACL), 2021 (short paper).
Supplementary materials and experimental code are available
here.
-
G. Galvan, M. Lapucci, C.-J. Lin and M. Sciandrone.
A two-level decomposition framework exploiting first and second order information for SVM training problems.
Journal of Machine Learning Research, 22(23):1-38, 2021.
-
C.-P. Lee, P.-W. Wang
and C.-J. Lin.
Limited-memory common-directions method for large-scale
optimization: convergence, parallelization, and distributed
optimization.
Mathematical Programming Computation,
14:543-591 (2022).
Code for experiments.
Implementations available in Distributed LIBLINEAR.
-
B. Yuan, Y. Liu, J.-Y. Hsia, Z. Dong, and C.-J. Lin.
Unbiased Ad click prediction for position-aware advertising systems.
ACM Recommender Systems, 2020.
Supplementary materials are at the end of the paper file
and experimental code is available.
-
L. Galli and C.-J. Lin.
A study on truncated Newton methods for linear classification.
IEEE Transactions on Neural Networks and Learning Systems, 33:2828-2841, 2022.
(supplementary materials and code for experiments.)
Implementation available in
LIBLINEAR (after version 2.40).
-
H.-Y. Chou, P.-Y. Lin, and C.-J. Lin.
Dual coordinate-descent methods for linear one-class SVM and SVDD.
SIAM International Conference on Data Mining, 2020.
(Supplementary materials and code for experiments).
Implementation available in
LIBLINEAR (after version 2.40).
-
J.-Y. Hsia and C.-J. Lin.
Parameter selection for linear support vector regression. IEEE Transactions on Neural Networks and Learning Systems, 31:5639-5644 (2020).
(details and supplementary materials/exp code).
Implementation available in
LIBLINEAR (after version 2.30).
-
C.-C. Wang, K. L. Tan, and C.-J. Lin.
Newton methods for convolutional neural networks.
ACM Transactions on Intelligent
Systems and Technology, 11:19:1--19:30, 2020.
(supplementary materials,
code).
-
C.-C. Chiu, P.-Y. Lin, and C.-J. Lin.
Two-variable dual coordinate descent methods for linear SVM with/without the bias term.
SIAM International Conference on Data Mining, 2020.
(Supplementary materials and code for experiments).
-
B. Yuan, J.-Y. Hsia, M.-Y. Yang, H. Zhu, C. Chang, Z. Dong,
and C.-J. Lin.
Improving Ad click prediction by considering non-displayed events.
ACM International Conference on Information and Knowledge Management (CIKM) 2019.
Supplementary materials are at the end of the paper file,
and experimental code is available.
-
B. Yuan, M.-Y. Yang, J.-Y. Hsia, H. Zhu, Z. Liu, Z. Dong,
and C.-J. Lin.
One-class field-aware factorization machines for recommender systems with implicit feedback. Technical report, 2019.
Supplementary materials are at the end of the paper file,
and experimental code is available.
-
C.-Y. Hsia, W.-L. Chiang, and C.-J. Lin.
Preconditioned conjugate gradient methods in truncated Newton frameworks for large-scale linear classification.
Asian Conference on Machine Learning (ACML), 2018 (best paper award).
Implementation available in
LIBLINEAR (after version 2.20).
Supplementary materials
and code for experiments. The proof of the main theorem has been updated since the paper's publication.
-
W.-L. Chiang, Y.-S. Li, C.-P. Lee, and C.-J. Lin.
Limited-memory common-directions method for distributed L1-regularized linear classification.
SIAM International Conference on Data Mining, 2018.
Supplementary materials and code for experiments.
Implementations available in Distributed LIBLINEAR.
-
C.-C. Wang, K. L. Tan, C.-T. Chen, Y.-H. Lin, S. S. Keerthi, D. Mahajan, S. Sundararajan, and C.-J. Lin.
Distributed Newton methods for deep neural networks.
Neural Computation,
30:1673-1724, (2018).
(Supplement and code for the paper's experiments).
-
Y. Zhuang, Y.-C. Juan, G.-X. Yuan, and C.-J. Lin.
Naive parallelization of coordinate descent methods and an application on multi-core L1-regularized classification.
ACM International Conference on Information and Knowledge Management (CIKM) 2018 (Supplementary materials,
code for paper's experiments).
-
C.-Y. Hsia, Y. Zhu, and C.-J. Lin.
A study on trust region update rules in Newton methods for large-scale linear classification.
Asian Conference on Machine Learning (ACML), 2017.
Implementation available in
LIBLINEAR (after version 2.11).
Supplementary materials and experimental code.
-
H.-F. Yu, H.-Y. Huang, I. S. Dhillon, and C.-J. Lin.
A unified algorithm for one-class structured matrix factorization with side information.
AAAI 2017. Supplementary materials are at the end of the paper file,
and experimental code is available.
-
C.-P. Lee, P.-W. Wang, W. Chen, and C.-J. Lin.
Limited-memory common-directions method for
distributed optimization and its application on
empirical risk minimization.
SIAM International Conference on Data Mining, 2017.
Supplementary materials and experimental code.
Implementations available in Distributed LIBLINEAR.
-
W.-S. Chin, B.-W. Yuan, M.-Y. Yang, and
C.-J. Lin.
An efficient alternating Newton method for learning factorization machines.
ACM Transactions on
Intelligent Systems and Technology,
9:72:1-72:31, 2018.
Software package, supplementary materials, and experimental code.
-
W.-S. Chin, B.-W. Yuan, M.-Y. Yang, Y. Zhuang, Y.-C. Juan, and
C.-J. Lin.
LIBMF: A library for parallel matrix factorization in shared-memory systems.
Journal
of Machine Learning Research,
17(86):1-5, (2016).
Supplementary materials.
-
Y.-C. Juan, W.-S. Chin, Y. Zhuang, and C.-J. Lin.
Field-aware factorization machines for CTR prediction,
ACM Recommender Systems, 2016.
Due to the change of some data settings, experimental results have been updated
here and are different from those in the published proceedings.
Implementation available in LIBFFM package.
Experimental code.
-
W.-L. Chiang, M.-C. Lee, and C.-J. Lin.
Parallel dual coordinate descent method for large-scale linear classification in multi-core environments,
ACM KDD 2016
(Implementation and supplementary materials available in
Multi-core LIBLINEAR).
-
P.-W. Wang, C.-P. Lee, and C.-J. Lin.
The common-directions method for regularized empirical risk minimization.
Journal
of Machine Learning Research,
20(58):1-49, 2019.
(Supplementary materials and experimental code).
-
H.-F. Yu, M. Bilenko, and C.-J. Lin.
Selection of negative samples for one-class matrix factorization.
SIAM International Conference on Data Mining, 2017.
(supplementary materials included in the paper pdf file; experimental code).
-
H.-Y. Huang and C.-J. Lin.
Linear and kernel classification: when to use which?
SIAM International Conference on Data Mining, 2016.
(supplementary materials and experimental code).
-
M.-C. Lee, W.-L. Chiang, and C.-J. Lin.
Fast matrix-vector multiplications for large-scale logistic regression on shared-memory systems,
ICDM 2015.
(Supplementary materials,
Implementation available in
Multi-core LIBLINEAR).
-
B.-Y. Chu, C.-H. Ho, C.-H. Tsai, C.-Y. Lin, and C.-J. Lin.
Warm start for parameter selection of linear classifiers,
ACM KDD 2015.
(Implementation available in
LIBLINEAR; see details and supplementary materials).
-
W.-S. Chin, Y. Zhuang, Y.-C. Juan, and C.-J. Lin.
A learning-rate schedule for stochastic
gradient methods to matrix factorization, PAKDD, 2015.
-
Y. Zhuang, W.-S. Chin, Y.-C. Juan, and C.-J. Lin. Distributed Newton method for regularized logistic regression, PAKDD 2015.
Implementations available in Distributed LIBLINEAR.
-
C.-C. Wang, C.-H. Huang, and
C.-J. Lin.
Subsampled Hessian Newton methods for supervised learning.
Neural Computation, 27(2015), 1766-1795.
(supplementary materials,
code for experiments).
-
C.-Y. Lin, C.-H. Tsai, C.-P. Lee, and
C.-J. Lin.
Large-scale logistic regression and linear support vector machines using Spark.
IEEE International Conference on Big Data, 2014.
(see Distributed LIBLINEAR
and supplementary materials).
-
C.-H. Tsai, C.-Y. Lin, and
C.-J. Lin.
Incremental and decremental training for linear classification.
ACM KDD 2014.
(see Extension of LIBLINEAR,
supplementary materials, and experimental code).
-
T.-M. Kuo, C.-P. Lee and
C.-J. Lin.
Large-scale Kernel RankSVM.
SIAM International Conference on Data Mining, 2014.
(supplementary materials,
code
for experiments in the paper).
-
W.-S. Chin, Y. Zhuang, Y.-C. Juan, and C.-J. Lin.
A Fast Parallel Stochastic Gradient Method for Matrix Factorization in Shared Memory Systems.
ACM Transactions on Intelligent
Systems and Technology, 6:2:1--2:24, 2015.
(Implementation available in
LIBMF, code for experiments in the paper).
A preliminary version appeared at
Proceedings of the ACM Recommender Systems, 2013
and received
best paper award (talk slides).
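As an aside for readers new to this line of work, the stochastic-gradient updates that LIBMF parallelizes can be sketched in a few lines of NumPy. This is a single-threaded toy with arbitrary parameter choices, not the library's own code; LIBMF's contribution is a fast lock-free parallelization of this kind of update.

```python
import numpy as np

def sgd_mf(ratings, n_users, n_items, k=8, lr=0.05, reg=0.02,
           epochs=200, seed=0):
    """Factorize observed ratings (u, i, r) as R ~ P @ Q.T by plain SGD."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))
    Q = 0.1 * rng.standard_normal((n_items, k))
    for _ in range(epochs):
        for u, i, r in ratings:
            e = r - P[u] @ Q[i]                  # prediction error
            pu = P[u].copy()                     # update both factors from
            P[u] += lr * (e * Q[i] - reg * pu)   # the same old values
            Q[i] += lr * (e * pu - reg * Q[i])
    return P, Q

# toy fully observed 2x2 rating matrix
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 1, 2.0)]
P, Q = sgd_mf(ratings, n_users=2, n_items=2)
```

After a few hundred epochs the reconstruction P @ Q.T closely matches the observed entries.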
-
W.-C. Chang, C.-P. Lee, and C.-J. Lin.
A revisit to support vector data description (SVDD).
Technical report, 2013.
-
P.-W. Wang and
C.-J. Lin.
Iteration Complexity of Feasible Descent Methods for Convex Optimization.
Journal
of Machine Learning Research,
15(2014), 1523-1548.
-
C.-P. Lee and
C.-J. Lin.
Large-scale Linear RankSVM.
Neural Computation, 26(2014), 781-817.
(supplementary materials,
LIBLINEAR extension for ranking code,
code
for experiments in the paper).
-
H.-F. Yu, C.-H. Ho, Y.-C. Juan, and
C.-J. Lin.
LibShortText: a library for short-text classification and analysis.
Technical report, 2013.
LibShortText software.
-
C.-P. Lee and
C.-J. Lin.
A Study on L2-Loss (Squared Hinge-Loss) Multi-Class SVM.
Neural Computation, 25(2013), 1302-1323. (code).
-
C.-H. Ho and
C.-J. Lin.
Large-scale Linear Support Vector Regression.
Journal
of Machine Learning Research,
13(2012), 3323-3348.
(Implementation available in
LIBLINEAR, code for experiments in the paper).
-
G.-X. Yuan,
C.-H. Ho, and
C.-J. Lin.
Recent Advances of Large-scale Linear Classification.
Proceedings of the IEEE,
100(2012), 2584-2603.
This is a survey paper and we plan to keep updating it.
Your comments are very welcome.
-
C.-C. Chang and
C.-J. Lin.
LIBSVM: a library for support vector machines.
ACM Transactions on Intelligent
Systems and Technology, 2:27:1--27:27, 2011. This
LIBSVM implementation document was created in 2001
and since then has been actively maintained/updated.
pdf,
ps.gz,
ACM Digital lib,
LIBSVM page
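A quick way to try LIBSVM from Python is through scikit-learn, whose SVC class is built on LIBSVM. This is a minimal sketch on arbitrary synthetic data, not an official example from the LIBSVM page.

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# SVC wraps LIBSVM internally, so this trains an RBF-kernel C-SVC
# through the library described above (toy data, arbitrary settings).
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
print(clf.score(X, y))  # training accuracy
```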
-
G.-X. Yuan,
C.-H. Ho, and
C.-J. Lin.
An Improved GLMNET for L1-regularized Logistic Regression and Support
Vector Machines.
Journal
of Machine Learning Research,
13(2012), 1999-2030.
A short version appears at ACM KDD 2011.
(Implementation available in
LIBLINEAR,
supplementary materials, code for the paper).
-
C.-H. Ho, M.-H. Tsai, and
C.-J. Lin.
Active Learning and Experimental Design with SVMs.
JMLR Workshop and Conference Proceedings: Workshop on Active Learning and Experimental Design
16(2011), 71-84. Code.
-
H.-F. Yu,
C.-J. Hsieh,
K.-W. Chang,
and
C.-J. Lin.
Large linear classification when data cannot fit in memory.
ACM Transactions on Knowledge Discovery from Data, 5:23:1--23:23, 2012.
A preliminary version appeared at ACM KDD 2010 and received
best research paper award.
Code. Slides. Discussion and FAQ. Video.
ACM Digital lib.
-
H.-F. Yu,
F.-L. Huang, and
C.-J. Lin.
Dual coordinate descent methods for logistic regression and
maximum entropy models.
Machine Learning, 85(2011), 41-75.
(code)
-
R. C. Weng
and C.-J. Lin.
A Bayesian approximation method for online ranking.
Journal
of Machine Learning Research,
12(2011), 267-300.
Code.
-
G.-X. Yuan,
K.-W. Chang, C.-J. Hsieh, and
C.-J. Lin.
A comparison of optimization methods
and software
for large-scale L1-regularized linear classification.
Journal
of Machine Learning Research, 11(2010), 3183-3234.
(supplementary materials, code).
-
Y.-W. Chang, C.-J. Hsieh,
K.-W. Chang, M. Ringgaard, and
C.-J. Lin.
Training and Testing Low-degree Polynomial Data Mappings via Linear SVM.
Journal
of Machine Learning Research, 11(2010), 1471-1490.
(Extension of LIBLINEAR,
code for experiments in the paper).
-
F.-L. Huang, C.-J. Hsieh,
K.-W. Chang, and
C.-J. Lin.
Iterative scaling and coordinate descent methods for maximum entropy models,
Journal
of Machine Learning Research
11(2010), 815-848.
A brief version appears
at ACL 2009 (short paper).
(code for
experiments in the paper).
-
R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and
C.-J. Lin.
LIBLINEAR: A library for large linear classification.
Journal
of Machine Learning Research
9(2008), 1871-1874.
Note that we include
some implementation details in the appendices
of this paper. (LIBLINEAR page)
-
Y.-W. Chang and
C.-J. Lin.
Feature Ranking Using Linear SVM.
JMLR Workshop and Conference Proceedings: Causation and Prediction Challenge (WCCI 2008)
3(2008), 53-64.
Code.
-
W.-Y. Chen, Y. Song, H. Bai,
C.-J. Lin,
and E. Y. Chang.
Parallel Spectral Clustering in Distributed Systems.
IEEE Transactions on Pattern Analysis and Machine Intelligence, (33)2011, 568-586.
A short version appears at ECML/PKDD 2008.
Code.
-
S. S. Keerthi,
S. Sundararajan,
K.-W. Chang,
C.-J. Hsieh,
and
C.-J. Lin.
A sequential dual method for large scale multi-class linear SVMs.
ACM KDD 2008.
-
C.-J. Hsieh,
K.-W. Chang,
C.-J. Lin,
S. S. Keerthi,
and
S. Sundararajan.
A Dual Coordinate Descent Method For Large-Scale Linear
SVM.
ICML 2008.
(Implementation available in
LIBLINEAR,
code for experiments in the paper).
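The dual coordinate descent update at the heart of this paper is simple enough to sketch directly. Below is a plain NumPy toy for the L1-loss (hinge) dual on arbitrary made-up data; the LIBLINEAR implementation adds shrinking, random permutations per pass, and careful stopping rules.

```python
import numpy as np

def dcd_linear_svm(X, y, C=1.0, epochs=20, seed=0):
    """Dual coordinate descent for the L1-loss linear SVM.

    Solves min_a 0.5*a^T Q a - e^T a subject to 0 <= a_i <= C, where
    Q_ij = y_i y_j x_i . x_j, while maintaining w = sum_i a_i y_i x_i.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    qii = np.einsum("ij,ij->i", X, X)      # diagonal of Q
    for _ in range(epochs):
        for i in rng.permutation(n):       # random order helps convergence
            G = y[i] * (w @ X[i]) - 1.0    # partial gradient w.r.t. alpha_i
            a_new = min(max(alpha[i] - G / qii[i], 0.0), C)
            w += (a_new - alpha[i]) * y[i] * X[i]
            alpha[i] = a_new
    return w

# toy linearly separable data with labels in {-1, +1}
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = dcd_linear_svm(X, y)
```

On separable data like this, a few passes suffice for w to classify all points correctly.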
-
K.-W. Chang,
C.-J. Hsieh, and
C.-J. Lin.
Coordinate Descent Method for Large-scale L2-loss Linear SVM.
Journal
of Machine Learning Research
9(2008), 1369-1398.
(code)
-
R.-E. Fan and C.-J. Lin.
A Study on Threshold Selection for Multi-label Classification, 2007.
-
C.-J. Lin,
R. C. Weng,
and
S. S. Keerthi.
Trust region Newton method for large-scale logistic
regression.
Journal
of Machine Learning Research
9(2008), 627-650.
A short version appears
in ICML 2007.
Software available at
liblinear.
-
L. Bottou and
C.-J. Lin.
Support Vector Machine Solvers.
In Large Scale Kernel Machines, L. Bottou, O. Chapelle, D. DeCoste, and J. Weston editors, 1-28, MIT Press, Cambridge, MA., 2007.
-
T.-K. Huang,
C.-J. Lin,
and
R. C. Weng.
Ranking Individuals by group comparisons.
Journal
of Machine Learning Research,
9(2008), 2187-2216.
A short version appears in ICML 2006.
Download data used in this work.
-
C.-J. Lin.
On the Convergence of
Multiplicative Update Algorithms for
Non-negative Matrix Factorization.
IEEE Transactions on Neural Networks, 18(2007), 1589-1596.
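The updates whose convergence this paper studies are the Lee-Seung multiplicative rules for the Frobenius-norm objective. A minimal NumPy sketch follows; the small eps here is only a division guard and the parameter choices are arbitrary, while the paper analyzes when such updates converge and proposes a modification with a convergence guarantee.

```python
import numpy as np

def nmf_multiplicative(V, k, iters=5000, eps=1e-12, seed=0):
    """Lee-Seung multiplicative updates for V ~ W @ H (Frobenius loss)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 0.1     # positive initialization
    H = rng.random((k, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ (H @ H.T) + eps)
    return W, H

# an exactly rank-2 nonnegative matrix, so a near-exact factorization exists
V = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 5.0]])
W, H = nmf_multiplicative(V, k=2)
```

Because the updates only multiply by nonnegative ratios, W and H stay nonnegative throughout.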
-
C.-J. Lin.
Projected
gradient methods for non-negative matrix factorization.
Neural Computation, 19(2007), 2756-2779.
-
R.-E. Fan, P.-H. Chen, and C.-J. Lin.
Working Set Selection Using Second Order Information for Training SVM.
Journal
of Machine Learning Research, 6(2005), 1889-1918.
-
P.-H. Chen, R.-E. Fan, and C.-J. Lin.
A Study on SMO-type Decomposition Methods for Support Vector Machines.
IEEE Transactions on Neural Networks, 17(2006), 893-908.
-
Y.-W. Chen and C.-J. Lin,
Combining SVMs with various feature selection strategies.
In the book
"Feature Extraction, Foundations and Applications," Springer, 2006.
-
T.-K. Huang,
R. C. Weng,
and
C.-J. Lin.
Generalized Bradley-Terry Models and Multi-class Probability Estimates.
Journal
of Machine Learning Research, 7(2006), 85-115.
A (very) short version of this paper appears in
NIPS 2004.
-
T.-F. Wu,
C.-J. Lin, and
R. C. Weng.
Probability Estimates for Multi-class Classification by Pairwise Coupling.
Journal
of Machine Learning Research, 5(2004), 975-1005.
A short version appears in
NIPS 2003.
-
C.-W. Hsu, C.-C. Chang, and
C.-J. Lin.
A practical guide to support vector classification.
Technical report, Department of Computer
Science, National Taiwan University.
July, 2003.
-
M.-W. Chang and C.-J. Lin.
Leave-one-out Bounds for Support Vector
Regression Model Selection.
Neural Computation, 17(2005), 1188-1222.
-
H.-T. Lin,
C.-J. Lin, and
R. C. Weng.
A note on Platt's probabilistic outputs for support vector machines.
Machine Learning, 68(2007), 267-276.
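The sigmoid fit that this note makes numerically robust maps a decision value f to P(y=1|f) = 1/(1+exp(A*f+B)), with A and B chosen to minimize a cross-entropy objective over Platt's smoothed targets. The sketch below uses plain gradient descent on toy values; the note itself derives a safeguarded Newton method with a numerically stable implementation.

```python
import numpy as np

def fit_platt_sigmoid(scores, labels, lr=0.05, iters=2000):
    """Fit P(y=1 | f) = 1 / (1 + exp(A*f + B)) to decision values f."""
    scores = np.asarray(scores, dtype=float)
    n_pos = np.sum(labels == 1)
    n_neg = len(labels) - n_pos
    # Platt's smoothed target probabilities instead of hard 0/1 labels
    t = np.where(labels == 1,
                 (n_pos + 1.0) / (n_pos + 2.0),
                 1.0 / (n_neg + 2.0))
    A, B = 0.0, 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(A * scores + B))  # model P(y=1|f)
        gz = t - p   # gradient of the negative log-likelihood w.r.t. A*f+B
        A -= lr * np.dot(gz, scores)
        B -= lr * np.sum(gz)
    return A, B

# toy decision values: positives score high, negatives score low
scores = np.array([2.0, 1.5, 1.0, -1.0, -1.5, -2.0])
labels = np.array([1, 1, 1, -1, -1, -1])
A, B = fit_platt_sigmoid(scores, labels)
```

A comes out negative here, so larger decision values map to larger probabilities of the positive class.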
-
P.-H. Chen,
C.-J. Lin,
and
B. Schölkopf.
A tutorial on nu-support vector machines.
Applied Stochastic Models in Business and Industry, 21(2005), 111-136.
-
H.-T. Lin
and
C.-J. Lin.
A study on sigmoid kernels for SVM and the training
of non-PSD kernels by
SMO-type methods.
March 2003.
-
W.-C. Kao,
K.-M. Chung,
T. Sun,
and
C.-J. Lin.
Decomposition Methods for Linear Support Vector Machines.
Neural Computation,
16(2004), 1689-1704.
-
B.-J. Chen, M.-W. Chang, and
C.-J. Lin.
Load Forecasting Using Support Vector Machines:
A Study on EUNITE Competition 2001.
IEEE Transactions on Power Systems,
19(2004), 1821-1830.
-
K.-M. Chung, W.-C. Kao,
C.-L. Sun, L.-L. Wang,
and
C.-J. Lin.
Radius Margin Bounds for Support Vector Machines with the RBF Kernel.
Neural Computation,
15(2003), 2643-2681.
-
S. S. Keerthi
and
C.-J. Lin.
Asymptotic behaviors of support vector machines with
Gaussian kernel.
Neural Computation, 15(2003), 1667-1689.
-
M.-W. Chang, C.-J. Lin, and
R. C. Weng.
Analysis of nonstationary time series using
support vector machines.
April 2002.
-
K.-M. Lin and C.-J. Lin.
A study on reduced support vector machines.
IEEE Transactions on Neural Networks, 14(2003), 1449-1559.
-
M.-W. Chang, C.-J. Lin, and
R. C. Weng.
Analysis of switching dynamics with
competing support vector machines.
Proceedings of IJCNN, May 2002.
An extended version, available
here, appears in
IEEE Transactions on Neural Networks, 15(2004), 720-727.
-
M.-W. Chang, B.-J. Chen and C.-J. Lin.
EUNITE Network Competition:
Electricity Load Forecasting, November 2001.
Winner of the
EUNITE
worldwide competition on electricity load prediction.
-
C.-J. Lin.
Linear convergence of a
decomposition method for support
vector machines, November 2001.
-
C.-J. Lin.
Asymptotic convergence of an SMO algorithm without any assumptions.
IEEE Transactions on Neural Networks 13(2002), 248-250.
-
C.-C. Chang and C.-J. Lin.
IJCNN 2001 Challenge: Generalization Ability and
Text Decoding,
Proceedings of IJCNN, July 2001. Winner of
IJCNN Challenge.
-
C.-J. Lin.
A Formal Analysis of Stopping Criteria of
Decomposition Methods for Support
Vector Machines,
IEEE Transactions on Neural Networks, 13(2002), 1045-1052.
-
C.-W. Hsu and C.-J. Lin.
A comparison of methods
for multi-class support vector machines,
IEEE Transactions on Neural Networks, 13(2002), 415-425.
-
C.-C. Chang and C.-J. Lin.
Training
nu-support vector regression:
theory and algorithms,
Neural Computation, 14(2002), 1959-1977.
Implementation available in
libsvm.
-
S.-P. Liao, H.-T. Lin, and
C.-J. Lin.
A note on
the decomposition methods
for support vector regression.
Neural Computation, 14(2002), 1267-1281.
-
J.-H. Lee and
C.-J. Lin.
Automatic model selection for support vector machines, November 2000.
Implementation available in
looms.
-
C.-J. Lin.
On the convergence
of the decomposition method for
support vector machines,
IEEE Transactions on Neural Networks 12(2001), 1288-1298.
-
C.-W. Hsu and C.-J. Lin.
A simple decomposition method for
support vector machines,
Machine Learning 46(2002), 291-314.
Implementation available in
bsvm.
-
C.-C. Chang and C.-J. Lin.
Training
nu-Support Vector Classifiers:
Theory and Algorithms,
Neural
Computation 13(9), 2001, 2119-2147.
Implementation available in
libsvm.
-
C.-J. Lin.
Formulations of support vector machines: a note from an optimization point of view.
Neural Computation 13(2) 2001, 307-317.
- C.-C. Chang, C.-W. Hsu, and
C.-J. Lin.
The analysis of decomposition methods for support vector machines.
In
Proceedings of the
Workshop on Support Vector Machines,
Sixteenth International Joint Conference on
Artificial Intelligence
(IJCAI 99).
An extended version appears in
IEEE Transactions on Neural Networks, 11(2000), 1003-1008.
- C.-J. Lin and R.
Saigal. An incomplete Cholesky factorization for
dense matrices.
BIT,
40(2000), 536-558.
- C.-J. Lin and J. J. Moré.
Newton's method for large bound-constrained optimization problems.
SIAM Journal on
Optimization, 9(1999), 1100-1127.
(code)
-
S.-Y. Wu,
S.-C. Fang,
and C.-J. Lin.
Solving the General Capacity
Problem.
Annals of Operations
Research,
103(2001), 193-211.
-
S.-C. Fang, C.-J. Lin, and S.-Y. Wu.
Relaxations of the cutting plane method for
quadratic semi-infinite programming.
Journal of Computational
and Applied Mathematics, 129(2001), 89-104.
- C.-J. Lin and J. J. Moré. Incomplete
Cholesky Factorizations with Limited Memory.
SIAM Journal on
Scientific Computing, 21(1999), 24-45.
- C.-J. Lin.
Preconditioning Dense Linear Systems from
Large-Scale Semidefinite Programming Problems.
In Proceedings of the
Fifth Copper Mountain
Conference on Iterative Methods.
1998.
Second prize of the student paper competition.
-
S.-C. Fang, S.-Y. Wu, and C.-J. Lin.
A relaxed cutting plane
method for solving linear semi-infinite
programming problems.
Journal of
Optimization Theory and Applications, 99(1998), 759--779.
-
C.-J. Lin, S.-C. Fang, and S.-Y. Wu. An Unconstrained
Convex Programming Approach for Solving Linear Semi-Infinite Programming
Problems, SIAM Journal on Optimization, 8(1998), 443-456.
Papers before 1998 are not listed.
cjlin@csie.ntu.edu.tw