This page provides the source code for papers related to LIBLINEAR.
L. Galli and C.-J. Lin. A study on truncated Newton methods for linear classification. IEEE Transactions on Neural Networks and Learning Systems, 2021.
Code used for experiments and supplementary materials can be found at this page.
C.-Y. Hsia, W.-L. Chiang, and C.-J. Lin. Preconditioned Conjugate Gradient Methods in Truncated Newton Frameworks for Large-scale Linear Classification. Asian Conference on Machine Learning (ACML), 2018 (best paper award).
Code used for experiments and supplementary materials can be found at this page.
C.-Y. Hsia, Y. Zhu, and C.-J. Lin. A study on trust region update rules in Newton methods for large-scale linear classification. Asian Conference on Machine Learning (ACML), 2017.
Code used for experiments and supplementary materials can be found at this page.
B.-Y. Chu, C.-H. Ho, C.-H. Tsai, C.-Y. Lin, and C.-J. Lin. Warm Start for Parameter Selection of Linear Classifiers. ACM KDD 2015.
Code used for experiments can be found at this page.
C.-P. Lee and C.-J. Lin. Large-scale Linear RankSVM. Technical report, 2013.
Code used for experiments can be found in this tar.gz file.
Use the files here only if you are interested in redoing our experiments. To apply the method to your own applications, all you need is a LIBLINEAR extension; check "Large-scale linear rankSVM" at LIBSVM Tools.
H.-F. Yu, C.-J. Hsieh, K.-W. Chang, and C.-J. Lin. Large linear classification when data cannot fit in memory. ACM KDD 2010 (Best research paper award). Extended version appeared in ACM Transactions on Knowledge Discovery from Data, 5:23:1--23:23, 2012.
The method has been implemented as an extension of LIBLINEAR; it aims to handle data larger than your memory capacity and can be found at LIBSVM Tools.
To repeat the experiments in our paper, check this tgz file. Don't use it unless you want to regenerate the figures. For your own experiments, you should use the LIBLINEAR extension at LIBSVM Tools.
Hsiang-Fu Yu, Fang-Lan Huang, and Chih-Jen Lin. Dual Coordinate Descent Methods for Logistic Regression and Maximum Entropy Models. Machine Learning, 85:41-75, 2011.
Code used for experiments can be found in this zip file.
Guo-Xun Yuan, Kai-Wei Chang, Cho-Jui Hsieh, and Chih-Jen Lin. A Comparison of Optimization Methods for Large-scale L1-regularized Linear Classification. JMLR 2010.
Programs for generating experimental results can be found in this zip file.
Guo-Xun Yuan, Chia-Hua Ho, and Chih-Jen Lin. An Improved GLMNET for L1-regularized Logistic Regression and Support Vector Machines. JMLR, 2012.
Programs for generating experimental results can be found in this zip file.
You can directly use LIBLINEAR for efficient L1-regularized classification. Use the code here only if you are interested in redoing our experiments; the running time is long because each solver is run until the optimization problem is solved accurately.
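For reference, a minimal sketch of L1-regularized training directly with LIBLINEAR's command-line tools (this assumes LIBLINEAR has been compiled and uses the heart_scale example data from its distribution; the solver numbers follow the -s option of the train program):

  # train L1-regularized logistic regression (-s 6); use -s 5 for L1-regularized L2-loss SVC
  ./train -s 6 -c 1 heart_scale heart_scale.model
  # predict on a test file and write the predicted labels to an output file
  ./predict heart_scale heart_scale.model output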
Yin-Wen Chang, Cho-Jui Hsieh, Kai-Wei Chang, Michael Ringgaard, and Chih-Jen Lin. Low-Degree Polynomial Mapping of Data for SVM. JMLR 2010.
Code used for experiments can be found in this zip file.
Use the files here only if you are interested in redoing our experiments. To apply the method to your own applications, all you need is a LIBLINEAR extension; check "fast training/testing of degree-2 polynomial mappings of data" at LIBSVM Tools.
Fang-Lan Huang, Cho-Jui Hsieh, Kai-Wei Chang, and Chih-Jen Lin. Iterative Scaling and Coordinate Descent Methods for Maximum Entropy Models. JMLR 2010.
Code used for experiments can be found in this zip file.
C.-J. Hsieh, K.-W. Chang, C.-J. Lin, S. Sundararajan, and S. Sathiya Keerthi. A Dual Coordinate Descent Method for Large-scale Linear SVM. ICML 2008.
Code used for experiments can be found in this zip file.
K.-W. Chang, C.-J. Hsieh, and C.-J. Lin. Coordinate Descent Method for Large-scale L2-loss Linear SVM. JMLR 2008.
Code used for experiments can be found in this zip file.
C.-J. Lin, R. C. Weng, and S. S. Keerthi. Trust region Newton method for large-scale logistic regression. JMLR 2008.
Code used for experiments can be found in this zip file.
We include LBFGS and a modified version of SVMlin for the experiments. Please check their respective COPYRIGHT notices.