 Kaiwei Chang

Bibliometrics: publication history
Average citations per article: 82.68
Citation count: 1,571
Publication count: 19
Publication years: 2008–2015
Available for download: 13
Average downloads per article: 677.38
Downloads (cumulative): 8,806
Downloads (12 months): 566
Downloads (6 weeks): 74
19 results found (1–19 of 19)

1
January 2015 AAAI'15: Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence
Publisher: AAAI Press
Bibliometrics:
Citation Count: 0

Training a structured prediction model involves performing several loss-augmented inference steps. Over the lifetime of the training, many of these inference problems, although different, share the same solution. We propose AI-DCD, an Amortized Inference framework for Dual Coordinate Descent method, an approximate learning algorithm, that accelerates the training process by ...

2
June 2014 ICML'14: Proceedings of the 31st International Conference on Machine Learning - Volume 32
Publisher: JMLR.org
Bibliometrics:
Citation Count: 0

This paper presents a latent variable structured prediction model for discriminative supervised clustering of items, called the Latent Left-linking Model (L³M). We present an online clustering algorithm for L³M based on a feature-based item similarity function. We provide a learning framework for estimating the similarity function ...

3
September 2013 ECMLPKDD'13: Proceedings of the 2013 European Conference on Machine Learning and Knowledge Discovery in Databases - Volume Part II
Publisher: Springer-Verlag
Bibliometrics:
Citation Count: 0

Many problems in natural language processing and computer vision can be framed as structured prediction problems. Structural support vector machines (SVM) is a popular approach for training structured predictors, where learning is framed as an optimization problem. Most structural SVM solvers alternate between a model update phase and an inference ...

4
September 2013 ECML PKDD 2013: Proceedings, Part II, of the European Conference on Machine Learning and Knowledge Discovery in Databases - Volume 8189
Publisher: Springer-Verlag New York, Inc.
Bibliometrics:
Citation Count: 0

Many problems in natural language processing and computer vision can be framed as structured prediction problems. Structural support vector machines (SVM) is a popular approach for training structured predictors, where learning is framed as an optimization problem. Most structural SVM solvers alternate between a model update phase and an inference ...

5
September 2013 ECMLPKDD'13: Proceedings of the 2013 European Conference on Machine Learning and Knowledge Discovery in Databases - Volume Part III
Publisher: Springer-Verlag
Bibliometrics:
Citation Count: 0

Semi-supervised learning has been widely studied in the literature. However, most previous works assume that the output structure is simple enough to allow the direct use of tractable inference/learning algorithms (e.g., binary label or linear chain). Therefore, these methods cannot be applied to problems with complex structure. In this paper, ...

6
July 2012 CoNLL '12: Joint Conference on EMNLP and CoNLL - Shared Task
Publisher: Association for Computational Linguistics
Bibliometrics:
Citation Count: 2
Downloads (6 Weeks): 3,   Downloads (12 Months): 10,   Downloads (Overall): 71

Full text available: PDF
The CoNLL-2012 shared task is an extension of last year's coreference task. We participated in the closed track of the shared tasks in both years. In this paper, we present the improvements of the Illinois-Coref system from last year. We focus on improving mention detection and pronoun coreference resolution, and ...

7 published by ACM
February 2012 ACM Transactions on Knowledge Discovery from Data (TKDD): Volume 5 Issue 4, February 2012
Publisher: ACM
Bibliometrics:
Citation Count: 13
Downloads (6 Weeks): 3,   Downloads (12 Months): 38,   Downloads (Overall): 1,014

Full text available: PDF
Recent advances in linear classification have shown that for applications such as document classification, the training process can be extremely efficient. However, most of the existing training methods are designed by assuming that data can be stored in the computer memory. These methods cannot be easily applied to data larger ...
Keywords: linear classification, block minimization methods, large-scale learning, support vector machines

8 published by ACM
August 2011 KDD '11: Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining
Publisher: ACM
Bibliometrics:
Citation Count: 11
Downloads (6 Weeks): 4,   Downloads (12 Months): 17,   Downloads (Overall): 299

Full text available: PDF
As the size of data sets used to build classifiers steadily increases, training a linear model efficiently with limited memory becomes essential. Several techniques deal with this problem by loading blocks of data from disk one at a time, but usually take a considerable number of iterations to converge to ...
Keywords: linear classification, large-scale learning, document classification
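The block-loading scheme described in this abstract can be sketched in a few lines of NumPy: keep the weight vector and one block of examples in memory, run a few dual coordinate descent passes on that block, then move on. This is a minimal illustration of the general idea, not the paper's algorithm; the function name and the one-`.npz`-file-per-block layout are invented for the example:

```python
import os
import tempfile
import numpy as np

def block_minimization(block_files, d, C=1.0, outer_iters=10, inner_epochs=3):
    """Sketch of block minimization for linear SVM training when the data
    exceeds memory: only w and one block of examples are in RAM at a time;
    blocks live on disk as .npz files (a hypothetical layout)."""
    w = np.zeros(d)
    # Dual variables are kept per block; a real implementation would
    # persist them on disk alongside each block.
    alphas = {f: None for f in block_files}
    for _ in range(outer_iters):
        for f in block_files:                   # load one block at a time
            blk = np.load(f)
            X, y = blk["X"], blk["y"]
            a = alphas[f] if alphas[f] is not None else np.zeros(len(y))
            Qii = np.einsum("ij,ij->i", X, X)   # diagonal of the dual Hessian
            for _ in range(inner_epochs):       # dual coordinate descent on the block
                for i in range(len(y)):
                    if Qii[i] == 0:
                        continue
                    G = y[i] * (w @ X[i]) - 1.0
                    new_a = min(max(a[i] - G / Qii[i], 0.0), C)
                    w += (new_a - a[i]) * y[i] * X[i]
                    a[i] = new_a
            alphas[f] = a
    return w

# Demo: three disk-resident blocks of a linearly separable problem.
rng = np.random.default_rng(1)
tmpdir = tempfile.mkdtemp()
files, all_X, all_y = [], [], []
for b in range(3):
    Xb = rng.normal(size=(100, 2))
    yb = np.where(Xb[:, 0] - Xb[:, 1] > 0, 1.0, -1.0)
    path = os.path.join(tmpdir, f"block{b}.npz")
    np.savez(path, X=Xb, y=yb)
    files.append(path)
    all_X.append(Xb)
    all_y.append(yb)

w = block_minimization(files, d=2)
accuracy = (np.sign(np.vstack(all_X) @ w) == np.concatenate(all_y)).mean()
```

Because the inner solver warm-starts from each block's previous dual variables, repeated sweeps over the blocks converge toward the same solution as in-memory training, while only one block is ever resident.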

9
July 2011 IJCAI'11: Proceedings of the Twenty-Second international joint conference on Artificial Intelligence - Volume Three
Publisher: AAAI Press
Bibliometrics:
Citation Count: 0

Linear classification is a useful tool for dealing with large-scale data in applications such as document classification and natural language processing. Recent developments of linear classification have shown that the training process can be efficiently conducted. However, when the data size exceeds the memory capacity, most training methods suffer from ...

10
June 2011 CONLL Shared Task '11: Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task
Publisher: Association for Computational Linguistics
Bibliometrics:
Citation Count: 9
Downloads (6 Weeks): 3,   Downloads (12 Months): 12,   Downloads (Overall): 75

Full text available: PDF
This paper presents Illinois-Coref, a system for coreference resolution that participated in the CoNLL-2011 shared task. We investigate two inference methods, Best-Link and All-Link, along with their corresponding pairwise and structured learning protocols. Within these, we provide a flexible architecture for incorporating linguistically motivated constraints, several of which we ...

11
December 2010 The Journal of Machine Learning Research: Volume 11, 3/1/2010
Publisher: JMLR.org
Bibliometrics:
Citation Count: 56
Downloads (6 Weeks): 8,   Downloads (12 Months): 31,   Downloads (Overall): 635

Full text available: PDF
Large-scale linear classification is widely used in many areas. The L1-regularized form can be applied for feature selection; however, its non-differentiability causes more difficulties in training. Although various optimization methods have been proposed in recent years, these have not yet been compared suitably. In this paper, we first broadly review ...

12
August 2010 The Journal of Machine Learning Research: Volume 11, 3/1/2010
Publisher: JMLR.org
Bibliometrics:
Citation Count: 39
Downloads (6 Weeks): 4,   Downloads (12 Months): 37,   Downloads (Overall): 302

Full text available: PDF
Kernel techniques have long been used in SVM to handle linearly inseparable problems by transforming data to a high dimensional space, but training and testing large data sets is often time consuming. In contrast, we can efficiently train and test much larger data sets using linear SVM without kernels. In ...

13 published by ACM
July 2010 KDD '10: Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining
Publisher: ACM
Bibliometrics:
Citation Count: 24
Downloads (6 Weeks): 2,   Downloads (12 Months): 9,   Downloads (Overall): 636

Full text available: PDF
Recent advances in linear classification have shown that for applications such as document classification, the training can be extremely efficient. However, most of the existing training methods are designed by assuming that data can be stored in the computer memory. These methods cannot be easily applied to data larger than ...
Keywords: SVM, block minimization, large scale learning

14
August 2009 ACLShort '09: Proceedings of the ACL-IJCNLP 2009 Conference Short Papers
Publisher: Association for Computational Linguistics
Bibliometrics:
Citation Count: 1
Downloads (6 Weeks): 2,   Downloads (12 Months): 12,   Downloads (Overall): 53

Full text available: PDF
Maximum entropy (Maxent) is useful in many areas. Iterative scaling (IS) methods are one of the most popular approaches to solve Maxent. With many variants of IS methods, it is difficult to understand them and see the differences. In this paper, we create a general and unified framework for IS ...

15
June 2009 KDD-CUP'09: Proceedings of the 2009 International Conference on KDD-Cup 2009 - Volume 7
Publisher: JMLR.org
Bibliometrics:
Citation Count: 0
Downloads (6 Weeks): 1,   Downloads (12 Months): 1,   Downloads (Overall): 1

Full text available: PDF
This paper describes our ensemble of three classifiers for the KDD Cup 2009 challenge. First, we transform the three binary classification tasks into a joint multi-class classification problem, and solve an l1-regularized maximum entropy model under the LIBLINEAR framework. Second, we propose a heterogeneous base learner, which is capable of ...
Keywords: regularized maximum entropy model, AdaBoost, heterogeneous large dataset, selective naïve Bayes

16 published by ACM
August 2008 KDD '08: Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining
Publisher: ACM
Bibliometrics:
Citation Count: 35
Downloads (6 Weeks): 1,   Downloads (12 Months): 27,   Downloads (Overall): 634

Full text available: PDF
Efficient training of direct multi-class formulations of linear Support Vector Machines is very useful in applications such as text classification with a huge number of examples as well as features. This paper presents a fast dual method for this training. The main idea is to sequentially traverse through the training set ...
Keywords: multi-class, support vector machines, text classification

17 published by ACM
July 2008 ICML '08: Proceedings of the 25th international conference on Machine learning
Publisher: ACM
Bibliometrics:
Citation Count: 209
Downloads (6 Weeks): 16,   Downloads (12 Months): 158,   Downloads (Overall): 1,739

Full text available: PDF
In many applications, data appear with a huge number of instances as well as features. Linear Support Vector Machines (SVM) is one of the most popular tools to deal with such large-scale sparse data. This paper presents a novel dual coordinate descent method for linear SVM with L1- and L2-loss functions. ...
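The dual coordinate descent idea summarized in this abstract — update one dual variable at a time in closed form while keeping the primal weight vector consistent — can be sketched in a few lines of NumPy. This is a simplified illustration for the L1 (hinge) loss only, without the shrinking heuristics a production solver uses; the function name and synthetic data are invented for the example:

```python
import numpy as np

def dcd_linear_svm(X, y, C=1.0, epochs=20):
    """Dual coordinate descent for an L1-loss (hinge) linear SVM.

    Solves min_w 0.5*||w||^2 + C * sum_i max(0, 1 - y_i * w.x_i)
    through its dual: each step updates one dual variable alpha_i
    in closed form and keeps w = sum_i alpha_i * y_i * x_i in sync,
    so no kernel matrix is ever formed."""
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    Qii = np.einsum("ij,ij->i", X, X)  # diagonal of the dual Hessian
    for _ in range(epochs):
        for i in np.random.permutation(n):
            if Qii[i] == 0:
                continue
            G = y[i] * (w @ X[i]) - 1.0                      # gradient wrt alpha_i
            new_a = min(max(alpha[i] - G / Qii[i], 0.0), C)  # clip to [0, C]
            w += (new_a - alpha[i]) * y[i] * X[i]            # incremental primal update
            alpha[i] = new_a
    return w

# Demo on synthetic linearly separable data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
w = dcd_linear_svm(X, y)
accuracy = (np.sign(X @ w) == y).mean()
```

Each update touches only one example, so a full pass over the data costs about as much as one gradient evaluation, which is why this family of methods scales well to large sparse problems.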

18
June 2008 The Journal of Machine Learning Research: Volume 9, 6/1/2008
Publisher: JMLR.org
Bibliometrics:
Citation Count: 1,118
Downloads (6 Weeks): 25,   Downloads (12 Months): 190,   Downloads (Overall): 3,005

Full text available: PDF
LIBLINEAR is an open source library for large-scale linear classification. It supports logistic regression and linear support vector machines. We provide easy-to-use command-line tools and library calls for users and developers. Comprehensive documents are available for both beginners and advanced users. Experiments demonstrate that LIBLINEAR is very efficient on large ...

19
June 2008 The Journal of Machine Learning Research: Volume 9, 6/1/2008
Publisher: JMLR.org
Bibliometrics:
Citation Count: 53
Downloads (6 Weeks): 3,   Downloads (12 Months): 24,   Downloads (Overall): 342

Full text available: PDF
Linear support vector machines (SVM) are useful for classifying large-scale sparse data. Problems with sparse features are common in applications such as document classification and natural language processing. In this paper, we propose a novel coordinate descent algorithm for training linear SVM with the L2-loss function. At each step, the ...



The ACM Digital Library is published by the Association for Computing Machinery. Copyright © 2018 ACM, Inc.