Sinno Jialin Pan

Bibliometrics (publication history):
  Publication years: 2007–2017
  Publication count: 42
  Citation count: 1,267
  Average citations per article: 30.17
  Available for download: 13
  Downloads (cumulative): 6,603
  Downloads (12 months): 1,022
  Downloads (6 weeks): 124
  Average downloads per article: 507.92


42 results found

Results 1–20 of 42

1 published by ACM
October 2017 ACM Transactions on Intelligent Systems and Technology (TIST) - Regular Papers: Volume 9 Issue 2, January 2018
Publisher: ACM
Bibliometrics:
Citation Count: 0
Downloads (6 Weeks): 18,   Downloads (12 Months): 59,   Downloads (Overall): 59

Full text available: PDF
Transfer learning has gained a lot of attention and interest in the past decade. One crucial research issue in transfer learning is how to find a good representation for instances of different domains such that the divergence between domains can be reduced with the new representation. Recently, deep learning has ...
Keywords: Double encoding-layer autoencoder, distribution difference measure, representation learning
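The keywords above point to a distribution-difference measure between domains. As a generic illustration only (assuming the widely used Maximum Mean Discrepancy with an RBF kernel, not the paper's double encoding-layer architecture or its specific measure), the sketch below computes how far apart source- and target-domain feature representations are; a smaller value indicates better-aligned domains:

    # Illustrative sketch only: empirical squared MMD between two feature sets.
    import numpy as np

    def rbf_kernel(A, B, gamma=1.0):
        # Pairwise squared Euclidean distances, then the Gaussian (RBF) kernel.
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
        return np.exp(-gamma * sq)

    def mmd2(Xs, Xt, gamma=1.0):
        # Squared Maximum Mean Discrepancy between source and target features.
        return (rbf_kernel(Xs, Xs, gamma).mean()
                + rbf_kernel(Xt, Xt, gamma).mean()
                - 2.0 * rbf_kernel(Xs, Xt, gamma).mean())

    # Toy usage: in a representation-learning setting, Xs and Xt would be the
    # encoded source- and target-domain instances (hypothetical data here).
    Xs = np.random.randn(100, 16)        # source-domain features
    Xt = np.random.randn(120, 16) + 0.5  # shifted target-domain features
    print(mmd2(Xs, Xt, gamma=0.1))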

2
August 2017 IJCAI'17: Proceedings of the 26th International Joint Conference on Artificial Intelligence
Publisher: AAAI Press
Bibliometrics:
Citation Count: 0

In multi-task learning (MTL), tasks are learned jointly so that information among related tasks is shared and utilized to help improve generalization for each individual task. A major challenge in MTL is how to selectively choose what to share among tasks. Ideally, only related tasks should share information with each ...

3 published by ACM
August 2017 KDD '17: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Publisher: ACM
Bibliometrics:
Citation Count: 0
Downloads (6 Weeks): 44,   Downloads (12 Months): 251,   Downloads (Overall): 251

Full text available: PDF
Multi-task learning aims to learn multiple tasks jointly by exploiting their relatedness to improve the generalization performance for each task. Traditionally, to perform multi-task learning, one needs to centralize data from all the tasks to a single machine. However, in many real-world applications, data of different tasks may be geo-distributed ...
Keywords: distributed multi-task learning, transfer learning

4 published by ACM
October 2016 CIKM '16: Proceedings of the 25th ACM International Conference on Information and Knowledge Management
Publisher: ACM
Bibliometrics:
Citation Count: 0
Downloads (6 Weeks): 4,   Downloads (12 Months): 79,   Downloads (Overall): 114

Full text available: PDF
In the past decade, a large number of transfer learning algorithms have been proposed for various real-world applications. However, most of them are vulnerable to negative transfer, in which case their performance is even worse than that of traditional supervised models. Aiming at more robust transfer learning models, we propose an ENsemble framework ...
Keywords: classification, transfer learning

5
July 2016 IJCAI'16: Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence
Publisher: AAAI Press
Bibliometrics:
Citation Count: 0

Learning to hash has become a crucial technique for big data analytics. Among existing methods, supervised learning approaches play an important role as they can produce compact codes and enable semantic search. However, the size of an instance-pairwise similarity matrix used in most supervised hashing methods is quadratic to the ...
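As a side note to the abstract's point that the instance-pairwise similarity matrix grows quadratically: a minimal sketch (assuming the common label-based construction with +1 for same-label pairs and -1 otherwise, not necessarily this paper's construction) makes the memory cost concrete:

    # Illustrative sketch only: label-based pairwise similarity matrix.
    import numpy as np

    def pairwise_similarity(labels):
        # S[i, j] = +1 if instances i and j share a label, else -1.
        # The matrix has n * n entries, so memory grows quadratically with n.
        labels = np.asarray(labels)
        return np.where(labels[:, None] == labels[None, :], 1.0, -1.0)

    labels = np.random.randint(0, 10, size=5000)  # 5,000 toy training instances
    S = pairwise_similarity(labels)
    print(S.shape, S.nbytes / 1e6, "MB")          # 5000 x 5000 float64 ~ 200 MB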

6
July 2016 IJCAI'16: Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence
Publisher: AAAI Press
Bibliometrics:
Citation Count: 0

Accurate user profiling is important for an online recommender system to provide proper personalized recommendations to its users. In many real-world scenarios, the user's interests towards the items may change over time. Therefore, a dynamic and evolutionary user profile is needed. In this work, we come up with a novel ...

7
July 2016 IJCAI'16: Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence
Publisher: AAAI Press
Bibliometrics:
Citation Count: 0

Most existing learning to hash methods assume that there are sufficient data, either labeled or unlabeled, on the domain of interest (i.e., the target domain) for training. However, this assumption cannot be satisfied in some real-world applications. To address this data sparsity issue in hashing, inspired by transfer learning, we ...

8
February 2016 AAAI'16: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence
Publisher: AAAI Press
Bibliometrics:
Citation Count: 0

Most existing heterogeneous transfer learning (HTL) methods for cross-language text classification rely on sufficient cross-domain instance correspondences to learn a mapping across heterogeneous feature spaces, and assume that such correspondences are given in advance. However, in practice, correspondences between domains are usually unknown. In this case, extensive manual efforts are ...

9
July 2015 IJCAI'15: Proceedings of the 24th International Conference on Artificial Intelligence
Publisher: AAAI Press
Bibliometrics:
Citation Count: 4

Transfer learning has attracted a lot of attention in the past decade. One crucial research issue in transfer learning is how to find a good representation for instances of different domains such that the divergence between domains can be reduced with the new representation. Recently, deep learning has been proposed ...

10
September 2014 ECMLPKDD'14: Proceedings of the 2014 European Conference on Machine Learning and Knowledge Discovery in Databases - Part III
Publisher: Springer-Verlag
Bibliometrics:
Citation Count: 1

Knowledge transfer from multiple source domains to a target domain is crucial in transfer learning. Most existing methods are focused on learning weights for different domains based on the similarities between each source domain and the target domain or learning more precise classifiers from the source domain data jointly by ...
Keywords: consensus regularization, feature representation, transfer learning, multiple sources

11
July 2014 AAAI'14: Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence
Publisher: AAAI Press
Bibliometrics:
Citation Count: 4

Transfer learning uses relevant auxiliary data to help the learning task in a target domain where labeled data is usually insufficient to train an accurate model. Given appropriate auxiliary data, researchers have proposed many transfer learning models. How to find such auxiliary data, however, has received little research attention so far. ...

12
July 2014 AAAI'14: Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence
Publisher: AAAI Press
Bibliometrics:
Citation Count: 7

Most previous heterogeneous transfer learning methods learn a cross-domain feature mapping between heterogeneous feature spaces based on a few cross-domain instance-correspondences, and these corresponding instances are assumed to be representative in the source and target domains respectively. However, in many real-world scenarios, this assumption may not hold. As a result, ...

13
June 2014 ICML'14: Proceedings of the 31st International Conference on International Conference on Machine Learning - Volume 32
Publisher: JMLR.org
Bibliometrics:
Citation Count: 0

Low rank matrix recovery is a fundamental task in many real-world applications. The performance of existing methods, however, deteriorates significantly when applied to ill-conditioned or large-scale matrices. In this paper, we therefore propose an efficient method, called Riemannian Pursuit (RP), that aims to address these two problems simultaneously. Our method ...
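For context only, here is a minimal sketch of generic low-rank matrix recovery from partially observed entries via singular value projection (a standard baseline, not the Riemannian Pursuit method proposed in the paper):

    # Illustrative sketch only: SVP-style baseline for recovering a low-rank
    # matrix from partially observed entries.
    import numpy as np

    def svp_complete(M_obs, mask, rank, step=1.0, iters=200):
        X = np.zeros_like(M_obs)
        for _ in range(iters):
            # Gradient step on the squared error over the observed entries.
            X = X + step * mask * (M_obs - X)
            # Project back onto the set of matrices with the target rank.
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            X = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        return X

    # Toy usage: a rank-3 matrix with roughly half of its entries observed.
    rng = np.random.default_rng(0)
    M = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 40))
    mask = (rng.random(M.shape) < 0.5).astype(float)
    X = svp_complete(M * mask, mask, rank=3)
    print(np.linalg.norm(X - M) / np.linalg.norm(M))  # relative recovery error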

14
May 2014 IEEE Transactions on Knowledge and Data Engineering: Volume 26 Issue 5, May 2014
Publisher: IEEE Educational Activities Department
Bibliometrics:
Citation Count: 14

Domain transfer learning, which learns a target classifier using labeled data from a different distribution, has shown promising value in knowledge discovery yet remains a challenging problem. Most previous works designed adaptive classifiers by exploring two learning strategies independently: distribution adaptation and label propagation. In this paper, we propose ...

15 published by ACM
October 2013 CIKM '13: Proceedings of the 22nd ACM International Conference on Information & Knowledge Management
Publisher: ACM
Bibliometrics:
Citation Count: 11
Downloads (6 Weeks): 5,   Downloads (12 Months): 90,   Downloads (Overall): 847

Full text available: PDF
The study of users' social behaviors has gained much research attention since the advent of various social media such as Facebook, Renren and Twitter. A major kind of applications is to predict a user's future activities based on his/her historical social behaviors. In this paper, we focus on a fundamental ...
Keywords: social network analysis, prediction, user activity

16 published by ACM
August 2013 ASONAM '13: Proceedings of the 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining
Publisher: ACM
Bibliometrics:
Citation Count: 3
Downloads (6 Weeks): 2,   Downloads (12 Months): 15,   Downloads (Overall): 122

Full text available: PDF
In this paper, we propose a new framework for opinion summarization based on sentence selection. Our goal is to assist users to get helpful opinion suggestions from reviews by only reading a short summary with few informative sentences, where the quality of summary is evaluated in terms of both aspect ...

17 published by ACM
August 2013 MLIS '13: Proceedings of the 2nd Workshop on Machine Learning for Interactive Systems: Bridging the Gap Between Perception, Action and Communication
Publisher: ACM
Bibliometrics:
Citation Count: 1

Transfer learning has attracted increasing attention in machine learning, data mining, and many application areas. It is well-known that features sampled from different domains may differ tremendously in their distributions, or that labels across different tasks may be different. Consequently, a model trained on one domain or task cannot be ...
Keywords: transfer learning

18
July 2013 AAAI'13: Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence
Publisher: AAAI Press
Bibliometrics:
Citation Count: 7

Recommender systems, especially the newly launched ones, have to deal with the data-sparsity issue, where little existing rating information is available. Recently, transfer learning has been proposed to address this problem by leveraging the knowledge from related recommender systems where rich collaborative data are available. However, most previous transfer learning ...

19
May 2013 ICSE '13: Proceedings of the 2013 International Conference on Software Engineering
Publisher: IEEE Press
Bibliometrics:
Citation Count: 41
Downloads (6 Weeks): 15,   Downloads (12 Months): 168,   Downloads (Overall): 738

Full text available: PDF
Many software defect prediction approaches have been proposed and most are effective in within-project prediction settings. However, for new projects or projects with limited training data, it is desirable to learn a prediction model by using sufficient training data from existing source projects and then apply the model to some ...

20 published by ACM
May 2013 ACM Transactions on Information Systems (TOIS): Volume 31 Issue 2, May 2013
Publisher: ACM
Bibliometrics:
Citation Count: 0
Downloads (6 Weeks): 3,   Downloads (12 Months): 36,   Downloads (Overall): 410

Full text available: PDF
Named Entity Recognition (NER) is a fundamental task in information extraction from unstructured text. Most previous machine-learning-based NER systems are domain-specific, which implies that they may only perform well on some specific domains (e.g., Newswire) but tend to adapt poorly to other related but different domains (e.g., Weblog). ...
Keywords: multiclass classification, transfer learning, Named entity recognition


