 Frederick Jelinek

Bibliometrics: publication history
Average citations per article: 32.24
Citation count: 1,322
Publication count: 41
Publication years: 1977-2010
Available for download: 22
Average downloads per article: 515.14
Downloads (cumulative): 11,333
Downloads (12 months): 1,259
Downloads (6 weeks): 168
41 results found; results 1–20 of 41 are listed below.

1
June 2010 HLT '10: Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics
Publisher: Association for Computational Linguistics
Bibliometrics:
Citation Count: 1
Downloads (6 Weeks): 1,   Downloads (12 Months): 7,   Downloads (Overall): 67

Full text available: PDF
In this paper we propose a novel general framework for unsupervised model adaptation. Our method is based on entropy, which has been used previously as a regularizer in semi-supervised learning. This technique includes another term, which measures the stability of posteriors w.r.t. model parameters, in addition to conditional entropy. The ...
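
As an illustration of the entropy term the abstract mentions, the sketch below computes the average conditional entropy of a model's posteriors over unlabeled data with numpy; the posterior values are placeholders, and the additional posterior-stability term described above is not reproduced here.

```python
# Minimal sketch of conditional entropy as an unsupervised regularizer:
# confident posteriors on unlabeled data give a low value, uncertain ones a high value.
import numpy as np

def conditional_entropy(posteriors, eps=1e-12):
    """Average entropy of p(y | x) over a batch of unlabeled examples."""
    p = np.clip(posteriors, eps, 1.0)
    return float(-(p * np.log(p)).sum(axis=1).mean())

# Each row is the model's posterior distribution over labels for one unlabeled example.
posteriors = np.array([[0.9, 0.1],    # confident -> low entropy
                       [0.5, 0.5]])   # uncertain -> high entropy
print(conditional_entropy(posteriors))
```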

2
June 2010 HLT '10: Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics
Publisher: Association for Computational Linguistics
Bibliometrics:
Citation Count: 8
Downloads (6 Weeks): 2,   Downloads (12 Months): 20,   Downloads (Overall): 209

Full text available: PDF
Out-of-vocabulary (OOV) words represent an important source of error in large vocabulary continuous speech recognition (LVCSR) systems. These words cause recognition failures, which propagate through pipeline systems, impacting the performance of downstream applications. The detection of OOV regions in the output of an LVCSR system is typically addressed as a ...

3
December 2009 Computational Linguistics: Volume 35 Issue 4, December 2009
Publisher: MIT Press
Bibliometrics:
Citation Count: 4
Downloads (6 Weeks): 3,   Downloads (12 Months): 8,   Downloads (Overall): 165

Full text available: PDF

4
August 2009 TSD '09: Proceedings of the 12th International Conference on Text, Speech and Dialogue
Publisher: Springer-Verlag
Bibliometrics:
Citation Count: 0

Practical automatic speech recognition is of necessity a (near) real-time activity performed by a system whose structure is fixed and whose parameters, once trained, may be adapted on the basis of the speech the system observed during recognition. However, in especially important situations (e.g., recovery of out-of-vocabulary words) ...

5
August 2009 EMNLP '09: Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2 - Volume 2
Publisher: Association for Computational Linguistics
Bibliometrics:
Citation Count: 1
Downloads (6 Weeks): 2,   Downloads (12 Months): 12,   Downloads (Overall): 149

Full text available: PDF
While speaking spontaneously, speakers often make errors such as self-correction or false starts which interfere with the successful application of natural language processing techniques like summarization and machine translation to this data. There is active work on reconstructing this errorful data into a clean and fluent transcript by identifying and ...

6
August 2009 ACL '09: Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2 - Volume 2
Publisher: Association for Computational Linguistics
Bibliometrics:
Citation Count: 0
Downloads (6 Weeks): 1,   Downloads (12 Months): 9,   Downloads (Overall): 43

Full text available: PDF
Spontaneously produced speech text often includes disfluencies which make it difficult to analyze underlying structure. Successful reconstruction of this text would transform these errorful utterances into fluent strings and offer an alternate mechanism for analysis. Our investigation of naturally-occurring spontaneous speaker errors aligned to corrected text with manual semantico-syntactic analysis ...

7
March 2009 EACL '09: Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics
Publisher: Association for Computational Linguistics
Bibliometrics:
Citation Count: 2
Downloads (6 Weeks): 5,   Downloads (12 Months): 18,   Downloads (Overall): 138

Full text available: PDF
This paper presents a conditional random field-based approach for identifying speaker-produced disfluencies (i.e. if and where they occur) in spontaneous speech transcripts. We emphasize false start regions, which are often missed in current disfluency identification approaches as they lack lexical or structural similarity to the speech immediately following. We find ...
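
To make the sequence-labeling formulation concrete, here is a minimal sketch of CRF-based disfluency tagging; it assumes the third-party sklearn-crfsuite package, and the toy transcript, labels, and features are invented for illustration rather than taken from the paper.

```python
# Hypothetical illustration: tag each token as edited/disfluent (E) or fluent (O)
# with a linear-chain CRF over simple lexical features.
import sklearn_crfsuite

def token_features(tokens, i):
    w = tokens[i].lower()
    feats = {"word": w, "is_filler": w in {"uh", "um"}}
    if i > 0:
        feats["prev_word"] = tokens[i - 1].lower()
        feats["repeats_prev"] = w == tokens[i - 1].lower()
    return feats

sentences = [["I", "want", "I", "want", "a", "flight"],
             ["uh", "show", "me", "the", "fares"]]
labels = [["E", "E", "O", "O", "O", "O"],
          ["E", "O", "O", "O", "O"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sentences]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, labels)
print(crf.predict(X))
```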

8
September 2008 TSD '08: Proceedings of the 11th international conference on Text, Speech and Dialogue
Publisher: Springer-Verlag
Bibliometrics:
Citation Count: 0

The n-gram model is standard for large vocabulary speech recognizers. Many attempts were made to improve on it. Language models were proposed based on grammatical analysis, artificial neural networks, random forests, etc. While the latter give somewhat better recognition results than the n-gram model, they are not practical, particularly when ...
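
For reference, a minimal sketch of the kind of baseline the abstract refers to: an interpolated bigram/unigram model estimated from counts. The toy corpus and the interpolation weight are arbitrary, not values from the paper.

```python
# Interpolated bigram/unigram language model from maximum-likelihood counts.
from collections import Counter

corpus = "the dog chased the cat and the cat ran".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
total, vocab = sum(unigrams.values()), len(unigrams)

def p_interp(prev, word, lam=0.7):
    """P(word | prev) = lam * MLE bigram + (1 - lam) * add-one unigram."""
    p_uni = (unigrams[word] + 1) / (total + vocab)
    p_bi = bigrams[(prev, word)] / unigrams[prev] if unigrams[prev] else 0.0
    return lam * p_bi + (1 - lam) * p_uni

print(p_interp("the", "cat"))  # seen bigram: relatively high probability
print(p_interp("the", "ran"))  # unseen bigram: falls back on the unigram term
```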

9
September 2007 TSD'07: Proceedings of the 10th international conference on Text, speech and dialogue
Publisher: Springer-Verlag
Bibliometrics:
Citation Count: 0

In the past, Maximum Entropy based language models were constrained by training data n-gram counts, topic estimates, and triggers. We will investigate the obtainable gains from imposing additional constraints related to linguistic clusters, such as parts of speech, semantic/syntactic word clusters, and semantic labels. It will be shown that there ...
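
As a rough illustration of a maximum-entropy next-word model with cluster features, the sketch below uses scikit-learn's multinomial logistic regression (equivalent to a conditional maxent model) over previous-word and previous-cluster indicator features; the toy corpus, cluster assignments, and feature set are invented and much simpler than the constraints discussed in the paper.

```python
# Hypothetical maxent-style next-word model with word and cluster features.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

corpus = "the dog chased the cat the cat chased the mouse".split()
clusters = {"the": "DET", "dog": "N", "cat": "N", "mouse": "N", "chased": "V"}

X_dicts, y = [], []
for prev, nxt in zip(corpus, corpus[1:]):
    X_dicts.append({"prev=" + prev: 1, "prev_cluster=" + clusters[prev]: 1})
    y.append(nxt)

vec = DictVectorizer()
X = vec.fit_transform(X_dicts)
model = LogisticRegression(max_iter=1000).fit(X, y)  # multinomial logistic = maxent

test = vec.transform([{"prev=the": 1, "prev_cluster=DET": 1}])
for word, p in zip(model.classes_, model.predict_proba(test)[0]):
    print(word, round(p, 3))
```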

10
September 2005 TSD'05: Proceedings of the 8th international conference on Text, Speech and Dialogue
Publisher: Springer-Verlag
Bibliometrics:
Citation Count: 0

L. Breiman recently introduced the concept of random forests (a randomly constructed collection of decision trees) for classification. We have modified the method for regression and applied it to language modeling for speech recognition. Random forests achieve excellent results in both perplexity and error rate. They can be regarded as a ...
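
A much-simplified sketch of the ensemble idea (not the paper's algorithm): each randomized "tree" below conditions on a random subset of the history positions, and the forest averages their smoothed probability estimates. The corpus, smoothing, and randomization scheme are toy choices for illustration.

```python
# Hypothetical randomized-ensemble language model: average probabilities
# from component models that each see a random subset of the history.
import random
from collections import Counter, defaultdict

corpus = "the dog chased the cat and the dog saw the cat".split()
ORDER = 3                      # condition on at most the two previous words
vocab = sorted(set(corpus))

def train_tree(seed):
    """One randomized component: pick a random subset of history positions."""
    rng = random.Random(seed)
    positions = tuple(sorted(rng.sample([1, 2], rng.randint(1, 2))))
    counts, totals = defaultdict(Counter), Counter()
    for i in range(ORDER - 1, len(corpus)):
        context = tuple(corpus[i - p] for p in positions)
        counts[context][corpus[i]] += 1
        totals[context] += 1
    def prob(history, word):
        context = tuple(history[-p] for p in positions)
        return (counts[context][word] + 1) / (totals[context] + len(vocab))
    return prob

forest = [train_tree(seed) for seed in range(10)]
p = sum(tree(["the", "dog"], "chased") for tree in forest) / len(forest)
print(p)  # averaged estimate of P(chased | the dog)
```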

11
September 2005 Machine Learning: Volume 60 Issue 1-3, September 2005
Publisher: Kluwer Academic Publishers
Bibliometrics:
Citation Count: 7

This paper presents a study of using neural probabilistic models in a syntactic based language model. The neural probabilistic model makes use of a distributed representation of the items in the conditioning history, and is powerful in capturing long dependencies. Employing neural network based models in the syntactic based language ...
Keywords: parsing, neural networks, speech recognition, statistical language models
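
A minimal sketch of a feed-forward neural probabilistic language model in the spirit described above (distributed word representations feeding a next-word distribution), assuming PyTorch; the toy corpus, dimensions, and training settings are arbitrary and not taken from the paper.

```python
# Hypothetical feed-forward neural LM: embed a two-word history, pass it
# through a tanh hidden layer, and predict the next word with a softmax.
import torch
import torch.nn as nn

corpus = "the dog chased the cat the cat chased the mouse".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
CONTEXT, EMB, HIDDEN = 2, 16, 32

class NeuralLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(len(vocab), EMB)      # distributed representation
        self.hidden = nn.Linear(CONTEXT * EMB, HIDDEN)
        self.out = nn.Linear(HIDDEN, len(vocab))
    def forward(self, context_ids):
        e = self.emb(context_ids).view(context_ids.size(0), -1)
        return self.out(torch.tanh(self.hidden(e)))   # logits over the vocabulary

X = torch.tensor([[idx[corpus[i]], idx[corpus[i + 1]]] for i in range(len(corpus) - 2)])
y = torch.tensor([idx[corpus[i + 2]] for i in range(len(corpus) - 2)])

model, loss_fn = NeuralLM(), nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=0.05)
for _ in range(200):                                  # tiny training loop on toy data
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print("training cross-entropy:", float(loss))
```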

12
December 2004 NIPS'04: Proceedings of the 17th International Conference on Neural Information Processing Systems
Publisher: MIT Press
Bibliometrics:
Citation Count: 0

In this paper, we explore the use of Random Forests (RFs) in the structured language model (SLM), which uses rich syntactic information in predicting the next word based on words already seen. The goal in this work is to construct RFs by randomly growing Decision Trees (DTs) using syntactic information ...

13
July 2003 EMNLP '03: Proceedings of the 2003 conference on Empirical methods in natural language processing
Publisher: Association for Computational Linguistics
Bibliometrics:
Citation Count: 6
Downloads (6 Weeks): 0,   Downloads (12 Months): 10,   Downloads (Overall): 163

Full text available: PDF
We investigate the performance of the Structured Language Model (SLM) in terms of perplexity (PPL) when its components are modeled by connectionist models. The connectionist models use a distributed representation of the items in the history and make much better use of contexts than currently used interpolated or back-off models, ...

14
July 2002 ACL '02: Proceedings of the 40th Annual Meeting on Association for Computational Linguistics
Publisher: Association for Computational Linguistics
Bibliometrics:
Citation Count: 15
Downloads (6 Weeks): 0,   Downloads (12 Months): 10,   Downloads (Overall): 157

Full text available: PDF
We study the impact of richer syntactic dependencies on the performance of the structured language model (SLM) along three dimensions: parsing accuracy (LP/LR), perplexity (PPL) and word-error-rate (WER, N-best re-scoring). We show that our models achieve an improvement in LP/LR, PPL and/or WER over the reported baseline results using the ...
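
To illustrate the N-best re-scoring step mentioned above: each recognizer hypothesis keeps its acoustic score, the new language model supplies a (log-domain) score, and hypotheses are re-ranked by a weighted combination. The scores, weight, and word penalty below are invented toy numbers, not values from the paper.

```python
# Hypothetical N-best re-scoring: rank hypotheses by acoustic + weighted LM score.
def rescore_nbest(nbest, lm_scores, lm_weight=10.0, word_penalty=-0.5):
    rescored = []
    for (words, acoustic_logp), lm_logp in zip(nbest, lm_scores):
        total = acoustic_logp + lm_weight * lm_logp + word_penalty * len(words)
        rescored.append((total, words))
    return [words for total, words in sorted(rescored, reverse=True)]

nbest = [(["the", "cat", "sat"], -120.0),   # (hypothesis, acoustic log-score)
         (["the", "cat", "sack"], -118.5)]
lm_scores = [-8.2, -14.7]                   # log-probabilities from the re-scoring LM
print(rescore_nbest(nbest, lm_scores)[0])   # best hypothesis after re-scoring
```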

15
March 2001 HLT '01: Proceedings of the first international conference on Human language technology research
Publisher: Association for Computational Linguistics
Bibliometrics:
Citation Count: 0
Downloads (6 Weeks): 0,   Downloads (12 Months): 3,   Downloads (Overall): 66

Full text available: PDF
As a by-product of the recent information explosion, the same basic facts are often available from multiple sources such as the Internet, television, radio and newspapers. We present here a project currently in its early stages that aims to take advantage of the redundancies in parallel sources to achieve robustness ...

16
October 2000 Computer Speech and Language: Volume 14 Issue 4, October 2000
Publisher: Academic Press Ltd.
Bibliometrics:
Citation Count: 7

This paper presents an attempt at using the syntactic structure in natural language for improved language models for speech recognition. The structured language model merges techniques in automatic parsing and language modeling using an original probabilistic parameterization of a shift-reduce parser. A maximum likelihood re-estimation procedure belonging to the class ...

17
September 1999 TSD '99: Proceedings of the Second International Workshop on Text, Speech and Dialogue
Publisher: Springer-Verlag
Bibliometrics:
Citation Count: 4

We describe read speech and broadcast news corpora collected as part of a multi-year international collaboration for the development of large vocabulary speech recognition systems in the Czech language. Initial investigations into language modeling for Czech automatic speech recognition are described and preliminary recognition results on the read speech corpus ...

18
March 1999 ICASSP '99: Proceedings of the 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing - Volume 01
Publisher: IEEE Computer Society
Bibliometrics:
Citation Count: 5

In state-of-the-art large vocabulary continuous speech recognition (LVCSR) systems, HMM state-tying is often used to achieve a good balance between model resolution and robustness. In this paradigm, tied HMM states share a single set of parameters and are indistinguishable. To capture the fine differences among tied HMM states, a probabilistic ...

19
August 1998 ACL '98/COLING '98: Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics - Volume 1
Publisher: Association for Computational Linguistics
Bibliometrics:
Citation Count: 53
Downloads (6 Weeks): 2,   Downloads (12 Months): 20,   Downloads (Overall): 532

Full text available: PDF
The paper presents a language model that develops syntactic structure and uses it to extract meaningful information from the word history, thus enabling the use of long distance dependencies. The model assigns probability to every joint sequence of words-binary-parse-structure with headword annotation and operates in a left-to-right manner --- therefore ...
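
As a rough sketch of the factorization described above (details may differ from the paper), the joint probability of a word sequence W and binary parse structure T is built up left to right by a word predictor and a parser component, with the word predictor conditioned on the two most recent exposed headwords h_0 and h_{-1}:

    P(W, T) = prod_k  P(w_k | W_{k-1}, T_{k-1}) * P(T_k | W_{k-1} w_k, T_{k-1}),
    with  P(w_k | W_{k-1}, T_{k-1}) ≈ P(w_k | h_0, h_{-1})

where W_{k-1}, T_{k-1} denote the word-and-parse prefix available before word k.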

20
February 1998
Bibliometrics:
Citation Count: 404



