ABSTRACT
The idea of "familial relationships" among languages is well-established and accepted, although some controversies persist in a few specific instances. By painstakingly recording and identifying regularities and similarities and comparing these to the historical record, linguists have been able to produce a general "family tree" incorporating most natural languages.
We suggest here that much of this tree can be determined automatically by a complementary technique of distributional analysis. Recent work by Farach et al. (1995) and Juola (1997) suggests that Kullback-Leibler divergence (or cross-entropy) can be meaningfully measured from small samples, in some cases as small as 20 or so words. Using these techniques, we define and measure a distance function between translations of a small corpus (c. 70 words/sample) covering much of the accepted Indo-European family, and reconstruct a relationship tree by hierarchical cluster analysis. The resulting tree shows remarkable similarity to the accepted Indo-European family tree; we read this as evidence both for the power of this measurement technique and for the validity of this kind of mechanical similarity judgement in the identification of typological relationships. Furthermore, this technique is in theory sensitive to different sorts of relationships than the more common word-list-based methods, and may thus illuminate such relationships from a different direction.
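The kind of distance the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the authors' exact procedure: the character-bigram model, add-one smoothing, and symmetrization chosen here are assumptions made for the sake of a runnable example.

```python
from collections import Counter
from math import log

def bigram_dist(text, alphabet):
    """Smoothed character-bigram distribution (add-one smoothing,
    so every bigram over the shared alphabet has nonzero mass)."""
    counts = Counter(zip(text, text[1:]))
    total = sum(counts.values()) + len(alphabet) ** 2
    return {(a, b): (counts.get((a, b), 0) + 1) / total
            for a in alphabet for b in alphabet}

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) in bits."""
    return sum(p[x] * log(p[x] / q[x], 2) for x in p)

def distance(s, t):
    """Symmetrized KL divergence between two small text samples,
    computed over the union of their character inventories."""
    alphabet = sorted(set(s) | set(t))
    p, q = bigram_dist(s, alphabet), bigram_dist(t, alphabet)
    return (kl(p, q) + kl(q, p)) / 2

# Toy samples: parallel translations of a short sentence.
en = "all human beings are born free and equal in dignity"
de = "alle menschen sind frei und gleich an wuerde geboren"
fr = "tous les etres humains naissent libres et egaux en dignite"
print(distance(en, de), distance(en, fr))
```

A full pairwise matrix of such distances over all language samples can then be fed to any standard hierarchical (agglomerative) clustering routine to recover a tree.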
References
- Ronald Eaton Asher and J. M. Y. Simpson, editors. 1994. The Encyclopedia of Language and Linguistics. Pergamon, Oxford.
- Christopher M. Bishop. 1995. Neural Networks for Pattern Recognition. Clarendon Press, Oxford.
- William Bright, editor. 1992. International Encyclopedia of Linguistics. Oxford University Press, Oxford.
- Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, Jennifer C. Lai, and Robert L. Mercer. 1992. An estimate of an upper bound for the entropy of English. Computational Linguistics, 18(1).
- David Crystal. 1987. The Cambridge Encyclopedia of Language. Cambridge University Press, Cambridge, UK.
- Martin Farach, Michiel Noordewier, Serap Savari, Larry Shepp, Abraham Wyner, and Jacob Ziv. 1995. On the entropy of DNA: Algorithms and measurements based on memory and rapid convergence. In Proceedings of the 6th Annual Symposium on Discrete Algorithms (SODA95). ACM Press.
- Edward Finegan and Niko Besnier. 1987. Language, Its Structure and Use. Harcourt Brace Jovanovich, San Diego.
- Peter Forster, Alfred Toth, and Hans-Juergen Bandelt. In press. Phylogenetic network analysis of word lists. Journal of Quantitative Linguistics.
- H. A. Gleason. 1955. Introduction to Descriptive Linguistics. Holt, Rinehart and Winston, New York.
- Patrick Juola. 1997. What can we do with small corpora? Document categorization via cross-entropy. In Proceedings of an Interdisciplinary Workshop on Similarity and Categorization, Edinburgh, UK. Department of Artificial Intelligence, University of Edinburgh.
- Donald A. Ringe. 1992. On Calculating the Factor of Chance in Language Comparison, volume 82 of Transactions of the American Philosophical Society. American Philosophical Society.
- Claude Elwood Shannon. 1948. A mathematical theory of communication. Bell System Technical Journal, 27:379--423.
- Claude Elwood Shannon. 1951. Prediction and entropy of printed English. Bell System Technical Journal, 30:50--64.
- Morris Swadesh. 1955. Towards greater accuracy in lexicostatistic dating. International Journal of American Linguistics, 21:121--37.
- Tandy Warnow. 1997. Mathematical approaches to comparative linguistics. Proceedings of the National Academy of Sciences of the USA, 94:6585--90.
- Abraham J. Wyner. In press. Entropy estimation and patterns.