Shieber, Stuart; Lappin, Shalom (2007)
Publisher: Cambridge University Press
Types: Article
Subjects: PHI
In this paper, we explore the possibility that machine learning approaches to natural-language processing being developed in engineering-oriented computational linguistics may be able to provide specific scientific insights into the nature of human language. We argue that, in principle, machine learning results could inform basic debates about language, in one area at least, and that in practice, existing results may offer initial tentative support for this prospect. Further, results from computational learning theory can inform arguments carried on within linguistic theory as well.
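One family of machine learning results the abstract alludes to is unsupervised induction of linguistic categories from raw text (see Schütze 1995 and Clark 2003 in the reference list). A minimal illustrative sketch, not taken from the article: words that occur in similar contexts receive similar context-count vectors, so distributional similarity groups them into candidate syntactic categories without labeled data. The toy corpus and all details here are assumptions for illustration only.

```python
# Toy distributional category induction: cluster words by the
# company they keep. Corpus and thresholds are illustrative.
from collections import Counter, defaultdict
from math import sqrt

corpus = ("the dog sees the cat . a cat sees a dog . "
          "the dog runs . a cat runs .").split()

# Context vector for each word: counts of (side, neighbor) pairs.
contexts = defaultdict(Counter)
for i, w in enumerate(corpus):
    if i > 0:
        contexts[w][("L", corpus[i - 1])] += 1
    if i < len(corpus) - 1:
        contexts[w][("R", corpus[i + 1])] += 1

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Words of the same category pattern together distributionally.
print(cosine(contexts["dog"], contexts["cat"]))   # high: both nouns
print(cosine(contexts["dog"], contexts["sees"]))  # low: noun vs. verb
```

On realistic corpora the same idea, with larger context windows and a clustering step, recovers part-of-speech-like classes; that empirical success is the kind of result the paper argues can bear on debates about what must be innate.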

    • Baker, Mark. 2001. The atoms of language: The mind's hidden rules of grammar. New York: Basic Books.
    • Bartlett, Peter L. 1998. The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network. IEEE Transactions on Information Theory 44. 525-536.
    • Berger, James O. 1985. Statistical decision theory and Bayesian analysis, 2nd edn. New York: Springer.
    • Blum, Avrim. 1996. On-line algorithms in machine learning. The Dagstuhl Workshop on Online Algorithms, 306-325. Dagstuhl.
    • Bouchard, Denis. 2005. Exaption and linguistic explanation. Lingua 115. 1685-1696.
    • Carroll, Glenn & Eugene Charniak. 1992. Two experiments on learning probabilistic dependency grammars from corpora. In Carl Weir, Stephen Abney, Ralph Grishman & Ralph Weischedel (eds.), Working notes of the Workshop on Statistically-based NLP Techniques, 1-13. Menlo Park, CA: AAAI Press.
    • Charniak, Eugene & Mark Johnson. 2005. Coarse-to-fine n-best parsing and maxent discriminative reranking. 43rd Annual Meeting of the Association for Computational Linguistics (ACL '05), 173-180. Ann Arbor, MI: Association for Computational Linguistics. http://www.aclweb.org/anthology/P/P05/P05-1022 (6 March 2007).
    • Chomsky, Noam. 1957. Syntactic structures. The Hague: Mouton.
    • Chomsky, Noam. 1965. Aspects of the theory of syntax. Cambridge, MA: MIT Press.
    • Chomsky, Noam. 1981. Lectures on government and binding. Dordrecht: Foris.
    • Chomsky, Noam. 1995. The Minimalist program. Cambridge, MA: MIT Press.
    • Chomsky, Noam. 2000. New horizons in the study of language and mind. Cambridge: Cambridge University Press.
    • Chouinard, Michelle M. & Eve V. Clark. 2003. Adult reformulations of child errors as negative evidence. Journal of Child Language 30. 637-669.
    • Clark, Alexander. 2003. Combining distributional and morphological information for part of speech induction. 10th Annual Meeting of the European Association for Computational Linguistics, 59-66. Budapest.
    • Clark, Alexander. 2004. Grammatical inference and the argument from the poverty of the stimulus. AAAI Spring Symposium on Interdisciplinary Approaches to Language Learning. Stanford, CA.
    • Clark, Alexander. 2006. PAC-learning unambiguous NTS languages. 8th International Colloquium on Grammatical Inference, 59-71. Berlin: Springer.
    • Clark, Alexander & Rémi Eyraud. 2006. Learning auxiliary fronting with grammatical inference. 10th Conference on Computational Natural Language Learning (CoNLL-X), 125-132. New York.
    • Clark, Alexander & Franck Thollard. 2004. Partially distribution-free learning of regular languages from positive samples. COLING 2004, 85-91. Geneva.
    • Collins, Michael. 1999. Head-driven statistical models for natural language parsing. Ph.D. dissertation, University of Pennsylvania.
    • Cortes, Corinna & Vladimir Vapnik. 1995. Support-vector networks. Machine Learning 20. 273-297.
    • Crain, Stephen. 1991. Language acquisition in the absence of experience. Behavioral and Brain Sciences 14. 597-650.
    • Fodor, Jerry & Zenon Pylyshyn. 1988. Connectionism and cognitive architecture : A critique. Cognition 28. 3-71.
    • Gold, E. Mark. 1967. Language identification in the limit. Information and Control 10. 447-474.
    • Goldsmith, John. 2001. Unsupervised learning of the morphology of a natural language. Computational Linguistics 27. 153-198.
    • Good, I. 1953. The population frequencies of species and the estimation of population parameters. Biometrika 40. 237-264.
    • Goodman, Joshua. 1996. Parsing algorithms and metrics. In Aravind Joshi & Martha Palmer (eds.), The Thirty-Fourth Annual Meeting of the Association for Computational Linguistics, 177-183. San Francisco: Morgan Kaufmann Publishers.
    • Johnson, David E. & Shalom Lappin. 1999. Local constraints vs. economy (Stanford Monographs in Linguistics). Stanford, CA: CSLI.
    • Johnson, Kent. 2004. Gold's theorem and cognitive science. Philosophy of Science 71. 571-592.
    • Jurafsky, Daniel & James Martin. 2000. Speech and language processing. Upper Saddle River, NJ: Prentice Hall.
    • Klein, Dan & Christopher Manning. 2002. A generative constituent-context model for improved grammar induction. 40th Annual Meeting of the Association for Computational Linguistics, 128-135. Philadelphia, PA.
    • Klein, Dan & Christopher Manning. 2004. Corpus-based induction of syntactic structure: Models of dependency and constituency. 42nd Annual Meeting of the Association for Computational Linguistics, 479-486. Barcelona.
    • Lappin, Shalom. 2005. Machine learning and the cognitive basis of natural language. Computational Linguistics in the Netherlands 2004, 1-11. Leiden.
    • Legate, Julie A. & Charles D. Yang. 2002. Empirical reassessment of stimulus poverty arguments. The Linguistic Review 19. 151-162.
    • Lewis, John & Jeffrey Elman. 2002. Learnability and the statistical structure of language: Poverty of stimulus arguments revisited. 26th Annual Boston University Conference on Language Development, 359-370. Somerville, MA.
    • MacWhinney, Brian. 1995. The CHILDES project: Tools for analyzing talk, 2nd edn. Hillsdale, NJ: Lawrence Erlbaum.
    • Manning, Christopher & Hinrich Schütze. 1999. Foundations of statistical natural language processing. Cambridge, MA: MIT Press.
    • Marcus, Mitchell. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics 19. 313-330.
    • Newmeyer, Frederick J. 2004. Against a parameter setting approach to typological variation. Linguistic Variation Year Book 4, 181-234. Amsterdam & Philadelphia: John Benjamins.
    • Newmeyer, Frederick J. 2006. A rejoinder to 'On the role of parameters in Universal Grammar: A reply to Newmeyer' by Ian Roberts & Anders Holmberg. Ms., Department of Linguistics, University of Washington, Seattle, WA.
    • Niyogi, Partha. 2006. The computational nature of language learning and evolution. Cambridge, MA: MIT Press.
    • Niyogi, Partha & Robert C. Berwick. 1996. A language learning model for finite parameter spaces. Cognition 61. 161-193.
    • Nowak, Martin A., Natalia L. Komarova & Partha Niyogi. 2002. Computational and evolutionary aspects of language. Nature 417. 611-617.
    • Pereira, Fernando. 2000. Formal grammar and information theory: Together again? Philosophical Transactions of the Royal Society, 1239-1253. London: Royal Society.
    • Perfors, Amy, Joshua B. Tenenbaum & Terry Regier. 2006. Poverty of the stimulus? A rational approach. 28th Annual Conference of the Cognitive Science Society, 663-668. Vancouver.
    • Poggio, Tomaso, Ryan Rifkin, Sayan Mukherjee & Partha Niyogi. 2004. General conditions for predictivity in learning theory. Nature 428. 419-422.
    • Postal, Paul M. 2004. Sceptical linguistic essays. Oxford: Oxford University Press.
    • Pullum, Geoffrey K. & Barbara Scholz. 2002. Empirical assessment of stimulus poverty arguments. The Linguistic Review 19. 9-50.
    • Roberts, Ian & Anders Holmberg. 2005. On the role of parameters in Universal Grammar: A reply to Newmeyer. In Hans Broekhuis, Norbert Corver, Riny Huybregts, Ursula Kleinhenz & Jan Koster (eds.), Organizing grammar: Linguistic studies in honor of Henk van Riemsdijk, 538-553. Berlin: Mouton de Gruyter.
    • Saxton, Matthew, Carmel Houston-Price & Natasha Dawson. 2005. The prompt hypothesis: Clarification requests as corrective input for grammatical errors. Applied Psycholinguistics 26. 393-414.
    • Scholz, Barbara & Geoffrey K. Pullum. 2002. Searching for arguments to support linguistic nativism. The Linguistic Review 19. 185-223.
    • Scholz, Barbara & Geoffrey K. Pullum. 2006. Irrational nativist exuberance. In Rob Stainton (ed.), Contemporary debates in cognitive science, 59-80. Oxford: Blackwell.
    • Schone, Patrick & Daniel Jurafsky. 2001. Knowledge-free induction of inflectional morphologies. The North American Chapter of the Association for Computational Linguistics (NAACL2001). Pittsburgh, PA.
    • Schütze, Hinrich. 1995. Distributional part-of-speech tagging. The European Chapter of the Association for Computational Linguistics (EACL 7), 141-148. Dublin.
    • Shannon, Claude E. 1948. A mathematical theory of communication. The Bell System Technical Journal 27. 379-423, 623-656.
    • Shinohara, Takeshi. 1994. Rich classes inferable from positive data: Length-bounded elementary formal systems. Information and Computation 108. 175-186.
    • Valiant, Leslie G. 1984. A theory of the learnable. The Sixteenth Annual ACM Symposium on Theory of Computing (STOC '84), 436-445. New York: ACM Press.
    • Vapnik, Vladimir. 1998. Statistical learning theory. New York: Wiley.
    • Vapnik, Vladimir & Alexey Chervonenkis. 1971. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications 16. 264-280.
