Zhou, Deyu; He, Yulan (2014)
Publisher: Hindawi Publishing Corporation
Journal: The Scientific World Journal
Languages: English
Types: Article
Subjects: Research Article, Science (General), Q1-390, Article Subject
The goal of natural language understanding is to specify a computational model that maps sentences to their semantic meaning representations. In this paper, we propose a novel framework for training statistical models without using expensive fully annotated data. In particular, the input of our framework is a set of sentences labeled with abstract semantic annotations. These annotations encode the underlying embedded semantic structural relations without explicit word/semantic-tag alignment. The proposed framework can automatically induce derivation rules that map sentences to their semantic meaning representations. The learning framework is applied to two statistical models: conditional random fields (CRFs) and hidden Markov support vector machines (HM-SVMs). Our experimental results on the DARPA communicator data show that both CRFs and HM-SVMs outperform the baseline approach, the previously proposed hidden vector state (HVS) model, which is also trained on abstract semantic annotations. In addition, the proposed framework shows superior performance to two other baseline approaches, a hybrid framework combining HVS and HM-SVMs and discriminative training of HVS, achieving relative error reductions of about 25% and 15% in F-measure, respectively.
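The sequence-labelling view of semantic parsing used by models such as CRFs and HM-SVMs can be sketched as follows: each word is assigned a flattened semantic tag, and Viterbi decoding recovers the highest-scoring tag sequence. The sketch below is illustrative only; the tag names (`FROMLOC.CITY`, `TOLOC.CITY`) and all scores are invented for this example, whereas in the paper's framework they would be learned from abstract semantic annotations.

```python
# Toy Viterbi decoder over flattened semantic tags, in the style of
# CRF/HM-SVM semantic tagging. Scores are hand-set for illustration;
# a trained model would learn emission and transition weights.

def viterbi(words, tags, emission, transition):
    """Return the best tag sequence for `words` under log-linear scores."""
    # best[i][t]: score of the best tag sequence for words[:i+1] ending in t
    best = [{t: emission.get((words[0], t), -10.0) for t in tags}]
    back = [{}]
    for i in range(1, len(words)):
        best.append({})
        back.append({})
        for t in tags:
            e = emission.get((words[i], t), -10.0)
            prev, score = max(
                ((p, best[i - 1][p] + transition.get((p, t), -2.0) + e)
                 for p in tags),
                key=lambda x: x[1],
            )
            best[i][t] = score
            back[i][t] = prev
    # Trace the best path backwards from the highest-scoring final tag
    last = max(best[-1], key=best[-1].get)
    path = [last]
    for i in range(len(words) - 1, 0, -1):
        path.append(back[i][path[-1]])
    return list(reversed(path))

tags = ["O", "FROMLOC.CITY", "TOLOC.CITY"]
emission = {
    ("flights", "O"): 0.0, ("from", "O"): 0.0, ("to", "O"): 0.0,
    ("boston", "FROMLOC.CITY"): 1.0, ("boston", "TOLOC.CITY"): 0.5,
    ("denver", "TOLOC.CITY"): 1.0, ("denver", "FROMLOC.CITY"): 0.5,
}
transition = {("O", "FROMLOC.CITY"): 0.5, ("O", "TOLOC.CITY"): 0.5,
              ("FROMLOC.CITY", "O"): 0.5, ("TOLOC.CITY", "O"): 0.5}

print(viterbi("flights from boston to denver".split(), tags, emission, transition))
# → ['O', 'O', 'FROMLOC.CITY', 'O', 'TOLOC.CITY']
```

Note that the decoder itself is model-agnostic: the same dynamic program serves a CRF or an HM-SVM, which differ only in how the emission and transition scores are estimated.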
