Fast relational learning using bottom clause propositionalization with artificial neural networks
França, M. V. M.; Zaverucha, G.; Garcez, A. S. D. (2014)
Publisher: Springer
Languages: English
Types: Article
Subjects: QA75
Relational learning can be described as the task of learning first-order logic rules from examples. It has enabled a number of new machine learning applications, e.g. graph mining and link analysis. Inductive Logic Programming (ILP) performs relational learning either directly, by manipulating first-order rules, or through propositionalization, which translates the relational task into an attribute-value learning task by representing subsets of relations as features. In this paper, we introduce a fast method and system for relational learning based on a novel propositionalization called Bottom Clause Propositionalization (BCP). Bottom clauses are boundaries in the hypothesis search space used by the ILP systems Progol and Aleph. Bottom clauses carry semantic meaning and can be mapped directly onto numerical vectors, simplifying the feature extraction process. We have integrated BCP with a well-known neural-symbolic system, C-IL2P, to perform learning from numerical vectors. C-IL2P uses background knowledge in the form of propositional logic programs to build a neural network. The integrated system, which we call CILP++, handles first-order logic knowledge and is available for download from SourceForge. We have evaluated CILP++ on seven ILP datasets, comparing results with Aleph and a well-known propositionalization method, RSD. The results show that CILP++ can achieve accuracy comparable to Aleph while being generally faster. BCP achieved a statistically significant improvement in accuracy in comparison with RSD when running with a neural network, but BCP and RSD perform similarly when running with C4.5. We have also extended CILP++ to include a statistical feature selection method, mRMR, with preliminary results indicating that a reduction of more than 90% of the features can be achieved with a small loss of accuracy.
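
To make BCP concrete, the sketch below (Python) shows one way bottom-clause body literals could be mapped onto binary vectors: every distinct literal observed across the examples' bottom clauses becomes one feature, and an example's vector holds a 1 wherever its own bottom clause contains that literal. This is a minimal illustration under assumed inputs (the clauses, literals, and function names are invented for the example), not the CILP++ implementation.

    # Minimal BCP-style encoding sketch; bottom clauses are assumed to be
    # given as sets of body literals (plain strings). Illustrative only.

    def build_feature_table(bottom_clauses):
        # Every distinct body literal across all examples becomes one feature.
        literals = sorted({lit for clause in bottom_clauses for lit in clause})
        return {lit: i for i, lit in enumerate(literals)}

    def encode(clause, table):
        # Map one bottom clause onto a binary feature vector.
        vec = [0] * len(table)
        for lit in clause:
            vec[table[lit]] = 1
        return vec

    # Hypothetical bottom clauses for a grandparent(A,C) target predicate:
    bc_pos = {"parent(A,B)", "parent(B,C)"}   # positive example
    bc_neg = {"parent(A,B)", "sibling(B,C)"}  # negative example
    table = build_feature_table([bc_pos, bc_neg])
    X = [encode(bc_pos, table), encode(bc_neg, table)]
    y = [1, 0]
    print(X)  # [[1, 1, 0], [1, 0, 1]]

Vectors produced this way can be fed to any attribute-value learner, such as the C-IL2P network used in CILP++. Because the literal vocabulary can grow very large, a feature selection step such as mRMR helps; the following greedy sketch scores each candidate feature by its mutual-information relevance to the label minus its average redundancy with already-selected features. It is again an assumption-laden illustration (the function name and parameters are invented), not the authors' extension.

    import numpy as np
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.metrics import mutual_info_score

    def mrmr_select(X, y, k):
        # Greedily pick k features maximizing relevance minus redundancy.
        X = np.asarray(X)
        relevance = mutual_info_classif(X, y, discrete_features=True)
        selected, remaining = [], list(range(X.shape[1]))
        while remaining and len(selected) < k:
            def score(j):
                if not selected:
                    return relevance[j]
                redundancy = np.mean([mutual_info_score(X[:, j], X[:, s])
                                      for s in selected])
                return relevance[j] - redundancy
            best = max(remaining, key=score)
            selected.append(best)
            remaining.remove(best)
        return selected  # e.g. mrmr_select(X, y, k=2) on the vectors above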
References:

    • Bain, M., & Muggleton, S. (1994). Learning optimal chess strategies. Machine Intelligence, 13: 291-309.
    • Basilio, R., Zaverucha, G., and Barbosa, V. (2001). Learning logic programs with neural networks. In Proc. ILP, LNAI 2157: 402-408. Springer.
    • Caruana, R., Lawrence, S., & Giles, C. L. (2000). Overfitting in neural nets: backpropagation, conjugate gradient, and early stopping. In Proc. NIPS, 13: 402-408. MIT Press.
    • Chawla, N. V., Bowyer, K. W., Hall, L. O., & Kegelmeyer, W. P. (2002). SMOTE: synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16 (1): 321-357.
    • Clark, P., & Niblett, T. (1989). The CN2 induction algorithm. Machine Learning, 3: 261-283.
    • Copelli, M., Eichhorn, R., Kinouchi, O., Biehl, M., Simonetti, R., Riegler, P., & Caticha, N. (1997). Noise robustness in multilayer neural networks. EPL (Europhysics Letters), 37 (6): 427-432.
    • Craven, M., & Shavlik, J. W. (1995). Extracting tree-structured representations of trained networks. In Proc. NIPS, 9: 24-30. Cambridge, MA, USA: The MIT Press.
    • Davis, J., Burnside, E. S., Dutra, I. C., Page, D., & Costa, V. S. (2005). An integrated approach to learning Bayesian networks of rules. In Proc. ECML, LNAI 3720: 84-95. Berlin-Heidelberg, Germany: Springer.
    • De Raedt, L. (2008). Logical and relational learning. Berlin-Heidelberg, Germany: Springer.
    • De Raedt, L., Frasconi, P., Kersting, K., & Muggleton, S. (2008). Probabilistic inductive logic programming. LNAI 4911. Berlin-Heidelberg, Germany: Springer.
    • DiMaio, F., & Shavlik, J. W. (2004). Learning an approximation to inductive logic programming clause evaluation. In Proc. ILP, LNAI 3194: 80-97. Springer.
    • Ding, C., & Peng, H. (2005). Minimum redundancy feature selection from microarray gene expression data. Journal of Bioinformatics and Computational Biology, 3 (2): 185-205.
    • Džeroski, S., & Lavrač, N. (2001). Relational data mining. Berlin-Heidelberg, Germany: Springer.
    • Garcez, A. S. D., & Zaverucha, G. (2012). Multi-instance learning using recurrent neural networks. In Proc. IJCNN. 1-6. IEEE.
    • Garcez, A. S. D., & Zaverucha, G. (1999). The connectionist inductive learning and logic programming system. Applied Intelligence, 11: 59-77.
    • Garcez, A. S. D., Broda, K., & Gabbay, D. M. (2001). Symbolic knowledge extraction from trained neural networks: a sound approach. Artificial Intelligence, 125 (1-2): 155-207.
    • Garcez, A. S. D., Lamb, L. C., & Gabbay, D. M. (2008). Neural-symbolic cognitive reasoning. Berlin-Heidelberg, Germany: Springer.
    • Garcez, A. S. D., Broda, K. B., & Gabbay, D. M. (2002). Neural-symbolic learning systems. Berlin-Heidelberg, Germany: Springer.
    • Getoor, L., & Taskar, B. (2007). Introduction to statistical relational learning. Cambridge, MA, USA: The MIT Press.
    • Guillame-Bert, M., Broda, K., & Garcez, A. S. D. (2010). First-order logic learning in artificial neural networks. In Proc. IJCNN. 1-8. IEEE.
    • Guyon, I., & Elisseeff, A. (2003). An introduction to variable and feature selection. Journal of Machine Learning Research, 3: 1157-1182.
    • Haykin, S. S. (2009). Neural networks and learning machines. Upper Saddle River, NJ, USA: Prentice Hall.
    • Jacobs, R. A. (1988). Increased rates of convergence through learning rate adaptation. Neural Networks, 1 (4): 295-307.
    • Kijsirikul, B., & Lerdlamnaochai, B. K. (2005). First-order logical neural networks. International Journal of Hybrid Intelligent Systems, 2 (4): 253-267.
    • King, R. D., & Srinivasan, A. (1995). Relating chemical activity to structure: An examination of ILP successes. New Generation Computing, 13 (3-4): 411-434.
    • King, R. D., Whelan, K. E., Jones, F. M., Reiser, F. G. K., Bryant, C. H., Muggleton, S. H., Kell, D. B., & Oliver, S. G. (2004). Functional genomic hypothesis generation and experimentation by a robot scientist. Nature, 427 (6971): 247-252.
    • Koller, D., & Friedman, N. (2009). Probabilistic graphical models: principles and techniques. Cambridge, MA, USA: The MIT Press.
    • Kramer, S., Lavrač, N., & Flach, P. (2001). Propositionalization approaches to relational data mining. In S. Džeroski (Ed.), Relational Data Mining (pp. 262-291). New York, NY, USA: Springer.
    • Krogel, M. A., Rawles, S., Železný, F., Flach, P., Lavrač, N., & Wrobel, S. (2003). Comparative evaluation of approaches to propositionalization. In Proc. ILP, LNAI 2835: 197-214. Springer.
    • Krogel, M. A., & Wrobel, S. (2003). Facets of aggregation approaches to propositionalization. In Proc. ILP, LNAI 2835: 30-39. Springer.
    • Kuželka, O., & Železný, F. (2011). Block-wise construction of tree-like relational features with monotone reducibility and redundancy. Machine Learning, 83: 163-192.
    • Landwehr, N., Kersting, K., & De Raedt, L. D. (2007). Integrating naive Bayes and FOIL. Journal of Machine Learning Research, 8: 481-507.
    • Lavrač, N., & Džeroski, S. (1994). Inductive logic programming: techniques and applications. Chichester, UK: E. Horwood.
    • May, R., Dandy, G., & Maier, H. (2011). Review of input variable selection methods for artificial neural networks. In K. Suzuki (Ed.), Artificial Neural Networks - Methodological Advances and Biomedical Applications (pp. 19-44). InTech, doi:10.5772/16004.
    • Møller, M. F. (1993). A scaled conjugate gradient algorithm for fast supervised learning. Neural Networks, 6 (4): 525-533.
    • Muggleton, S. (1995). Inverse entailment and Progol. New Generation Computing, 13 (3-4): 245-286.
    • Muggleton, S., & De Raedt, L. D. (1994). Inductive logic programming: theory and methods. Journal of Logic Programming, 19/20: 629-679.
    • Muggleton, S., Paes, A., Costa, V. S., & Zaverucha, G. (2010). Chess revision: Acquiring the rules of chess variants through FOL theory revision from examples. In Proc. ILP, LNAI 5989: 123-130. Springer.
    • Muggleton, S., & Tamaddoni-Nezhad, A. (2008). QG/GA: a stochastic search for Progol. Machine Learning, 70: 121-133.
    • Nienhuys-Cheng, S. H., & de Wolf, R. (1997). Foundations of inductive logic programming. LNAI 1228. Berlin-Heidelberg, Germany: Springer.
    • Paes, A., Revoredo, K., Zaverucha, G., & Costa, V. S. (2005). Probabilistic first-order theory revision from examples. In Proc. ILP, LNAI 3625: 295-311. Springer.
    • Paes, A., Zaverucha, G., & Costa, V. S. (2008). Revising first-order logic theories from examples through stochastic local search. In Proc. ILP, LNAI 4894: 200-210. Springer.
    • Paes, A., Železný, F., Zaverucha, G., Page, D., & Srinivasan, A. (2007). ILP through propositionalization and stochastic k-term DNF learning. In Proc. ILP, LNAI 4455: 379-393. Springer.
    • Perlich, C., & Merugu, S. (2005). Gene classification: issues and challenges for relational learning. In Proc. 4th International Workshop on Multi-Relational Mining, 61-67. ACM.
    • Pitangui, C. G., & Zaverucha, G. (2012). Learning theories using estimation distribution algorithms and (reduced) bottom clauses. In Proc. ILP, LNAI 7207: 286-301. Springer.
    • Prechelt, L. (1997). Early stopping - but when? In Neural networks: tricks of the trade, LNAI 1524 (2): 55-69. Springer.
    • Quinlan, J. R. (1993). C4.5: programs for machine learning. San Francisco, CA, USA: Morgan Kaufmann.
    • Richardson, M., & Domingos, P. (2006). Markov logic networks. Machine Learning, 62: 107-136.
    • Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning internal representations by error propagation. In D. E. Rumelhart, & J. L. McClelland (Eds.), Parallel distributed processing: explorations in the microstructure of cognition (pp. 318-362). Cambridge, MA, USA: MIT Press.
    • Rumelhart, D. E., Widrow, B., & Lehr, M. A. (1994). The basic ideas in neural networks. Communications of the ACM, 37 (3): 87-92.
    • Srinivasan, A. (2007). The Aleph System, version 5. http://www.cs.ox.ac.uk/activities/machlearn/Aleph/aleph.html. Accessed 27 March 2013.
    • Srinivasan, A., & Muggleton, S. H. (1994). Mutagenesis: ILP experiments in a non-determinate biological domain. In Proc. ILP, LNAI 237: 217-232.
    • Tamaddoni-Nezhad, A., & Muggleton, S. (2009). The lattice structure and refinement operators for the hypothesis space bounded by a bottom clause. Machine Learning, 76 (1): 37-72.
    • Uwents, W., Monfardini, G., Blockeel, H., Gori, M., & Scarselli, F. (2011). Neural networks for relational learning: an experimental comparison. Machine Learning, 82 (3): 315-349.
    • Železný, F., & Lavrač, N. (2006). Propositionalization-based relational subgroup discovery with RSD. Machine Learning, 62: 33-63.