Cao, Y.; Rockett, P.I. (2015)
Publisher: Elsevier
Languages: English
Types: Article
We propose the use of Vapnik's vicinal risk minimization (VRM) for training decision trees to approximately maximize decision margins. We implement VRM by propagating uncertainties in the input attributes into the labeling decisions, thereby performing a global regularization over the decision tree structure. During training, a decision tree is constructed to minimize the total probability of misclassifying the labeled training examples, a process that approximately maximizes the margins of the resulting classifier. We perform the necessary minimization using an appropriate meta-heuristic (genetic programming). Over a range of synthetic and benchmark real datasets, we demonstrate the statistical superiority of VRM training over both conventional empirical risk minimization (ERM) and the well-known C4.5 algorithm, whereas we find no statistical difference between trees trained by ERM and those produced by C4.5. Training with VRM is also shown to be more stable and repeatable than training by ERM.
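
The central quantity in the abstract, the total probability of misclassifying the training examples once attribute uncertainty is propagated through the tree, can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes axis-parallel threshold splits, an independent Gaussian vicinity of width sigma[j] around each attribute, and hypothetical names (`p_misclassify`, `vicinal_risk`). The genetic-programming search over tree structures that the abstract mentions would use `vicinal_risk` as its fitness function.

```python
import math
from dataclasses import dataclass

@dataclass
class Leaf:
    label: int          # class predicted at this leaf

@dataclass
class Split:
    attr: int           # index of the attribute tested
    threshold: float    # route left when x[attr] < threshold
    left: object        # Leaf or Split
    right: object       # Leaf or Split

def norm_cdf(z: float) -> float:
    """CDF of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_misclassify(node, x, sigma, label) -> float:
    """Probability that a noisy copy of x, with attribute j drawn from
    N(x[j], sigma[j]^2), reaches a leaf whose label differs from `label`."""
    if isinstance(node, Leaf):
        return 0.0 if node.label == label else 1.0
    # Chance the perturbed attribute falls below the split threshold.
    p_left = norm_cdf((node.threshold - x[node.attr]) / sigma[node.attr])
    return (p_left * p_misclassify(node.left, x, sigma, label)
            + (1.0 - p_left) * p_misclassify(node.right, x, sigma, label))

def vicinal_risk(tree, X, y, sigma) -> float:
    """Total probability of misclassifying the labeled training set:
    the objective a search over tree structures would minimize."""
    return sum(p_misclassify(tree, xi, sigma, yi) for xi, yi in zip(X, y))

# A one-split tree on a 1-D problem: an example near the threshold
# contributes almost 0.5 to the risk, so minimizing the risk pushes the
# split away from the data -- the margin-maximizing effect described above.
tree = Split(attr=0, threshold=0.5, left=Leaf(0), right=Leaf(1))
X, y = [[0.2], [0.8], [0.45]], [0, 1, 1]
print(vicinal_risk(tree, X, y, sigma=[0.1]))  # ~0.69, dominated by x = 0.45
```

Under ERM, by contrast, the example at 0.45 would simply count as correct or incorrect, so two trees with very different margins could score identically; in this sketch it is the Gaussian vicinity that lets the objective distinguish between them.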
