Goldberg, Paul W (2001)
Publisher: Elsevier BV
Journal: Information and Computation
Languages: English
Types: Article
Subjects: Theoretical Computer Science, Computational Theory and Mathematics, Computer Science Applications, QA76, Information Systems
We investigate PAC-learning in a situation in which examples (consisting of an input vector and a 0/1 label) have some of the components of the input vector concealed from the learner. This is a special case of Restricted Focus of Attention (RFA) learning. Our interest here is in 1-RFA learning, where only a single component of an input vector is given for each example. We argue that 1-RFA learning merits special consideration within the wider field of RFA learning. It is the most restrictive form of RFA learning (so that positive results apply in general), and it models a typical "data fusion" scenario, where we have sets of observations from a number of separate sensors, but the sensors are uncorrelated sources. Within this setting we study the well-known class of linear threshold functions, the characteristic functions of Euclidean half-spaces. The sample complexity (i.e. the sample-size requirement as a function of the parameters) of this learning problem is affected by the input distribution. We show that the sample complexity is always finite for any given input distribution, but we also exhibit methods for defining "bad" input distributions for which the sample complexity can grow arbitrarily fast. We identify fairly general sufficient conditions for an input distribution to give rise to sample complexity that is polynomial in the PAC parameters ε⁻¹ and δ⁻¹. We give an algorithm (using an empirical ε-cover) whose sample complexity is polynomial in these parameters and the dimension (number of inputs), for input distributions that satisfy our conditions. The runtime is polynomial in ε⁻¹ and δ⁻¹ provided that the dimension is constant. We show how to adapt the algorithm to handle uniform misclassification noise.
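
As a rough illustration of the 1-RFA setting described in the abstract, the following minimal Python sketch builds a 1-RFA oracle that reveals only one coordinate of each labeled example, and a naive learner that scores a random pool of candidate halfspaces against the per-coordinate label statistics of the fragmented sample. This scoring-against-a-candidate-pool step is only a crude stand-in for the paper's empirical ε-cover construction; the Gaussian input distribution, the ability to simulate it, the bin edges, and all function names are illustrative assumptions, not the paper's actual algorithm.

    import numpy as np

    rng = np.random.default_rng(0)
    EDGES = np.linspace(-3.0, 3.0, 11)  # value bins for per-coordinate statistics

    def one_rfa_sample(w, theta, m, dim):
        """1-RFA oracle: label full inputs with the target halfspace
        w.x >= theta, but reveal only one coordinate per example."""
        X = rng.standard_normal((m, dim))    # assumed input distribution
        y = (X @ w >= theta).astype(int)     # target linear threshold function
        i = rng.integers(0, dim, size=m)     # the single visible coordinate
        return i, X[np.arange(m), i], y

    def fragment_stats(i, v, y, dim):
        """Empirical P(label = 1 | coordinate, value bin) from 1-RFA triples."""
        nb = len(EDGES) - 1
        stats = np.full((dim, nb), 0.5)      # prior for empty bins
        b = np.clip(np.digitize(v, EDGES) - 1, 0, nb - 1)
        for c in range(dim):
            for j in range(nb):
                sel = (i == c) & (b == j)
                if sel.any():
                    stats[c, j] = y[sel].mean()
        return stats

    def learn(i, v, y, dim, n_candidates=200, m_sim=4000):
        """Return the candidate halfspace whose simulated 1-RFA statistics
        best match the observed ones (a stand-in for selecting from an
        empirical epsilon-cover of the hypothesis class)."""
        target = fragment_stats(i, v, y, dim)
        best, best_err = None, np.inf
        for _ in range(n_candidates):
            w, theta = rng.standard_normal(dim), rng.standard_normal()
            ci, cv, cy = one_rfa_sample(w, theta, m_sim, dim)
            err = np.abs(fragment_stats(ci, cv, cy, dim) - target).mean()
            if err < best_err:
                best, best_err = (w, theta), err
        return best

    # Usage: learn a 3-dimensional halfspace from fragmented examples.
    w_true, theta_true = np.array([1.0, -2.0, 0.5]), 0.3
    i, v, y = one_rfa_sample(w_true, theta_true, 20000, dim=3)
    w_hat, theta_hat = learn(i, v, y, dim=3)

Note the key feature of the setting: the learner never sees a full input vector, only (coordinate index, value, label) triples, so it can compare hypotheses only through the marginal statistics each coordinate induces on the labels.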