Efficiency and computational limitations of learning algorithms
By Vitaly Feldman
"Efficiency and computational limitations of learning algorithms" was published in 2007 - 2007, it has 148 pages and the language of the book is English.
“Efficiency and computational limitations of learning algorithms” Metadata:
- Title: ➤ Efficiency and computational limitations of learning algorithms
- Author: Vitaly Feldman
- Language: English
- Number of Pages: 148
- Publish Date: 2007
Edition Specifications:
- Pagination: ix, 148 leaves
Edition Identifiers:
- The Open Library Edition ID: OL56921434M
- The Open Library Work ID: OL41917370W
- Online Computer Library Center (OCLC) ID: 232370455
Description of “Efficiency and computational limitations of learning algorithms” (from The Open Library):
This thesis presents new positive and negative results concerning the learnability of several well-studied function classes in the Probably Approximately Correct (PAC) model of learning. Learning Disjunctive Normal Form (DNF) expressions in the PAC model is widely considered to be the main open problem in Computational Learning Theory. We prove that PAC learning of DNF expressions by an algorithm that produces DNF expressions as its hypotheses is NP-hard. We show that the learning problem remains NP-hard even if the learning algorithm can ask membership queries. We also prove that with an additional restriction on the size of hypotheses, learning remains NP-hard even with respect to the uniform distribution. These last two negative results are the first for learning in the PAC model with membership queries that are not based on cryptographic assumptions.
We complement the hardness results above by presenting a new algorithm for learning DNF expressions with respect to the uniform distribution using membership queries. Our algorithm is attribute-efficient, noise-tolerant, and uses membership queries in a non-adaptive way. In terms of running time, it substantially improves on the best previously known algorithm of Bshouty et al.
Learning parities with random noise with respect to the uniform distribution is a famous open problem in learning theory and is also equivalent to a major open problem in coding theory. We show that an efficient algorithm for this problem would imply efficient algorithms for several other key learning problems with respect to the uniform distribution. In particular, we show that agnostic learning of parities (also referred to as learning with adversarial noise) reduces to learning parities with random classification noise. Together with the parity learning algorithm of Blum et al., this gives the first non-trivial algorithm for agnostic learning of parities.
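As illustrative background (not code from the thesis): without noise, learning a parity is easy, which is why the noisy variant discussed above is the hard open problem. A parity over a hidden set S satisfies a linear equation over GF(2) for every example, so S can be recovered by Gaussian elimination. A minimal sketch, with all names and the demo data hypothetical:

```python
import random

def learn_parity_noiseless(examples, n):
    """Recover a hidden parity set S from noiseless examples (x, chi_S(x)).
    Each example (x, label) gives one linear equation over GF(2):
    sum_{i in S} x_i = label (mod 2). Solve by Gaussian elimination."""
    # Augmented rows: n coefficient bits followed by 1 label bit.
    rows = [list(x) + [label] for x, label in examples]
    pivots = []  # pivot column of each pivot row
    r = 0
    for col in range(n):
        # Find a row at or below r with a 1 in this column.
        pivot = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        # Eliminate this column from every other row (XOR = addition mod 2).
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[r])]
        pivots.append(col)
        r += 1
    # Free variables default to 0; pivot variables equal their row's label bit.
    s = [0] * n
    for row_idx, col in enumerate(pivots):
        s[col] = rows[row_idx][n]
    return s  # indicator vector of a set consistent with all examples

# Demo with a hypothetical hidden parity over variables {0, 2, 3} of n = 5 bits.
random.seed(0)
n, S = 5, [1, 0, 1, 1, 0]
examples = []
for _ in range(20):
    x = [random.randint(0, 1) for _ in range(n)]
    examples.append((x, sum(a * b for a, b in zip(x, S)) % 2))
print(learn_parity_noiseless(examples, n))
```

With random classification noise, some equations are wrong and elimination breaks down; the best known algorithm (Blum et al.) runs in slightly subexponential time, which is what makes the reductions to this problem significant.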
This reduction also implies that learning of DNF expressions reduces to learning noisy parities on just a logarithmic number of variables. A monomial is a conjunction of (possibly negated) Boolean variables and is one of the simplest and most fundamental concept classes. We show that even weak agnostic learning of monomials by an algorithm that outputs a monomial is NP-hard, resolving a basic open problem in the agnostic model. The proposed solutions rely heavily on tools from computational complexity and yield solutions to a number of problems outside of learning theory. Our hardness results are based on developing novel reductions from interactive proof systems for NP and from known NP-hard approximation problems. The reductions and learning algorithms with respect to the uniform distribution are based on new techniques for manipulating the Fourier transform of a Boolean function.
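To make the final sentence concrete (again as background, not the thesis's own code): the Fourier transform of a Boolean function f: {0,1}^n → {-1,+1} assigns to each subset S the coefficient f̂(S) = E_x[f(x)·χ_S(x)], where χ_S(x) = (-1)^(Σ_{i∈S} x_i). A brute-force sketch, with the function names hypothetical:

```python
from itertools import product

def fourier_coefficients(f, n):
    """All Fourier coefficients of f: {0,1}^n -> {-1,+1},
    f_hat(S) = E_x[f(x) * chi_S(x)], chi_S(x) = (-1)^(sum_{i in S} x_i).
    Brute force over all 2^n inputs and all 2^n subsets S (small n only)."""
    inputs = list(product([0, 1], repeat=n))
    coeffs = {}
    for S in product([0, 1], repeat=n):  # S as an indicator vector
        total = 0
        for x in inputs:
            chi = (-1) ** sum(a * b for a, b in zip(x, S))
            total += f(x) * chi
        coeffs[S] = total / len(inputs)
    return coeffs

# Example: AND of 2 variables, encoded as +1 (false) / -1 (true).
f = lambda x: -1 if (x[0] and x[1]) else 1
c = fourier_coefficients(f, 2)
# Parseval's identity: the squared coefficients sum to E[f^2] = 1.
print(sum(v * v for v in c.values()))  # 1.0
```

Uniform-distribution learning algorithms of the kind described above typically work by estimating the large coefficients in this expansion from examples or membership queries, rather than by the exhaustive computation shown here.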