Jun 30, 2018
by Qingqing Huang; Rong Ge; Sham Kakade; Munther Dahleh
Consider a stationary discrete random process with alphabet size d, which is assumed to be the output process of an unknown stationary Hidden Markov Model (HMM). Given the joint probabilities of finite-length strings of the process, we are interested in finding a finite-state generative model to describe the entire process. In particular, we focus on two classes of models: HMMs and quasi-HMMs, a strictly larger class of models that contains HMMs. In the main theorem, we show that if the...
Topics: Computing Research Repository, Learning
Source: http://arxiv.org/abs/1411.3698
Jun 29, 2018
by Sham Kakade; Percy Liang; Vatsal Sharan; Gregory Valiant
We consider the problem of predicting the next observation given a sequence of past observations. We show that for any distribution over observations, if the mutual information between past observations and future observations is upper bounded by $I$, then a simple Markov model over the most recent $I/\epsilon$ observations obtains expected KL error $\epsilon$--and hence $\ell_1$ error $\sqrt{\epsilon}$--with respect to the optimal predictor that has access to the entire past. For a Hidden...
Topics: Machine Learning, Artificial Intelligence, Computational Complexity, Statistics, Learning,...
Source: http://arxiv.org/abs/1612.02526
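The window-based predictor in the abstract above can be illustrated with a small sketch: a k-th order Markov model that conditions only on the most recent k observations. The function names (`fit_markov`, `predict_next`) and the toy alphabet are invented for this example; the paper's result concerns how large k must be relative to the mutual information bound $I$, which this sketch does not reproduce.

```python
from collections import Counter, defaultdict

def fit_markov(seq, k):
    """Count next-symbol frequencies conditioned on the last k observations
    (a minimal sketch of a window-based Markov predictor)."""
    counts = defaultdict(Counter)
    for i in range(k, len(seq)):
        counts[tuple(seq[i - k:i])][seq[i]] += 1
    return counts

def predict_next(counts, context):
    """Empirical distribution over the next symbol given the most recent
    k observations; empty dict if the context was never seen."""
    c = counts.get(tuple(context), Counter())
    total = sum(c.values())
    return {s: n / total for s, n in c.items()} if total else {}

# Toy usage: on a perfectly alternating sequence, a window of k=1 suffices.
model = fit_markov(list("abababab"), k=1)
print(predict_next(model, ["a"]))  # {'b': 1.0}
```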
Jun 30, 2018
by Aravind Rajeswaran; Kendall Lowrey; Emanuel Todorov; Sham Kakade
This work shows that policies with simple linear and RBF parameterizations can be trained to solve a variety of continuous control tasks, including the OpenAI gym benchmarks. The performance of these trained policies is competitive with state-of-the-art results obtained with more elaborate parameterizations such as fully connected neural networks. Furthermore, existing training and testing scenarios are shown to be very limited and prone to over-fitting, thus giving rise to only...
Topics: Learning, Systems and Control, Artificial Intelligence, Computing Research Repository, Robotics
Source: http://arxiv.org/abs/1703.02660
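The "simple linear parameterization" the abstract refers to is just an affine map from observations to actions. The sketch below shows such a policy class; the class name and dimensions are invented for illustration, and the training procedure the paper actually uses (natural policy gradient on gym environments) is not reproduced here.

```python
import numpy as np

class LinearPolicy:
    """Minimal linear policy a = W s + b: the kind of simple
    parameterization the paper argues can be competitive with
    fully connected neural networks on continuous control tasks."""

    def __init__(self, obs_dim, act_dim, rng):
        # Small random initialization of the weight matrix; zero bias.
        self.W = 0.1 * rng.standard_normal((act_dim, obs_dim))
        self.b = np.zeros(act_dim)

    def act(self, obs):
        return self.W @ obs + self.b

# Toy usage with hypothetical dimensions (e.g. a 4-dim observation,
# 2-dim action space).
rng = np.random.default_rng(0)
pi = LinearPolicy(obs_dim=4, act_dim=2, rng=rng)
action = pi.act(np.ones(4))
print(action.shape)  # (2,)
```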
Sep 21, 2013
by Sham Kakade; Adam Tauman Kalai; Varun Kanade; Ohad Shamir
Generalized Linear Models (GLMs) and Single Index Models (SIMs) provide powerful generalizations of linear regression, where the target variable is assumed to be a (possibly unknown) 1-dimensional function of a linear predictor. In general, these problems entail non-convex estimation procedures, and, in practice, iterative local search heuristics are often used. Kalai and Sastry (2009) recently provided the first provably efficient method for learning SIMs and GLMs, under the assumptions that...
Source: http://arxiv.org/abs/1104.2018v1
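For a GLM with a known monotone link function u, a simple perceptron-like update drives the residuals through the link toward zero. The sketch below shows an update of this flavor; the function name `glmtron_step`, the sigmoid link, the noiseless synthetic data, and the fixed iteration count are all illustrative choices, not the paper's exact algorithm or guarantees.

```python
import numpy as np

def glmtron_step(w, X, y, u):
    """One averaged perceptron-like update for a GLM with known
    monotone link u: move w along the data weighted by the
    prediction residuals through the link."""
    residual = y - u(X @ w)
    return w + X.T @ residual / len(y)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Noiseless synthetic data for illustration: labels are exactly
# the link applied to a linear predictor.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = sigmoid(X @ w_true)

w = np.zeros(3)
for _ in range(500):
    w = glmtron_step(w, X, y, sigmoid)

err = np.mean((sigmoid(X @ w) - y) ** 2)
```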
Jun 29, 2018
by John Thickstun; Zaid Harchaoui; Sham Kakade
This paper introduces a new large-scale music dataset, MusicNet, to serve as a source of supervision and evaluation of machine learning methods for music research. MusicNet consists of hundreds of freely-licensed classical music recordings by 10 composers, written for 11 instruments, together with instrument/note annotations resulting in over 1 million temporal labels on 34 hours of chamber music performances under various studio and microphone conditions. The paper defines a multi-label...
Topics: Machine Learning, Learning, Sound, Computing Research Repository, Statistics
Source: http://arxiv.org/abs/1611.09827
Jun 26, 2018
by David Belanger; Sham Kakade
Low dimensional representations of words allow accurate NLP models to be trained on limited annotated data. While most representations ignore words' local context, a natural way to induce context-dependent representations is to perform inference in a probabilistic latent-variable sequence model. Given the recent success of continuous vector space word representations, we provide such an inference procedure for continuous states, where words' representations are given by the posterior mean of a...
Topics: Machine Learning, Learning, Statistics, Computation and Language, Computing Research Repository
Source: http://arxiv.org/abs/1502.04081
Sep 17, 2013
by Alex Strehl; John Langford; Sham Kakade; Lihong Li
We provide a sound and consistent foundation for the use of \emph{nonrandom} exploration data in "contextual bandit" or "partially labeled" settings where only the value of a chosen action is learned. The primary challenge in a variety of settings is that the exploration policy under which the "offline" data is logged is not explicitly known. Prior solutions here require either control of the actions during the learning process, recorded random exploration, or actions...
Source: http://arxiv.org/abs/1003.0120v2
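The baseline this abstract contrasts with is the classic inverse-propensity estimator, which needs the logging probabilities that the paper's setting does not assume are known. A minimal sketch of that baseline (the function name and the toy log format `(context, action, reward, propensity)` are invented for illustration):

```python
def ips_value(logged, pi_target):
    """Inverse-propensity estimate of a target policy's value from
    logged bandit data. Each record is (context, action, reward,
    logging probability). This requires known propensities -- exactly
    the assumption the paper works to remove."""
    total = 0.0
    for x, a, r, p in logged:
        # Reweight rewards for actions the target policy would also take.
        total += r * (pi_target(x) == a) / p
    return total / len(logged)

# Toy log: two contexts, actions chosen uniformly (propensity 0.5),
# reward 1 only for action 1.
logged = [(0, 1, 1.0, 0.5), (0, 0, 0.0, 0.5),
          (1, 1, 1.0, 0.5), (1, 0, 0.0, 0.5)]
pi_target = lambda x: 1  # deterministic policy: always action 1
print(ips_value(logged, pi_target))  # 1.0
```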
Sep 23, 2013
by Dean Foster; Sham Kakade; Ruslan Salakhutdinov
We study the prevalent problem in which a test distribution differs from the training distribution. We consider a setting where our training set consists of a small number of sample domains, but where we have many samples in each domain. Our goal is to generalize to a new domain. For example, we may want to learn a similarity function using only certain classes of objects, but we desire that this similarity function be applicable to object classes not present in our training sample (e.g. we might...
Source: http://arxiv.org/abs/1105.0857v1
Jun 27, 2018
by Kamalika Chaudhuri; Sham Kakade; Praneeth Netrapalli; Sujay Sanghavi
An active learner is given a class of models, a large set of unlabeled examples, and the ability to interactively query labels of a subset of these examples; the goal of the learner is to learn a model in the class that fits the data well. Previous theoretical work has rigorously characterized label complexity of active learning, but most of this work has focused on the PAC or the agnostic PAC model. In this paper, we shift our attention to a more general setting -- maximum likelihood...
Topics: Statistics, Computing Research Repository, Learning, Machine Learning
Source: http://arxiv.org/abs/1506.02348
Sep 23, 2013
by Elad Hazan; Sham Kakade
We show that the existence of a computationally efficient calibration algorithm, with a low weak calibration rate, would imply the existence of an efficient algorithm for computing approximate Nash equilibria - thus implying the unlikely conclusion that every problem in PPAD is solvable in polynomial time.
Source: http://arxiv.org/abs/1202.4478v1