id | submitter | title | categories | abstract | labels | domain |
---|---|---|---|---|---|---|
1805.09294
|
Jan-Matthis Lueckmann
|
Likelihood-free inference with emulator networks
|
stat.ML cs.LG
|
Approximate Bayesian Computation (ABC) provides methods for Bayesian
inference in simulation-based stochastic models which do not permit tractable
likelihoods. We present a new ABC method which uses probabilistic neural
emulator networks to learn synthetic likelihoods on simulated data -- both
local emulators, which approximate the likelihood for specific observed data,
and global ones, which are applicable to a range of data. Simulations are
chosen adaptively using an acquisition function which takes into account
uncertainty about either the posterior distribution of interest, or the
parameters of the emulator. Our approach does not rely on user-defined
rejection thresholds or distance functions. We illustrate inference with
emulator networks on synthetic examples and on a biophysical neuron model, and
show that emulators allow accurate and efficient inference even on
high-dimensional problems which are challenging for conventional ABC
approaches.
|
Machine Learning, Machine Learning
|
Statistics
|
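To make the synthetic-likelihood idea above concrete, here is a minimal sketch in the classical flavour of Wood-style synthetic likelihood: fit a Gaussian to summary statistics simulated at each parameter value and use its density as a surrogate likelihood on a grid. The toy simulator, summaries, and parameter grid are illustrative assumptions; the paper itself replaces the per-parameter Gaussian fits with learned neural emulators and adaptive acquisition.

```python
# Minimal synthetic-likelihood sketch (toy assumptions; not the paper's emulator networks).
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, n=50):
    # Toy stochastic simulator: n observations ~ N(theta, 1).
    return rng.normal(theta, 1.0, size=n)

def summary(x):
    # Summary statistics: sample mean and standard deviation.
    return np.array([x.mean(), x.std()])

s_obs = summary(simulator(1.5))           # pretend this came from real observed data

thetas = np.linspace(-1.0, 4.0, 81)       # grid over the parameter
log_like = np.empty_like(thetas)
for i, th in enumerate(thetas):
    # Fit a Gaussian to summaries simulated at theta -> synthetic likelihood.
    S = np.array([summary(simulator(th)) for _ in range(200)])
    mu, cov = S.mean(axis=0), np.cov(S.T) + 1e-6 * np.eye(2)
    diff = s_obs - mu
    log_like[i] = -0.5 * (diff @ np.linalg.solve(cov, diff)
                          + np.log(np.linalg.det(cov)))

post = np.exp(log_like - log_like.max())  # flat prior on the grid
post /= post.sum()
print("posterior mean of theta:", (thetas * post).sum())
```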
2201.12973
|
Alireza Doostan
|
GenMod: A generative modeling approach for spectral representation of
PDEs with random inputs
|
stat.ML cs.LG
|
We propose a method for quantifying uncertainty in high-dimensional PDE
systems with random parameters, where the number of solution evaluations is
small. Parametric PDE solutions are often approximated using a spectral
decomposition based on polynomial chaos expansions. For the class of systems we
consider (i.e., high dimensional with limited solution evaluations) the
coefficients are given by an underdetermined linear system in a regression
formulation. This implies that additional assumptions, such as sparsity of the
coefficient vector, are needed to approximate the solution. Here, we present an
approach where we assume the coefficients are close to the range of a
generative model that maps from a low to a high dimensional space of
coefficients. Our approach is inspired by recent work examining how generative
models can be used for compressed sensing in systems with random Gaussian
measurement matrices. Using results from PDE theory on coefficient decay rates,
we construct an explicit generative model that predicts the polynomial chaos
coefficient magnitudes. The algorithm we developed to find the coefficients,
which we call GenMod, is composed of two main steps. First, we predict the
coefficient signs using Orthogonal Matching Pursuit. Then, we assume the
coefficients are within a sparse deviation from the range of a sign-adjusted
generative model. This allows us to find the coefficients by solving a
nonconvex optimization problem, over the input space of the generative model
and the space of sparse vectors. We obtain theoretical recovery results for a
Lipschitz continuous generative model and for a more specific generative model,
based on coefficient decay rate bounds. We examine three high-dimensional
problems and show that, for all three examples, the generative model approach
outperforms sparsity promoting methods at small sample sizes.
|
Machine Learning, Machine Learning
|
Statistics
|
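The first step of GenMod predicts coefficient signs with Orthogonal Matching Pursuit. As a point of reference, here is a generic OMP sketch on synthetic data; the matrix sizes and sparsity level are arbitrary assumptions, and the paper uses OMP only for this sign-prediction step before the generative-model fit.

```python
# Generic Orthogonal Matching Pursuit sketch (illustrative sizes; sign recovery only).
import numpy as np

def omp(A, y, k):
    """Greedily select k columns of A and least-squares refit on the support."""
    n = A.shape[1]
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
x_hat = omp(A, A @ x_true, k=3)
print("recovered signs:", np.sign(x_hat[[3, 17, 42]]))  # -> [ 1. -1.  1.]
```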
1704.00794
|
Karl Øyvind Mikalsen
|
Time Series Cluster Kernel for Learning Similarities between
Multivariate Time Series with Missing Data
|
stat.ML cs.LG
|
Similarity-based approaches represent a promising direction for time series
analysis. However, many such methods rely on parameter tuning, and some have
shortcomings if the time series are multivariate (MTS), due to dependencies
between attributes, or if the time series contain missing data. In this paper, we
address these challenges within the powerful context of kernel methods by
proposing the robust \emph{time series cluster kernel} (TCK). The approach
taken leverages the missing data handling properties of Gaussian mixture models
(GMM) augmented with informative prior distributions. An ensemble learning
approach is exploited to ensure robustness to parameters by combining the
clustering results of many GMMs to form the final kernel.
We evaluate the TCK on synthetic and real data and compare to other
state-of-the-art techniques. The experimental results demonstrate that the TCK
is robust to parameter choices, provides competitive results for MTS without
missing data and outstanding results for missing data.
|
Machine Learning, Machine Learning
|
Statistics
|
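A rough sketch of the ensemble-kernel construction described above: fit many GMMs with randomized numbers of components and average the co-assignment similarities. This is a simplification on complete vector data with assumed toy parameters; the actual TCK additionally uses informative priors and handles multivariate time series with missing values.

```python
# Ensemble-of-GMMs similarity kernel (simplified TCK-style sketch on complete data).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(3, 1, (30, 5))])

K = np.zeros((len(X), len(X)))
n_members = 20
for seed in range(n_members):
    n_comp = int(rng.integers(2, 6))                 # randomized hyperparameter
    gmm = GaussianMixture(n_components=n_comp, random_state=seed).fit(X)
    P = gmm.predict_proba(X)                         # soft cluster assignments
    K += P @ P.T                                     # same cluster -> high similarity
K /= n_members

print("mean within-class similarity:", K[:30, :30].mean())
print("mean between-class similarity:", K[:30, 30:].mean())
```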
2004.09646
|
Qing Zhou
|
Causal network learning with non-invertible functional relationships
|
stat.ML cs.LG
|
Discovery of causal relationships from observational data is an important
problem in many areas. Several recent results have established the
identifiability of causal DAGs with non-Gaussian and/or nonlinear structural
equation models (SEMs). In this paper, we focus on nonlinear SEMs defined by
non-invertible functions, which exist in many data domains, and propose a novel
test for non-invertible bivariate causal models. We further develop a method to
incorporate this test in structure learning of DAGs that contain both linear
and nonlinear causal relations. By extensive numerical comparisons, we show
that our algorithms outperform existing DAG learning methods in identifying
causal graphical structures. We illustrate the practical application of our
method in learning causal networks for combinatorial binding of transcription
factors from ChIP-Seq data.
|
Machine Learning, Machine Learning
|
Statistics
|
1511.00158
|
Raymundo Navarrete
|
Prediction of Dynamical Time Series Using Kernel Based Regression and
Smooth Splines
|
stat.ML cs.LG
|
Prediction of dynamical time series with additive noise using support vector
machines or kernel based regression has been proved to be consistent for
certain classes of discrete dynamical systems. Consistency implies that these
methods are effective at computing the expected value of a point at a future
time given the present coordinates. However, the present coordinates themselves
are noisy, and therefore, these methods are not necessarily effective at
removing noise. In this article, we consider denoising and prediction as
separate problems for flows, as opposed to discrete time dynamical systems, and
show that the use of smooth splines is more effective at removing noise.
Combination of smooth splines and kernel based regression yields predictors
that are more accurate on benchmarks typically by a factor of 2 or more. We
prove that kernel based regression in combination with smooth splines converges
to the exact predictor for time series extracted from any compact invariant set
of any sufficiently smooth flow. As a consequence of convergence, one can find
examples where the combination of kernel based regression with smooth splines
is superior by even a factor of $100$. The predictors that we compute operate
on delay coordinate data and not the full state vector, which is typically not
observable.
|
Machine Learning, Machine Learning
|
Statistics
|
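The two-stage recipe above (denoise with a smoothing spline, then learn a predictor on delay coordinates with kernel regression) can be sketched as follows; the signal, spline smoothing factor, and kernel hyperparameters are toy assumptions rather than the paper's benchmark setup.

```python
# Denoise with a smoothing spline, then kernel ridge regression on delay coordinates.
import numpy as np
from scipy.interpolate import UnivariateSpline
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 2000)
clean = np.sin(t)                                    # one observed coordinate of a flow
noisy = clean + 0.2 * rng.normal(size=t.size)

denoised = UnivariateSpline(t, noisy, s=0.04 * t.size)(t)   # step 1: smoothing spline

d, N = 5, len(denoised)                              # step 2: delay-coordinate embedding
Xd = np.column_stack([denoised[i:N - d + i] for i in range(d)])
yd = denoised[d:]                                    # predict one step ahead
model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=1.0).fit(Xd[:-200], yd[:-200])
pred = model.predict(Xd[-200:])
print("test RMSE vs. noise-free signal:",
      np.sqrt(np.mean((pred - clean[d:][-200:]) ** 2)))
```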
2210.16835
|
Mengmeng Wu
|
Variance reduced Shapley value estimation for trustworthy data valuation
|
stat.ML cs.LG
|
Data valuation, especially quantifying data value in algorithmic prediction
and decision-making, is a fundamental problem in data trading scenarios. The
most widely used method is to define the data Shapley and approximate it by
means of the permutation sampling algorithm. To reduce the large estimation
variance of permutation sampling, which hinders the development of data
marketplaces, we propose a more robust data valuation method using stratified
sampling, named variance reduced data Shapley (VRDS for short). We
theoretically show how to stratify, how many samples to take at each stratum,
and provide a sample complexity analysis of VRDS. Finally, the effectiveness of
VRDS is illustrated on different types of datasets and in data removal
applications.
|
Machine Learning, Machine Learning
|
Statistics
|
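For context, here is the plain permutation-sampling data-Shapley estimator that VRDS improves upon, on a toy 1-NN utility; the dataset, utility function, and number of permutations are illustrative assumptions, and the paper's contribution is to replace this uniform permutation sampling with stratified sampling to cut the variance.

```python
# Baseline permutation-sampling data Shapley (the high-variance estimator VRDS improves).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2)); y = (X[:, 0] + X[:, 1] > 0).astype(int)
Xv = rng.normal(size=(100, 2)); yv = (Xv[:, 0] + Xv[:, 1] > 0).astype(int)

def utility(idx):
    # Validation accuracy of 1-NN trained on subset idx (chance level if degenerate).
    if len(idx) == 0 or len(set(y[idx])) < 2:
        return 0.5
    return KNeighborsClassifier(n_neighbors=1).fit(X[idx], y[idx]).score(Xv, yv)

n, n_perms = len(X), 50
phi = np.zeros(n)
for _ in range(n_perms):
    perm = rng.permutation(n)
    prev = utility([])
    for i, j in enumerate(perm):
        cur = utility(list(perm[:i + 1]))
        phi[j] += cur - prev                 # marginal contribution of point j
        prev = cur
phi /= n_perms
print("most valuable points:", np.argsort(phi)[-5:])
```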
2001.08049
|
Nicolas Brosse
|
On Last-Layer Algorithms for Classification: Decoupling Representation
from Uncertainty Estimation
|
stat.ML cs.LG
|
Uncertainty quantification for deep learning is a challenging open problem.
Bayesian statistics offer a mathematically grounded framework to reason about
uncertainties; however, approximate posteriors for modern neural networks still
require prohibitive computational costs. We propose a family of algorithms
which split the classification task into two stages: representation learning
and uncertainty estimation. We compare four specific instances, where
uncertainty estimation is performed via either an ensemble of Stochastic
Gradient Descent or Stochastic Gradient Langevin Dynamics snapshots, an
ensemble of bootstrapped logistic regressions, or via a number of Monte Carlo
Dropout passes. We evaluate their performance in terms of \emph{selective}
classification (risk-coverage), and their ability to detect out-of-distribution
samples. Our experiments suggest there is limited value in adding multiple
uncertainty layers to deep classifiers, and we observe that these simple
methods strongly outperform a vanilla point-estimate SGD in some complex
benchmarks like ImageNet.
|
Machine Learning, Machine Learning
|
Statistics
|
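One of the four instances compared above, an ensemble of bootstrapped logistic regressions on frozen representations, is easy to sketch; the stand-in "representation" (raw digit pixels), ensemble size, and abstention threshold below are assumptions for illustration only.

```python
# Last-layer uncertainty sketch: bootstrapped logistic regressions on frozen features.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)          # raw pixels stand in for learned features
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
n_members, probs = 10, np.zeros((len(Xte), 10))
for _ in range(n_members):
    idx = rng.integers(0, len(Xtr), size=len(Xtr))       # bootstrap resample
    clf = LogisticRegression(max_iter=2000).fit(Xtr[idx], ytr[idx])
    probs += clf.predict_proba(Xte)
probs /= n_members

conf, pred = probs.max(axis=1), probs.argmax(axis=1)
keep = conf >= np.quantile(conf, 0.2)        # selective classification: drop lowest 20%
print("risk on covered set:", 1 - (pred[keep] == yte[keep]).mean())
print("risk on full set:   ", 1 - (pred == yte).mean())
```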
1910.12807
|
Kamil Ciosek
|
Better Exploration with Optimistic Actor-Critic
|
stat.ML cs.LG
|
Actor-critic methods, a type of model-free Reinforcement Learning, have been
successfully applied to challenging tasks in continuous control, often
achieving state-of-the-art performance. However, wide-scale adoption of these
methods in real-world domains is made difficult by their poor sample
efficiency. We address this problem both theoretically and empirically. On the
theoretical side, we identify two phenomena preventing efficient exploration in
existing state-of-the-art algorithms such as Soft Actor Critic. First,
combining a greedy actor update with a pessimistic estimate of the critic leads
to the avoidance of actions that the agent does not know about, a phenomenon we
call pessimistic underexploration. Second, current algorithms are directionally
uninformed, sampling actions with equal probability in opposite directions from
the current mean. This is wasteful, since we typically need actions taken along
certain directions much more than others. To address both of these phenomena,
we introduce a new algorithm, Optimistic Actor Critic (OAC), which approximates a
lower and upper confidence bound on the state-action value function. This
allows us to apply the principle of optimism in the face of uncertainty to
perform directed exploration using the upper bound while still using the lower
bound to avoid overestimation. We evaluate OAC in several challenging
continuous control tasks, achieving state-of-the-art sample efficiency.
|
Machine Learning, Machine Learning
|
Statistics
|
1706.02899
|
Yanfei Zhang
|
Assessing the Performance of Deep Learning Algorithms for Newsvendor
Problem
|
stat.ML cs.LG
|
In retailer management, the Newsvendor problem has attracted wide attention
as one of the basic inventory models. The traditional approach to solving this
problem relies on the probability distribution of the demand. In theory, if
the probability distribution is known, the problem can be considered fully
solved. However, in any real-world scenario, it is almost impossible to
approximate or estimate the probability distribution of the demand well. In
recent years, researchers have started adopting machine learning approaches to learn a
demand prediction model from other feature information. In this paper, we
propose a supervised learning approach that optimizes the demand quantities for products
based on feature information. We demonstrate that the original Newsvendor loss
function as the training objective outperforms the recently suggested quadratic
loss function. The new algorithm has been assessed on both the synthetic data
and real-world data, demonstrating better performance.
|
Machine Learning, Machine Learning
|
Statistics
|
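The key point above is using the Newsvendor cost itself as the training loss. A linear-model sketch with subgradient descent is below; the costs, data-generating process, and step size are toy assumptions (the paper trains deep networks), and the check is that the learned order quantities hit the critical service level cu/(cu+co).

```python
# Training directly on the Newsvendor loss (linear sketch; the paper uses deep nets).
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 3
X = rng.normal(size=(n, p))                          # feature information
demand = 50 + 10 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 5, n)

cu, co = 4.0, 1.0                                    # underage / overage unit costs
w, b = np.zeros(p), demand.mean()
for _ in range(5000):                                # subgradient descent
    q = X @ w + b                                    # order quantities
    g = np.where(demand > q, -cu, co)                # dL/dq for the newsvendor loss
    w -= 0.05 * (X.T @ g) / n
    b -= 0.05 * g.mean()

q = X @ w + b
# The optimal order is the cu/(cu+co) = 0.8 conditional quantile of demand.
print("achieved service level P[demand <= order]:", (demand <= q).mean())
```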
1805.10402
|
Xiran Zhou
|
Deep Convolutional Neural Networks for Map-Type Classification
|
stat.ML cs.LG
|
Maps are an important medium that enable people to comprehensively understand
the configuration of cultural activities and natural elements over different
times and places. Although massive numbers of maps are available in the digital era, how
to effectively and accurately access the required map remains a challenge
today. Previous works partially related to map-type classification mainly
focused on map comparison and map matching at the local scale. The features
derived from local map areas might be insufficient to characterize map content.
To facilitate establishing an automatic approach for accessing the needed map,
this paper reports our investigation into using deep learning techniques to
recognize nine types of map: topographic map, terrain map, physical
map, urban scene map, the National Map, 3D map, nighttime map, orthophoto map,
and land cover classification map. Experimental results show that the
state-of-the-art deep convolutional neural networks can support automatic
map-type classification. Additionally, the classification accuracy varies
across map types. We hope our work can contribute to the
implementation of deep learning techniques in the cartographic community and
advance the progress of Geographical Artificial Intelligence (GeoAI).
|
Machine Learning, Machine Learning
|
Statistics
|
1409.3912
|
Takafumi Kanamori Dr.
|
Parallel Distributed Block Coordinate Descent Methods based on Pairwise
Comparison Oracle
|
stat.ML cs.LG
|
This paper provides a block coordinate descent algorithm to solve
unconstrained optimization problems. In our algorithm, computation of function
values or gradients is not required. Instead, pairwise comparison of function
values is used. Our algorithm consists of two steps: a direction
estimation step and a search step. Both steps require only
pairwise comparisons of function values, which reveal only the order of the
function values at two points. In the direction estimation step, a Newton-type
search direction is estimated using a computation scheme similar to block
coordinate descent, based on pairwise comparisons. In the search step, a
numerical solution is updated along the estimated direction. The computation in
the direction estimate step can be easily parallelized, and thus, the algorithm
works efficiently to find the minimizer of the objective function. Also, we
show an upper bound on the convergence rate. In numerical experiments, we show
that our method finds the optimal solution more efficiently than some
existing methods based on pairwise comparisons.
|
Machine Learning, Machine Learning
|
Statistics
|
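To illustrate the comparison-oracle setting above in its simplest form, here is a coordinate search that only ever asks which of two points has the lower function value; the step schedule and test function are assumptions, and the paper's actual method estimates Newton-type directions and parallelizes the coordinate computations.

```python
# Coordinate descent driven purely by a pairwise comparison oracle (simplest variant).
import numpy as np

def compare(f, x, y):
    # Pairwise comparison oracle: only reveals the order of f(x) and f(y).
    return f(x) < f(y)

def cd_with_comparisons(f, x0, step=1.0, iters=200):
    x, s = x0.copy(), step
    for _ in range(iters):
        improved = False
        for i in range(len(x)):                   # one pass over the coordinates
            for d in (+s, -s):
                y = x.copy(); y[i] += d
                if compare(f, y, x):              # accept if strictly better
                    x, improved = y, True
                    break
        if not improved:
            s *= 0.5                              # shrink the step when stuck
    return x

f = lambda z: (z[0] - 3.0) ** 2 + 10.0 * (z[1] + 1.0) ** 2
print(cd_with_comparisons(f, np.zeros(2)))        # -> approximately [3, -1]
```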
1801.08712
|
Atanas Mirchev
|
Classification of sparsely labeled spatio-temporal data through
semi-supervised adversarial learning
|
stat.ML cs.LG
|
In recent years, Generative Adversarial Networks (GAN) have emerged as a
powerful method for learning the mapping from noisy latent spaces to realistic
data samples in high-dimensional space. So far, the development and application
of GANs have been predominantly focused on spatial data such as images. In this
project, we aim at modeling spatio-temporal sensor data instead, i.e.,
data that is dynamic over time. The main goal is to encode temporal data into a global
and low-dimensional latent vector that captures the dynamics of the
spatio-temporal signal. To this end, we incorporate auto-regressive RNNs,
Wasserstein GAN loss, spectral norm weight constraints and a semi-supervised
learning scheme into InfoGAN, a method for retrieval of meaningful latents in
adversarial learning. To demonstrate the modeling capability of our method, we
encode full-body skeletal human motion from a large dataset representing 60
classes of daily activities, recorded in a multi-Kinect setup. Initial results
indicate competitive classification performance of the learned latent
representations, compared to direct CNN/RNN inference. In future work, we plan
to apply this method on a related problem in the medical domain, i.e. on
recovery of meaningful latents in gait analysis of patients with vertigo and
balance disorders.
|
Machine Learning, Machine Learning
|
Statistics
|
1706.02524
|
Hyunjik Kim
|
Scaling up the Automatic Statistician: Scalable Structure Discovery
using Gaussian Processes
|
stat.ML cs.LG
|
Automating statistical modelling is a challenging problem in artificial
intelligence. The Automatic Statistician takes a first step in this direction,
by employing a kernel search algorithm with Gaussian Processes (GP) to provide
interpretable statistical models for regression problems. However, this does not
scale, due to its $O(N^3)$ running time for model selection. We propose
Scalable Kernel Composition (SKC), a scalable kernel search algorithm that
extends the Automatic Statistician to bigger data sets. In doing so, we derive
a cheap upper bound on the GP marginal likelihood that, together with the
variational lower bound, sandwiches the marginal likelihood. We show that the upper bound is
significantly tighter than the lower bound and thus useful for model selection.
|
Machine Learning, Machine Learning
|
Statistics
|
2110.03995
|
Swagatam Das
|
Statistical Regeneration Guarantees of the Wasserstein Autoencoder with
Latent Space Consistency
|
stat.ML cs.LG
|
The introduction of Variational Autoencoders (VAE) has been marked as a
breakthrough in the history of representation learning models. Besides having
several accolades of its own, VAE has successfully flagged off a series of
inventions in the form of its immediate successors. Wasserstein Autoencoder
(WAE), being an heir to that realm, carries with it all of the goodness and
heightened generative promises, matching even the generative adversarial
networks (GANs). Needless to say, recent years have witnessed a remarkable
resurgence in statistical analyses of the GANs. Similar examinations for
Autoencoders, however, despite their diverse applicability and notable
empirical performance, remain largely absent. To close this gap, in this paper,
we investigate the statistical properties of WAE. Firstly, we provide
statistical guarantees that WAE achieves the target distribution in the latent
space, utilizing Vapnik-Chervonenkis (VC) theory. The main result
consequently ensures the regeneration of the input distribution, harnessing the
potential offered by Optimal Transport of measures under the Wasserstein
metric. This study, in turn, hints at the class of distributions WAE can
reconstruct after suffering a compression in the form of a latent law.
|
Machine Learning, Machine Learning
|
Statistics
|
2110.01072
|
Yining Wang
|
Active Learning for Contextual Search with Binary Feedbacks
|
stat.ML cs.LG
|
In this paper, we study the learning problem in contextual search, which is
motivated by applications such as first-price auction, personalized medicine
experiments, and feature-based pricing experiments. In particular, for a
sequence of arriving context vectors, with each context associated with an
underlying value, the decision-maker either makes a query at a certain point or
skips the context. The decision-maker will only observe the binary feedback on
the relationship between the query point and the value associated with the
context. We study a PAC learning setting, where the goal is to learn the
underlying mean value function in context with a minimum number of queries. To
address this challenge, we propose a tri-section search approach combined with
a margin-based active learning method. We show that the algorithm only needs to
make $O(1/\varepsilon^2)$ queries to achieve $\varepsilon$-estimation accuracy.
This significantly improves on the sample complexity required in
the passive setting, which is at least $\Omega(1/\varepsilon^4)$.
|
Machine Learning, Machine Learning
|
Statistics
|
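Stripped of contexts and noise, the backbone of the query model above is a search that only receives binary feedback on whether the queried point lies below the unknown value. A noiseless one-dimensional sketch is below; the paper's tri-section search with margin-based active learning handles the noisy, contextual version.

```python
# Binary-feedback search sketch: each query only reveals whether value >= query.
def locate(value_oracle, lo=0.0, hi=1.0, eps=1e-4):
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if value_oracle(mid):        # True iff the hidden value is >= mid
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

v = 0.6180339                        # hidden value (never observed directly)
print(locate(lambda q: v >= q))      # -> ~0.618 using ~14 binary queries
```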
2303.01353
|
Etienne Boursier
|
Penalising the biases in norm regularisation enforces sparsity
|
stat.ML cs.LG
|
Controlling the parameters' norm often yields good generalisation when
training neural networks. Beyond simple intuitions, the relation between
regularising parameters' norm and obtained estimators remains poorly
understood theoretically. For networks with one hidden ReLU layer and unidimensional data,
this work shows the parameters' norm required to represent a function is given
by the total variation of its second derivative, weighted by a $\sqrt{1+x^2}$
factor. Notably, this weighting factor disappears when the norm of bias terms
is not regularised. The presence of this additional weighting factor is of
utmost significance as it is shown to enforce the uniqueness and sparsity (in
the number of kinks) of the minimal norm interpolator. Conversely, omitting the
bias' norm allows for non-sparse solutions. Penalising the bias terms in the
regularisation, either explicitly or implicitly, thus leads to sparse
estimators.
|
Machine Learning, Machine Learning
|
Statistics
|
2205.14627
|
Silvia Sciutto
|
Continuous Generative Neural Networks
|
stat.ML cs.LG
|
In this work, we present and study Continuous Generative Neural Networks
(CGNNs), namely, generative models in the continuous setting: the output of a
CGNN belongs to an infinite-dimensional function space. The architecture is
inspired by DCGAN, with one fully connected layer, several convolutional layers
and nonlinear activation functions. In the continuous $L^2$ setting, the
dimensions of the spaces of each layer are replaced by the scales of a
multiresolution analysis of a compactly supported wavelet. We present
conditions on the convolutional filters and on the nonlinearity that guarantee
that a CGNN is injective. This theory finds applications to inverse problems,
and allows for deriving Lipschitz stability estimates for (possibly nonlinear)
infinite-dimensional inverse problems with unknowns belonging to the manifold
generated by a CGNN. Several numerical simulations, including signal
deblurring, illustrate and validate this approach.
|
Machine Learning, Machine Learning
|
Statistics
|
1803.09153
|
Niko Brümmer
|
Fast variational Bayes for heavy-tailed PLDA applied to i-vectors and
x-vectors
|
stat.ML cs.LG
|
The standard state-of-the-art backend for text-independent speaker
recognizers that use i-vectors or x-vectors, is Gaussian PLDA (G-PLDA),
assisted by a Gaussianization step involving length normalization. G-PLDA can
be trained with either generative or discriminative methods. It has long been
known that heavy-tailed PLDA (HT-PLDA), applied without length normalization,
gives similar accuracy, but at considerable extra computational cost. We have
recently introduced a fast scoring algorithm for a discriminatively trained
HT-PLDA backend. This paper extends that work by introducing a fast,
variational Bayes, generative training algorithm. We compare old and new
backends, with and without length-normalization, with i-vectors and x-vectors,
on SRE'10, SRE'16 and SITW.
|
Machine Learning, Machine Learning
|
Statistics
|
1906.12179
|
Dominik Janzing
|
Causal Regularization
|
stat.ML cs.LG
|
I argue that regularizing terms in standard regression methods not only help
against overfitting finite data, but sometimes also yield better causal models
in the infinite sample regime. I first consider a multi-dimensional variable
linearly influencing a target variable with some multi-dimensional unobserved
common cause, where the confounding effect can be decreased by keeping the
penalizing term in Ridge and Lasso regression even in the population limit.
Choosing the size of the penalizing term, is however challenging, because cross
validation is pointless. Here it is done by first estimating the strength of
confounding via a method proposed earlier, which yielded some reasonable
results for simulated and real data.
Further, I prove a `causal generalization bound' which states (subject to a
particular model of confounding) that the error made by interpreting any
non-linear regression as causal model can be bounded from above whenever
functions are taken from a not too rich class. In other words, the bound
guarantees "generalization" from observational to interventional distributions,
which is usually not the subject of statistical learning theory (and is only
possible due to the underlying symmetries of the confounder model).
|
Machine Learning, Machine Learning
|
Statistics
|
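A toy check of the claim above that keeping the Ridge penalty helps even in the population limit under confounding, in the regime of a weak causal effect and a strong hidden common cause; the dimensions, confounder strengths, and penalty grid are assumptions chosen to make the effect visible, not the paper's setup.

```python
# Toy demo: under strong confounding, a nonzero Ridge penalty can reduce causal error.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200000, 10, 3
a = np.zeros(d); a[0] = 0.1                 # weak true causal effect of X on Y
c = np.array([4.0, -3.0, 3.0])              # strong effect of the hidden confounder

Z = rng.normal(size=(n, k))                 # unobserved common cause
M = rng.normal(size=(k, d))
X = Z @ M + rng.normal(size=(n, d))         # confounder also drives the covariates
Y = X @ a + Z @ c + rng.normal(size=n)

for lam in [0.0, 1.0, 10.0, 100.0]:         # lam = per-sample penalty strength
    beta = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ Y / n)
    print(f"lambda={lam:6.1f}   ||estimate - causal coefficients|| ="
          f" {np.linalg.norm(beta - a):.3f}")
```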
2002.12640
|
Meyer Scetbon
|
A Spectral Analysis of Dot-product Kernels
|
stat.ML cs.LG
|
We present eigenvalue decay estimates of integral operators associated with
compositional dot-product kernels. The estimates improve on previous ones
established for power series kernels on spheres. This allows us to obtain the
volumes of balls in the corresponding reproducing kernel Hilbert spaces. We
discuss the consequences on statistical estimation with compositional dot
product kernels and highlight interesting trade-offs between the approximation
error and the statistical error depending on the number of compositions and the
smoothness of the kernels.
|
Machine Learning, Machine Learning
|
Statistics
|
2103.02667
|
Martin Arjovsky
|
Out of Distribution Generalization in Machine Learning
|
stat.ML cs.LG
|
Machine learning has achieved tremendous success in a variety of domains in
recent years. However, a lot of these success stories have been in places where
the training and the testing distributions are extremely similar to each other.
In everyday situations when models are tested in slightly different data than
they were trained on, ML algorithms can fail spectacularly. This research
attempts to formally define this problem, what sets of assumptions are
reasonable to make in our data and what kind of guarantees we hope to obtain
from them. Then, we focus on a certain class of out of distribution problems,
their assumptions, and introduce simple algorithms that follow from these
assumptions that are able to provide more reliable generalization. A central
topic in the thesis is the strong link between discovering the causal structure
of the data, finding features that are reliable (when using them to predict)
regardless of their context, and out of distribution generalization.
|
Machine Learning, Machine Learning
|
Statistics
|
1903.00863
|
Kelvin Hsu
|
Bayesian Learning of Conditional Kernel Mean Embeddings for Automatic
Likelihood-Free Inference
|
stat.ML cs.LG
|
In likelihood-free settings where likelihood evaluations are intractable,
approximate Bayesian computation (ABC) addresses the formidable inference task
of discovering plausible parameters of simulation programs that explain the
observations. However, ABC methods demand large numbers of simulation calls.
Critically, hyperparameters that determine measures of simulation discrepancy
crucially balance inference accuracy and sample efficiency, yet are difficult
to tune. In this paper, we present kernel embedding likelihood-free inference
(KELFI), a holistic framework that automatically learns model hyperparameters
to improve inference accuracy given limited simulation budget. By leveraging
likelihood smoothness with conditional mean embeddings, we nonparametrically
approximate likelihoods and posteriors as surrogate densities and sample from
closed-form posterior mean embeddings, whose hyperparameters are learned under
its approximate marginal likelihood. Our modular framework demonstrates
improved accuracy and efficiency on challenging inference problems in ecology.
|
Machine Learning, Machine Learning
|
Statistics
|
1911.06253
|
Michael Perlmutter
|
Understanding Graph Neural Networks with Generalized Geometric
Scattering Transforms
|
stat.ML cs.LG
|
The scattering transform is a multilayered wavelet-based deep learning
architecture that acts as a model of convolutional neural networks. Recently,
several works have introduced generalizations of the scattering transform for
non-Euclidean settings such as graphs. Our work builds upon these constructions
by introducing windowed and non-windowed geometric scattering transforms for
graphs based upon a very general class of asymmetric wavelets. We show that
these asymmetric graph scattering transforms have many of the same theoretical
guarantees as their symmetric counterparts. As a result, the proposed
construction unifies and extends known theoretical results for many of the
existing graph scattering architectures. In doing so, this work helps bridge
the gap between geometric scattering and other graph neural networks by
introducing a large family of networks with provable stability and invariance
guarantees. These results lay the groundwork for future deep learning
architectures for graph-structured data that have learned filters and also
provably have desirable theoretical properties.
|
Machine Learning, Machine Learning
|
Statistics
|
1403.0388
|
Mohammadzaman Zamani
|
Cascading Randomized Weighted Majority: A New Online Ensemble Learning
Algorithm
|
stat.ML cs.LG
|
With the increasing volume of data in the world, the best approach for
learning from this data is to exploit an online learning algorithm. Online
ensemble methods are online algorithms which take advantage of an ensemble of
classifiers to predict labels of data. Prediction with expert advice is a
well-studied problem in the online ensemble learning literature. The Weighted
Majority algorithm and the randomized weighted majority (RWM) are the most
well-known solutions to this problem, aiming to converge to the best expert.
Since, among a set of experts, the best one does not necessarily have the minimum
error in all regions of the data space, defining specific regions and converging to
the best expert in each of these regions will lead to a better result. In this
paper, we aim to resolve this defect of RWM algorithms by proposing a novel
online ensemble algorithm to the problem of prediction with expert advice. We
propose a cascading version of RWM to achieve not only better experimental
results but also a better error bound for sufficiently large datasets.
|
Machine Learning, Machine Learning
|
Statistics
|
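For reference, the base randomized weighted majority algorithm the paper cascades looks like this; the expert accuracies, penalty factor beta, and horizon are toy assumptions, and the cascading construction partitions the data space on top of this loop.

```python
# Base randomized weighted majority (RWM); the paper cascades several of these.
import numpy as np

rng = np.random.default_rng(0)
T, beta = 2000, 0.9                              # horizon and multiplicative penalty
acc = np.array([0.6, 0.7, 0.55, 0.8, 0.65])      # each expert's accuracy (synthetic)

w, mistakes = np.ones(len(acc)), 0
for _ in range(T):
    truth = rng.integers(0, 2)
    preds = np.where(rng.random(len(acc)) < acc, truth, 1 - truth)
    i = rng.choice(len(acc), p=w / w.sum())      # follow an expert drawn by weight
    mistakes += int(preds[i] != truth)
    w[preds != truth] *= beta                    # penalize every wrong expert
print("RWM error rate:", mistakes / T, "| best expert error rate:", 1 - acc.max())
```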
2209.15466
|
Mathieu Blondel
|
Sparsity-Constrained Optimal Transport
|
stat.ML cs.LG
|
Regularized optimal transport (OT) is now increasingly used as a loss or as a
matching layer in neural networks. Entropy-regularized OT can be computed using
the Sinkhorn algorithm but it leads to fully-dense transportation plans,
meaning that all sources are (fractionally) matched with all targets. To
address this issue, several works have investigated quadratic regularization
instead. This regularization preserves sparsity and leads to unconstrained and
smooth (semi) dual objectives, that can be solved with off-the-shelf gradient
methods. Unfortunately, quadratic regularization does not give direct control
over the cardinality (number of nonzeros) of the transportation plan. We
propose in this paper a new approach for OT with explicit cardinality
constraints on the transportation plan. Our work is motivated by an application
to sparse mixture of experts, where OT can be used to match input tokens such
as image patches with expert models such as neural networks. Cardinality
constraints ensure that at most $k$ tokens are matched with an expert, which is
crucial for computational performance reasons. Despite the nonconvexity of
cardinality constraints, we show that the corresponding (semi) dual problems
are tractable and can be solved with first-order gradient methods. Our method
can be thought of as a middle ground between unregularized OT (recovered in the
limit case $k=1$) and quadratically-regularized OT (recovered when $k$ is large
enough). The smoothness of the objectives increases as $k$ increases, giving
rise to a trade-off between convergence speed and sparsity of the optimal plan.
|
Machine Learning, Machine Learning
|
Statistics
|
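The density issue motivating the paper is easy to see numerically: Sinkhorn's entropy-regularized plan has strictly positive entries everywhere. A minimal Sinkhorn sketch with arbitrary toy marginals and costs follows; the paper's sparsity-constrained formulation instead caps the number of nonzeros per column.

```python
# Sinkhorn on a toy problem: the entropic plan is fully dense (every entry > 0).
import numpy as np

rng = np.random.default_rng(0)
a, b = np.full(5, 1 / 5), np.full(7, 1 / 7)      # source / target marginals
C = rng.random((5, 7))                           # toy cost matrix

eps = 0.05                                       # entropic regularization strength
K = np.exp(-C / eps)
u = np.ones(5)
for _ in range(500):                             # Sinkhorn fixed-point iterations
    v = b / (K.T @ u)
    u = a / (K @ v)
P = u[:, None] * K * v[None, :]                  # transportation plan

print("marginals satisfied:", np.allclose(P.sum(1), a), np.allclose(P.sum(0), b))
print("smallest plan entry (strictly positive, i.e. dense):", P.min())
```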
1911.06646
|
David Cortes
|
Imputing missing values with unsupervised random trees
|
stat.ML cs.LG
|
This work proposes a non-iterative strategy for missing value imputation,
which is guided by similarity between observations, but instead of explicitly
determining distances or nearest neighbors, it assigns observations to
overlapping buckets through recursive semi-random hyperplane cuts, in which
weighted averages are determined as imputations for each variable. The quality
of these imputations is oftentimes not as good as that of chained equations,
but the proposed technique is much faster, non-iterative, can make imputations
on new data without re-calculating anything, and scales easily to large and
high-dimensional datasets, providing a significant boost over simple
mean/median imputation in regression and classification metrics with imputed
values when other methods are not feasible.
|
Machine Learning, Machine Learning
|
Statistics
|
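A stripped-down sketch of the bucket idea above, using random hyperplane cuts as a stand-in for the package's unsupervised random trees: observations hash into overlapping buckets across many repetitions, and missing entries take averaged bucket means. The data shape, depth, and ensemble size are toy assumptions.

```python
# Bucket-mean imputation via random hyperplane cuts (stand-in for random-tree splits).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))
X[:, 5] = 2 * X[:, 0] + rng.normal(0, 0.1, 1000)   # column we will corrupt
mask = rng.random(1000) < 0.2                      # 20% missing in column 5
X_miss = X.copy(); X_miss[mask, 5] = np.nan

obs = X_miss[:, :5]                                # fully observed columns
n_trees, depth = 50, 4
est, votes = np.zeros(1000), np.zeros(1000)
for _ in range(n_trees):
    codes = np.zeros(1000, dtype=int)
    for _ in range(depth):                         # semi-random hyperplane cuts
        proj = obs @ rng.normal(size=5)
        codes = 2 * codes + (proj > np.median(proj))
    for c in np.unique(codes):
        bucket = codes == c
        donors = bucket & ~mask
        if donors.any():                           # bucket mean from observed donors
            est[bucket & mask] += X_miss[donors, 5].mean()
            votes[bucket & mask] += 1

imputed = est[mask] / np.maximum(votes[mask], 1)
print("bucket RMSE:", np.sqrt(np.mean((imputed - X[mask, 5]) ** 2)))
print("global-mean RMSE:",
      np.sqrt(np.mean((X_miss[~mask, 5].mean() - X[mask, 5]) ** 2)))
```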
2306.05857
|
Qiaozhe Zhang
|
How Sparse Can We Prune A Deep Network: A Fundamental Limit Viewpoint
|
stat.ML cs.LG
|
Network pruning is an effective measure to alleviate the storage and
computational burden of deep neural networks arising from their high
overparameterization. This raises a fundamental question: how sparse can we
prune a deep network without sacrificing performance? To address this
problem, in this work we take a first-principles approach, i.e., we directly
impose the sparsity constraint on the original loss function and then
characterize the necessary and sufficient condition of the sparsity
(\textit{which turns out to nearly coincide}) by leveraging the notion of
\textit{statistical dimension} in convex geometry. Through this fundamental
limit, we are able to identify two key factors that determine the pruning ratio
limit, i.e., weight magnitude and network flatness. Generally speaking, the
flatter the loss landscape or the smaller the weight magnitude, the smaller
the pruning ratio. In addition, we provide efficient countermeasures to address the
challenges in computing the pruning limit, which involves accurate spectrum
estimation of a large-scale and non-positive Hessian matrix. Moreover, through
the lens of the pruning ratio threshold, we can provide rigorous
interpretations of several heuristics in existing pruning algorithms. Extensive
experiments demonstrate that our theoretical pruning
ratio threshold agrees very well with the empirical results. All code is
available at: https://github.com/QiaozheZhang/Global-One-shot-Pruning
|
Machine Learning, Machine Learning
|
Statistics
|
2103.14755
|
Robert Hu
|
Survival Regression with Proper Scoring Rules and Monotonic Neural
Networks
|
stat.ML cs.LG
|
We consider frequently used scoring rules for right-censored survival
regression models such as time-dependent concordance, survival-CRPS, integrated
Brier score and integrated binomial log-likelihood, and prove that none of
them is a proper scoring rule. This means that the true survival distribution
may be scored worse than incorrect distributions, leading to inaccurate
estimation. We prove that, in contrast to these scores, the right-censored
log-likelihood is a proper scoring rule, i.e., the highest expected score is
achieved by the true distribution. Despite this, modern feed-forward
neural-network-based survival regression models are unable to train and
validate directly on the right-censored log-likelihood, due to its
intractability, and resort to the aforementioned alternatives, i.e., non-proper
scoring rules. We therefore propose a simple novel survival regression method
capable of directly optimizing log-likelihood using a monotonic restriction on
the time-dependent weights, coined SurvivalMonotonic-net (SuMo-net). SuMo-net
achieves state-of-the-art log-likelihood scores across several datasets with
20--100$\times$ computational speedup on inference over existing
state-of-the-art neural methods, and is readily applicable to datasets with
several million observations.
|
Machine Learning, Machine Learning
|
Statistics
|
1005.0437
|
Marius Kloft
|
A Unifying View of Multiple Kernel Learning
|
stat.ML cs.LG
|
Recent research on multiple kernel learning has led to a number of
approaches for combining kernels in regularized risk minimization. The proposed
approaches include different formulations of objectives and varying
regularization strategies. In this paper we present a unifying general
optimization criterion for multiple kernel learning and show how existing
formulations are subsumed as special cases. We also derive the criterion's dual
representation, which is suitable for general smooth optimization algorithms.
Finally, we evaluate multiple kernel learning in this framework analytically
using a Rademacher complexity bound on the generalization error and empirically
in a set of experiments.
|
Machine Learning, Machine Learning
|
Statistics
|
2101.09258
|
Yang Song
|
Maximum Likelihood Training of Score-Based Diffusion Models
|
stat.ML cs.LG
|
Score-based diffusion models synthesize samples by reversing a stochastic
process that diffuses data to noise, and are trained by minimizing a weighted
combination of score matching losses. The log-likelihood of score-based
diffusion models can be tractably computed through a connection to continuous
normalizing flows, but log-likelihood is not directly optimized by the weighted
combination of score matching losses. We show that for a specific weighting
scheme, the objective upper bounds the negative log-likelihood, thus enabling
approximate maximum likelihood training of score-based diffusion models. We
empirically observe that maximum likelihood training consistently improves the
likelihood of score-based diffusion models across multiple datasets, stochastic
processes, and model architectures. Our best models achieve negative
log-likelihoods of 2.83 and 3.76 bits/dim on CIFAR-10 and ImageNet 32x32
without any data augmentation, on a par with state-of-the-art autoregressive
models on these tasks.
|
Machine Learning, Machine Learning
|
Statistics
|
2202.10066
|
Grégoire Pacreau
|
Multi-task Representation Learning with Stochastic Linear Bandits
|
stat.ML cs.LG
|
We study the problem of transfer-learning in the setting of stochastic linear
bandit tasks. We consider that a low dimensional linear representation is
shared across the tasks, and study the benefit of learning this representation
in the multi-task learning setting. Following recent results to design
stochastic bandit policies, we propose an efficient greedy policy based on
trace norm regularization. It implicitly learns a low dimensional
representation by encouraging the matrix formed by the task regression vectors
to be of low rank. Unlike previous work in the literature, our policy does not
need to know the rank of the underlying matrix. We derive an upper bound on the
multi-task regret of our policy, which is, up to logarithmic factors, of order
$\sqrt{NdT(T+d)r}$, where $T$ is the number of tasks, $r$ the rank, $d$ the
number of variables and $N$ the number of rounds per task. We show the benefit
of our strategy compared to the baseline $Td\sqrt{N}$ obtained by solving each
task independently. We also provide a lower bound on the multi-task regret.
Finally, we corroborate our theoretical findings with preliminary experiments
on synthetic data.
|
Machine Learning, Machine Learning
|
Statistics
|
1611.06652
|
Brian McWilliams
|
Scalable Adaptive Stochastic Optimization Using Random Projections
|
stat.ML cs.LG
|
Adaptive stochastic gradient methods such as AdaGrad have gained popularity
in particular for training deep neural networks. The most commonly used and
studied variant maintains a diagonal matrix approximation to second order
information by accumulating past gradients which are used to tune the step size
adaptively. In certain situations the full-matrix variant of AdaGrad is
expected to attain better performance; however, in high dimensions it is
computationally impractical. We present Ada-LR and RadaGrad, two computationally
efficient approximations to full-matrix AdaGrad based on randomized
dimensionality reduction. They are able to capture dependencies between
features and achieve similar performance to full-matrix AdaGrad but at a much
smaller computational cost. We show that the regret of Ada-LR is close to the
regret of full-matrix AdaGrad, which can have an up to exponentially smaller
dependence on the dimension than the diagonal variant. Empirically, we show
that Ada-LR and RadaGrad perform similarly to full-matrix AdaGrad. On the task
of training convolutional neural networks as well as recurrent neural networks,
RadaGrad achieves faster convergence than diagonal AdaGrad.
|
Machine Learning, Machine Learning
|
Statistics
|
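For orientation, here is the diagonal AdaGrad variant discussed above on a badly scaled least-squares problem; the problem and hyperparameters are toy assumptions. Full-matrix AdaGrad would accumulate the outer products g g^T instead of the elementwise g*g, which is exactly the d-by-d object Ada-LR and RadaGrad approximate via randomized dimensionality reduction.

```python
# Diagonal AdaGrad on badly scaled least squares (full-matrix would accumulate g g^T).
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 20
A = rng.normal(size=(n, d)) * np.logspace(0, -2, d)  # feature scales span 100x
x_true = rng.normal(size=d)
y = A @ x_true + 0.01 * rng.normal(size=n)

x, G = np.zeros(d), np.zeros(d)
eta, eps = 0.5, 1e-8
for _ in range(20000):
    i = rng.integers(n)
    g = (A[i] @ x - y[i]) * A[i]          # stochastic gradient of the squared loss
    G += g * g                            # accumulate squared gradients (diagonal)
    x -= eta * g / (np.sqrt(G) + eps)     # per-coordinate adaptive step size
print("parameter error:", np.linalg.norm(x - x_true))
```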
2310.16320
|
Ziyi Wang
|
Enhancing Low-Precision Sampling via Stochastic Gradient Hamiltonian
Monte Carlo
|
stat.ML cs.LG
|
Low-precision training has emerged as a promising low-cost technique to
enhance the training efficiency of deep neural networks without sacrificing
much accuracy. Its Bayesian counterpart can further provide uncertainty
quantification and improved generalization accuracy. This paper investigates
low-precision sampling via Stochastic Gradient Hamiltonian Monte Carlo (SGHMC)
with low-precision and full-precision gradient accumulators for both strongly
log-concave and non-log-concave distributions. Theoretically, our results show
that, to achieve $\epsilon$-error in the 2-Wasserstein distance for
non-log-concave distributions, low-precision SGHMC achieves quadratic
improvement
($\widetilde{\mathbf{O}}\left({\epsilon^{-2}{\mu^*}^{-2}\log^2\left({\epsilon^{-1}}\right)}\right)$)
compared to the state-of-the-art low-precision sampler, Stochastic Gradient
Langevin Dynamics (SGLD)
($\widetilde{\mathbf{O}}\left({{\epsilon}^{-4}{\lambda^{*}}^{-1}\log^5\left({\epsilon^{-1}}\right)}\right)$).
Moreover, we prove that low-precision SGHMC is more robust to the quantization
error compared to low-precision SGLD due to the robustness of the
momentum-based update w.r.t. gradient noise. Empirically, we conduct
experiments on synthetic data and the MNIST, CIFAR-10, and CIFAR-100 datasets,
which validate our theoretical findings. Our study highlights the potential of
low-precision SGHMC as an efficient and accurate sampling method for
large-scale and resource-limited machine learning.
|
Machine Learning, Machine Learning
|
Statistics
|
2202.10638
|
Alexander Immer
|
Invariance Learning in Deep Neural Networks with Differentiable Laplace
Approximations
|
stat.ML cs.LG
|
Data augmentation is commonly applied to improve performance of deep learning
by enforcing the knowledge that certain transformations on the input preserve
the output. Currently, the data augmentation parameters are chosen by human
effort and costly cross-validation, which makes it cumbersome to apply to new
datasets. We develop a convenient gradient-based method for selecting the data
augmentation without validation data during training of a deep neural network.
Our approach relies on phrasing data augmentation as an invariance in the prior
distribution on the functions of a neural network, which allows us to learn it
using Bayesian model selection. This has been shown to work in Gaussian
processes, but not yet for deep neural networks. We propose a differentiable
Kronecker-factored Laplace approximation to the marginal likelihood as our
objective, which can be optimised without human supervision or validation data.
We show that our method can successfully recover invariances present in the
data, and that this improves generalisation and data efficiency on image
datasets.
|
Machine Learning, Machine Learning
|
Statistics
|
2301.11697
|
Zhoufan Zhu
|
Big portfolio selection by graph-based conditional moments method
|
stat.ML cs.LG
|
Big portfolio selection is important but challenging for both
researchers and practitioners. In this paper, we propose a new graph-based
conditional moments (GRACE) method to do portfolio selection based on thousands
of stocks or more. The GRACE method first learns the conditional quantiles and
mean of stock returns via a factor-augmented temporal graph convolutional
network, which guides the learning procedure through a factor-hypergraph built
by the set of stock-to-stock relations from the domain knowledge as well as the
set of factor-to-stock relations from the asset pricing knowledge. Next, the
GRACE method learns the conditional variance, skewness, and kurtosis of stock
returns from the learned conditional quantiles by using the quantiled
conditional moment (QCM) method. The QCM method is a supervised learning
procedure to learn these conditional higher-order moments, so it largely
overcomes the computational difficulty from the classical high-dimensional
GARCH-type methods. Moreover, the QCM method tolerates mis-specification in
modeling conditional quantiles to some extent, due to its regression-based
nature. Finally, the GRACE method uses the learned conditional mean, variance,
skewness, and kurtosis to construct several performance measures, which are
criteria used to sort the stocks and carry out the portfolio selection in the
well-known 10-decile framework. An application to NASDAQ and NYSE stock markets
shows that the GRACE method performs much better than its competitors,
particularly when the performance measures are comprised of conditional
variance, skewness, and kurtosis.
|
Machine Learning, Machine Learning
|
Statistics
|
1601.04530
|
Robert Duin
|
Domain based classification
|
stat.ML cs.LG
|
The majority of traditional classification rules minimizing the expected
probability of error (0-1 loss) are inappropriate if the class probability
distributions are ill-defined or impossible to estimate. We argue that in such
cases class domains should be used instead of class distributions or densities
to construct a reliable decision function. Proposals are presented for some
evaluation criteria and classifier learning schemes, illustrated by an example.
|
Machine Learning, Machine Learning
|
Statistics
|
2002.03875
|
Jayaraman J. Thiagarajan
|
Calibrate and Prune: Improving Reliability of Lottery Tickets Through
Prediction Calibration
|
stat.ML cs.LG
|
The hypothesis that sub-network initializations (lottery) exist within the
initializations of over-parameterized networks, which when trained in isolation
produce highly generalizable models, has led to crucial insights into network
initialization and has enabled efficient inference. Supervised models with
uncalibrated confidences tend to be overconfident even when making wrong
predictions. In this paper, for the first time, we study how explicit confidence
calibration in the over-parameterized network impacts the quality of the
resulting lottery tickets. More specifically, we incorporate a suite of
calibration strategies, ranging from mixup regularization, variance-weighted
confidence calibration to the newly proposed likelihood-based calibration and
normalized bin assignment strategies. Furthermore, we explore different
combinations of architectures and datasets, and make a number of key findings
about the role of confidence calibration. Our empirical studies reveal that
including calibration mechanisms consistently leads to more effective lottery
tickets, in terms of accuracy as well as empirical calibration metrics, even
when retrained using data with challenging distribution shifts with respect to
the source dataset.
|
Machine Learning, Machine Learning
|
Statistics
|
1806.04819
|
Richard Nock
|
Integral Privacy for Sampling
|
stat.ML cs.LG
|
Differential privacy is a leading protection setting, focused by design on
individual privacy. Many applications, in medical / pharmaceutical domains or
social networks, rather posit privacy at a group level, a setting we call
integral privacy. We aim for the strongest form of privacy: the group size is
in particular not known in advance. We study a problem with related
applications in the domains cited above that have recently met with substantial
press: sampling.
  Keeping correct utility levels in such a strong model of statistical
indistinguishability looks difficult to achieve with the usual differential
privacy toolbox, because it would typically scale the sensitivity, in the worst
case, by the sample size, and hence the noise variance by up to its square.
We introduce a trick specific to sampling that bypasses the sensitivity
analysis. Privacy enforces an information theoretic barrier on approximation,
and we show how to reach this barrier with guarantees on the approximation of
the target non private density. We do so using a recent approach to non private
density estimation relying on the original boosting theory, learning the
sufficient statistics of an exponential family with classifiers. Approximation
guarantees cover the mode capture problem. In the context of learning, the
sampling problem is particularly important: because integral privacy enjoys the
same closure under post-processing as differential privacy does, any algorithm
using integrally privately sampled data produces an output that is equally
integrally private. We also show that this brings fairness guarantees on
post-processing that would eventually elude classical differential privacy: any
decision process has bounded data-dependent bias when the data is integrally
privately sampled. Experimental results against private kernel density
estimation and private GANs display the quality of our results.
|
Machine Learning, Machine Learning
|
Statistics
|
2011.02147
|
Chun-Na Li
|
Capped norm linear discriminant analysis and its applications
|
stat.ML cs.LG
|
Classical linear discriminant analysis (LDA) is based on the squared Frobenius
norm and hence is sensitive to outliers and noise. To improve the robustness of
LDA, in this paper, we introduce the capped l_{2,1}-norm of a matrix, which employs
the non-squared l_2-norm and a "capped" operation, and further propose a novel capped
l_{2,1}-norm linear discriminant analysis, called CLDA. Due to the use of
capped l_{2,1}-norm, CLDA can effectively remove extreme outliers and suppress
the effect of noise data. In fact, CLDA can be also viewed as a weighted LDA.
CLDA is solved through a series of generalized eigenvalue problems with
theoretical convergence guarantees. The experimental results on an artificial data set,
some UCI data sets and two image data sets demonstrate the effectiveness of
CLDA.
|
Machine Learning, Machine Learning
|
Statistics
|
2002.09438
|
Guang Cheng
|
Online Batch Decision-Making with High-Dimensional Covariates
|
stat.ML cs.LG
|
We propose and investigate a class of new algorithms for sequential decision
making that interacts with \textit{a batch of users} simultaneously instead of
\textit{a user} at each decision epoch. This type of batch model is motivated
by interactive marketing and clinical trials, where a group of people are
treated simultaneously and the outcomes of the whole group are collected before
the next stage of decision. In such a scenario, our goal is to allocate a batch
of treatments to maximize treatment efficacy based on observed high-dimensional
user covariates. We deliver a solution, named \textit{Teamwork LASSO Bandit
algorithm}, that resolves a batch version of explore-exploit dilemma via
switching between teamwork stage and selfish stage during the whole decision
process. This is made possible based on statistical properties of LASSO
estimate of treatment efficacy that adapts to a sequence of batch observations.
In general, a rate of optimal allocation condition is proposed to delineate the
exploration and exploitation trade-off on the data collection scheme, which is
sufficient for LASSO to identify the optimal treatment for observed user
covariates. An upper bound on expected cumulative regret of the proposed
algorithm is provided.
|
Machine Learning, Machine Learning
|
Statistics
|
2108.08752
|
Dai Feng
|
A Framework for an Assessment of the Kernel-target Alignment in Tree
Ensemble Kernel Learning
|
stat.ML cs.LG
|
Kernels ensuing from tree ensembles such as random forest (RF) or gradient
boosted trees (GBT), when used for kernel learning, have been shown to be
competitive to their respective tree ensembles (particularly in higher
dimensional scenarios). On the other hand, it has been also shown that
performance of the kernel algorithms depends on the degree of the kernel-target
alignment. However, the kernel-target alignment for kernel learning based on
the tree ensembles has not been investigated and filling this gap is the main
goal of our work.
Using the eigenanalysis of the kernel matrix, we demonstrate that for
continuous targets good performance of the tree-based kernel learning is
associated with strong kernel-target alignment. Moreover, we show that well
performing tree ensemble based kernels are characterized by strong target
aligned components that are expressed through scalar products between the
eigenvectors of the kernel matrix and the target. This suggests that when tree
ensemble based kernel learning is successful, relevant information for the
supervised problem is concentrated near a lower-dimensional manifold spanned by
the target aligned components. Persistence of the strong target aligned
components in tree ensemble based kernels is further supported by sensitivity
analysis via landmark learning. In addition to a comprehensive simulation
study, we also provide experimental results from several real life data sets
that are in line with the simulations.
|
Machine Learning, Machine Learning
|
Statistics
|
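The quantities discussed above (a tree-ensemble kernel, its kernel-target alignment, and the target energy captured by the top eigenvectors) can be computed directly; the dataset, forest size, and the Cristianini-style alignment formula below are illustrative assumptions, not the paper's exact protocol.

```python
# Tree-ensemble kernel, kernel-target alignment, and eigen-analysis (illustrative).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=300, n_features=10, n_informative=3,
                       noise=5.0, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

leaves = rf.apply(X)                     # (n_samples, n_trees) leaf indices
# RF kernel: fraction of trees in which two points land in the same leaf.
K = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)

yc = y - y.mean()
alignment = (yc @ K @ yc) / (np.linalg.norm(K, "fro") * (yc @ yc))
print("kernel-target alignment:", alignment)

vals, vecs = np.linalg.eigh(K)           # eigen-analysis of the kernel matrix
proj = (vecs[:, -10:].T @ yc) ** 2       # target energy on the top-10 eigenvectors
print("share of target energy in top-10 eigenvectors:", proj.sum() / (yc @ yc))
```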
2203.16662
|
Christopher Beckham
|
Overcoming challenges in leveraging GANs for few-shot data augmentation
|
stat.ML cs.LG
|
In this paper, we explore the use of GAN-based few-shot data augmentation as
a method to improve few-shot classification performance. We perform an
exploration into how a GAN can be fine-tuned for such a task (including
in a class-incremental manner), as well as a rigorous empirical investigation
into how well these models can perform to improve few-shot classification. We
identify issues related to the difficulty of training such generative models
under a purely supervised regime with very few examples, as well as issues
regarding the evaluation protocols of existing works. We also find that in this
regime, classification accuracy is highly sensitive to how the classes of the
dataset are randomly split. Therefore, we propose a semi-supervised fine-tuning
approach as a more pragmatic way forward to address these problems.
|
Machine Learning, Machine Learning
|
Statistics
|
1807.05748
|
Cagatay Yildiz
|
Learning Stochastic Differential Equations With Gaussian Processes
Without Gradient Matching
|
stat.ML cs.LG
|
We introduce a novel paradigm for learning non-parametric drift and diffusion
functions for stochastic differential equations (SDEs). The proposed model learns
to simulate path distributions that match observations with non-uniform time
increments and arbitrary sparseness, which is in contrast with gradient
matching that does not optimize simulated responses. We formulate sensitivity
equations for learning and demonstrate that our general stochastic distribution
optimisation leads to robust and efficient learning of SDE systems.
|
Machine Learning, Machine Learning
|
Statistics
|
2010.08529
|
Tianyi Yao
|
Feature Selection for Huge Data via Minipatch Learning
|
stat.ML cs.LG
|
Feature selection often leads to increased model interpretability, faster
computation, and improved model performance by discarding irrelevant or
redundant features. While feature selection is a well-studied problem with many
widely-used techniques, there are typically two key challenges: i) many
existing approaches become computationally intractable in huge-data settings
with millions of observations and features; and ii) the statistical accuracy of
selected features degrades in high-noise, high-correlation settings, thus
hindering reliable model interpretation. We tackle these problems by proposing
Stable Minipatch Selection (STAMPS) and Adaptive STAMPS (AdaSTAMPS). These are
meta-algorithms that build ensembles of selection events of base feature
selectors trained on many tiny, (adaptively-chosen) random subsets of both the
observations and features of the data, which we call minipatches. Our
approaches are general and can be employed with a variety of existing feature
selection strategies and machine learning techniques. In addition, we provide
theoretical insights on STAMPS and empirically demonstrate that our approaches,
especially AdaSTAMPS, dominate competing methods in terms of feature selection
accuracy and computational time.
|
Machine Learning, Machine Learning
|
Statistics
|
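A bare-bones, non-adaptive version of the minipatch ensemble described above can be sketched with a Lasso base selector: sample tiny random subsets of rows and columns, record which features get selected, and rank features by their selection frequency. The data sizes, base selector, and its penalty are assumptions; AdaSTAMPS additionally adapts the minipatch sampling.

```python
# Non-adaptive minipatch feature selection sketch (STAMPS-flavored, Lasso base selector).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 1000, 200
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:5] = [3, -2, 2, -1.5, 1]   # five truly relevant features
y = X @ beta + rng.normal(size=n)

n_patches, n_obs, n_feat = 300, 100, 40
counts, tried = np.zeros(p), np.zeros(p)
for _ in range(n_patches):
    rows = rng.choice(n, n_obs, replace=False)       # a minipatch: tiny subset of
    cols = rng.choice(p, n_feat, replace=False)      # both observations and features
    sel = Lasso(alpha=0.1).fit(X[np.ix_(rows, cols)], y[rows]).coef_ != 0
    counts[cols] += sel
    tried[cols] += 1

freq = counts / np.maximum(tried, 1)                 # per-feature selection stability
print("top features by selection frequency:", np.sort(np.argsort(freq)[-5:]))
```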
1903.05594
|
Daniele Calandriello
|
Gaussian Process Optimization with Adaptive Sketching: Scalable and No
Regret
|
stat.ML cs.LG
|
Gaussian processes (GPs) are a well-studied Bayesian approach for the
optimization of black-box functions. Despite their effectiveness in simple
problems, GP-based algorithms hardly scale to high-dimensional functions, as
their per-iteration time and space cost is at least quadratic in the number of
dimensions $d$ and iterations $t$. Given a set of $A$ alternatives to choose
from, the overall runtime $O(t^3A)$ is prohibitive. In this paper we introduce
BKB (budgeted kernelized bandit), a new approximate GP algorithm for
optimization under bandit feedback that achieves near-optimal regret (and hence
near-optimal convergence rate) with near-constant per-iteration complexity and
remarkably, no assumption on the input space or covariance of the GP.
We combine a kernelized linear bandit algorithm (GP-UCB) with randomized
matrix sketching based on leverage score sampling, and we prove that randomly
sampling inducing points based on their posterior variance gives an accurate
low-rank approximation of the GP, preserving variance estimates and confidence
intervals. As a consequence, BKB does not suffer from variance starvation, an
important problem faced by many previous sparse GP approximations. Moreover, we
show that our procedure selects at most $\tilde{O}(d_{eff})$ points, where
$d_{eff}$ is the effective dimension of the explored space, which is typically
much smaller than both $d$ and $t$. This greatly reduces the dimensionality of
the problem, thus leading to a $O(TAd_{eff}^2)$ runtime and $O(A d_{eff})$
space complexity.
|
Machine Learning, Machine Learning
|
Statistics
|
2109.09417
|
David Burt
|
Barely Biased Learning for Gaussian Process Regression
|
stat.ML cs.LG
|
Recent work in scalable approximate Gaussian process regression has discussed
a bias-variance-computation trade-off when estimating the log marginal
likelihood. We suggest a method that adaptively selects the amount of
computation to use when estimating the log marginal likelihood so that the bias
of the objective function is guaranteed to be small. While simple in principle,
our current implementation of the method is not competitive computationally
with existing approximations.
|
Machine Learning, Machine Learning
|
Statistics
|
2108.00781
|
Liam Hodgkinson
|
Generalization Bounds using Lower Tail Exponents in Stochastic
Optimizers
|
stat.ML cs.LG
|
Despite the ubiquitous use of stochastic optimization algorithms in machine
learning, the precise impact of these algorithms and their dynamics on
generalization performance in realistic non-convex settings is still poorly
understood. While recent work has revealed connections between generalization
and heavy-tailed behavior in stochastic optimization, this work mainly relied
on continuous-time approximations; and a rigorous treatment for the original
discrete-time iterations is yet to be performed. To bridge this gap, we present
novel bounds linking generalization to the lower tail exponent of the
transition kernel associated with the optimizer around a local minimum, in both
discrete- and continuous-time settings. To achieve this, we first prove a data-
and algorithm-dependent generalization bound in terms of the celebrated
Fernique-Talagrand functional applied to the trajectory of the optimizer. Then,
we specialize this result by exploiting the Markovian structure of stochastic
optimizers, and derive bounds in terms of their (data-dependent) transition
kernels. We support our theory with empirical results from a variety of neural
networks, showing correlations between generalization error and lower tail
exponents.
|
Machine Learning, Machine Learning
|
Statistics
|
2104.15046
|
Simen Eide
|
Dynamic Slate Recommendation with Gated Recurrent Units and Thompson
Sampling
|
stat.ML cs.LG
|
We consider the problem of recommending relevant content to users of an
internet platform in the form of lists of items, called slates. We introduce a
variational Bayesian Recurrent Neural Net recommender system that acts on time
series of interactions between the internet platform and the user, and which
scales to real world industrial situations. The recommender system is tested
both online on real users, and on an offline dataset collected from a Norwegian
web-based marketplace, FINN.no, that is made public for research. This is one
of the first publicly available datasets which includes all the slates that are
presented to users as well as which items (if any) in the slates were clicked
on. Such a data set allows us to move beyond the common modelling approach
that implicitly assumes users consider all possible items at each interaction.
Instead, we build our likelihood using the items that are actually
in the slate, and evaluate the strengths and weaknesses of both approaches
theoretically and in experiments. We also introduce a hierarchical prior for
the item parameters based on group memberships. Both item parameters and user
preferences are learned probabilistically. Furthermore, we combine our model
with bandit strategies to ensure learning, and introduce `in-slate Thompson
Sampling' which makes use of the slates to maximise explorative opportunities.
We show experimentally that explorative recommender strategies perform on par
or above their greedy counterparts. Even without making use of exploration to
learn more effectively, click rates increase simply because of improved
diversity in the recommended slates.
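A toy version of slate-level Thompson sampling (Beta-Bernoulli item posteriors
standing in for the paper's variational recurrent model; all constants
illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, slate_size = 20, 5
true_ctr = rng.uniform(0.01, 0.2, size=n_items)   # hidden click-through rates
alpha, beta = np.ones(n_items), np.ones(n_items)  # Beta posterior per item

for t in range(5000):
    theta = rng.beta(alpha, beta)              # one posterior draw per item
    slate = np.argsort(-theta)[:slate_size]    # slate = top-k sampled scores
    clicks = rng.random(slate_size) < true_ctr[slate]
    alpha[slate] += clicks                     # update only items in the slate,
    beta[slate] += ~clicks                     # i.e. actually shown to the user

print("posterior-mean top-5:",
      np.sort(np.argsort(-alpha / (alpha + beta))[:slate_size]))
print("true top-5:          ", np.sort(np.argsort(-true_ctr)[:slate_size]))
```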
|
Machine Learning, Machine Learning
|
Statistics
|
2211.01903
|
Luca Rendsburg
|
A Consistent Estimator for Confounding Strength
|
stat.ML cs.LG
|
Regression on observational data can fail to capture a causal relationship in
the presence of unobserved confounding. Confounding strength measures this
mismatch, but estimating it itself requires additional assumptions. A common
assumption is the independence of causal mechanisms, which relies on
concentration phenomena in high dimensions. While high dimensions enable the
estimation of confounding strength, they also necessitate adapted estimators.
In this paper, we derive the asymptotic behavior of the confounding strength
estimator by Janzing and Sch\"olkopf (2018) and show that it is generally not
consistent. We then use tools from random matrix theory to derive an adapted,
consistent estimator.
|
Machine Learning, Machine Learning
|
Statistics
|
2010.11994
|
Kaito Ariu
|
Thresholded Lasso Bandit
|
stat.ML cs.LG
|
In this paper, we revisit the regret minimization problem in sparse
stochastic contextual linear bandits, where feature vectors may be of large
dimension $d$, but where the reward function depends on a few, say $s_0\ll d$,
of these features only. We present Thresholded Lasso bandit, an algorithm that
(i) estimates the vector defining the reward function as well as its sparse
support, i.e., significant feature elements, using the Lasso framework with
thresholding, and (ii) selects an arm greedily according to this estimate
projected on its support. The algorithm does not require prior knowledge of the
sparsity index $s_0$ and can be parameter-free under some symmetric
assumptions. For this simple algorithm, we establish non-asymptotic regret
upper bounds scaling as $\mathcal{O}( \log d + \sqrt{T} )$ in general, and as
$\mathcal{O}( \log d + \log T)$ under the so-called margin condition (a
probabilistic condition on the separation of the arm rewards). The regret of
previous algorithms scales as $\mathcal{O}( \log d + \sqrt{T \log (d T)})$ and
$\mathcal{O}( \log T \log d)$ in the two settings, respectively. Through
numerical experiments, we confirm that our algorithm outperforms existing
methods.
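A minimal sketch of the algorithm's two steps (illustrative regularization
schedule and exploration phase, not the paper's exact tuning):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, K, T, T0 = 100, 10, 500, 50    # dims, arms, horizon, forced exploration
theta = np.zeros(d)
theta[:3] = [1.0, -0.8, 0.6]      # sparse reward parameter, s_0 = 3

contexts, rewards = [], []
for t in range(1, T + 1):
    arms = rng.normal(size=(K, d))
    if t <= T0:
        a = rng.integers(K)
    else:
        lam = 0.05 * np.sqrt(np.log(d) / t)      # illustrative schedule
        lasso = Lasso(alpha=lam, max_iter=5000).fit(np.array(contexts),
                                                    np.array(rewards))
        support = np.abs(lasso.coef_) > lam      # (i) threshold the estimate
        est = np.where(support, lasso.coef_, 0.0)
        a = int(np.argmax(arms @ est))           # (ii) greedy on the support
    contexts.append(arms[a])
    rewards.append(arms[a] @ theta + 0.1 * rng.normal())

print("estimated support:", np.flatnonzero(support))  # expected: [0 1 2]
```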
|
Machine Learning, Machine Learning
|
Statistics
|
1605.08636
|
Pascal Germain
|
PAC-Bayesian Theory Meets Bayesian Inference
|
stat.ML cs.LG
|
We exhibit a strong link between frequentist PAC-Bayesian risk bounds and the
Bayesian marginal likelihood. That is, for the negative log-likelihood loss
function, we show that the minimization of PAC-Bayesian generalization risk
bounds maximizes the Bayesian marginal likelihood. This provides an alternative
explanation to the Bayesian Occam's razor criteria, under the assumption that
the data is generated by an i.i.d. distribution. Moreover, as the negative
log-likelihood is an unbounded loss function, we motivate and propose a
PAC-Bayesian theorem tailored for the sub-gamma loss family, and we show that
our approach is sound on classical Bayesian linear regression tasks.
|
Machine Learning, Machine Learning
|
Statistics
|
1803.10840
|
Uri Shaham
|
Defending against Adversarial Images using Basis Functions
Transformations
|
stat.ML cs.LG
|
We study the effectiveness of various approaches that defend against
adversarial attacks on deep networks via manipulations based on basis function
representations of images. Specifically, we experiment with low-pass filtering,
PCA, JPEG compression, low resolution wavelet approximation, and
soft-thresholding. We evaluate these defense techniques using three types of
popular attacks in black, gray and white-box settings. Our results show JPEG
compression tends to outperform the other tested defenses in most of the
settings considered, in addition to soft-thresholding, which performs well in
specific cases, and yields a more mild decrease in accuracy on benign examples.
In addition, we mathematically derive a novel white-box attack in which
the adversarial perturbation is composed only of terms corresponding to a
pre-determined subset of the basis functions, of which a "low frequency attack"
is a special case.
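As a rough illustration of one such basis-function defense (a minimal sketch
with an illustrative threshold and a synthetic smooth image, not the paper's
evaluated pipeline):

```python
import numpy as np
from scipy.fft import dctn, idctn

def soft_threshold_defense(image, tau=0.05):
    """Shrink the image's 2-D DCT coefficients toward zero and reconstruct,
    suppressing the small high-frequency terms adversarial noise lives in."""
    coeffs = dctn(image, norm="ortho")
    shrunk = np.sign(coeffs) * np.maximum(np.abs(coeffs) - tau, 0.0)
    return np.clip(idctn(shrunk, norm="ortho"), 0.0, 1.0)

rng = np.random.default_rng(0)
t = np.linspace(0.0, np.pi, 28)
img = 0.5 + 0.5 * np.outer(np.sin(t), np.cos(2 * t))  # smooth benign input
adv = np.clip(img + 0.03 * np.sign(rng.normal(size=img.shape)), 0.0, 1.0)
# The defense should remove much of the high-frequency perturbation.
print("perturbation before/after defense:",
      round(np.abs(adv - img).mean(), 4),
      round(np.abs(soft_threshold_defense(adv) - img).mean(), 4))
```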
|
Machine Learning, Machine Learning
|
Statistics
|
2308.14142
|
Talay Cheema
|
Integrated Variational Fourier Features for Fast Spatial Modelling with
Gaussian Processes
|
stat.ML cs.LG
|
Sparse variational approximations are popular methods for scaling up
inference and learning in Gaussian processes to larger datasets. For $N$
training points, exact inference has $O(N^3)$ cost; with $M \ll N$ features,
state-of-the-art sparse variational methods have $O(NM^2)$ cost. Recently,
methods have been proposed using more sophisticated features; these promise
$O(M^3)$ cost, with good performance in low dimensional tasks such as spatial
modelling, but they only work with a very limited class of kernels, excluding
some of the most commonly used. In this work, we propose integrated Fourier
features, which extend these performance benefits to a very broad class of
stationary covariance functions. We motivate the method and choice of
parameters from a convergence analysis and empirical exploration, and show
practical speedup in synthetic and real world spatial regression tasks.
|
Machine Learning, Machine Learning
|
Statistics
|
2107.01658
|
Aramayis Dallakyan
|
Learning Bayesian Networks through Birkhoff Polytope: A Relaxation
Method
|
stat.ML cs.LG
|
We establish a novel framework for learning a directed acyclic graph (DAG)
when data are generated from a Gaussian, linear structural equation model. It
consists of two parts: (1) introduce a permutation matrix as a new parameter
within a regularized Gaussian log-likelihood to represent variable ordering;
and (2) given the ordering, estimate the DAG structure through sparse Cholesky
factor of the inverse covariance matrix. For permutation matrix estimation, we
propose a relaxation technique that avoids the NP-hard combinatorial problem of
order estimation. Given an ordering, a sparse Cholesky factor is estimated
using a cyclic coordinatewise descent algorithm which decouples row-wise. Our
framework recovers DAGs without the need for an expensive verification of the
acyclicity constraint or enumeration of possible parent sets. We establish
numerical convergence of the algorithm, and consistency of the Cholesky factor
estimator when the order of variables is known. Through several simulated and
macro-economic datasets, we study the scope and performance of the proposed
methodology.
|
Machine Learning, Machine Learning
|
Statistics
|
2201.10780
|
Tianyu Wang
|
On Sharp Stochastic Zeroth Order Hessian Estimators over Riemannian
Manifolds
|
stat.ML cs.LG cs.NA math.NA
|
We study Hessian estimators for functions defined over an $n$-dimensional
complete analytic Riemannian manifold. We introduce new stochastic zeroth-order
Hessian estimators using $O(1)$ function evaluations. We show that, for an
analytic real-valued function $f$, our estimator achieves a bias bound of order
$ O \left( \gamma \delta^2 \right) $, where $ \gamma $ depends on both the
Levi-Civita connection and function $f$, and $\delta$ is the finite difference
step size. To the best of our knowledge, our results provide the first bias
bound for Hessian estimators that explicitly depends on the geometry of the
underlying Riemannian manifold. We also study downstream computations based on
our Hessian estimators. The advantages of our method are demonstrated by
empirical evaluations.
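For intuition in the Euclidean special case, an $O(1)$-evaluation estimator
can be formed from a four-point finite difference along two Gaussian
directions (a hedged sketch; the paper's estimators act on Riemannian
manifolds through the exponential map and Levi-Civita connection):

```python
import numpy as np

def zo_hessian_estimate(f, x, delta=1e-3, n_samples=20000, seed=0):
    """Average rank-one estimates c(u, v) * u v^T, where the four-point
    central difference c(u, v) approximates u^T H v using 4 evaluations.
    For independent standard-normal u, v, E[u (u^T H v) v^T] = H, so the
    average is consistent up to O(delta^2) finite-difference bias."""
    rng = np.random.default_rng(seed)
    n = x.size
    H = np.zeros((n, n))
    for _ in range(n_samples):
        u, v = rng.normal(size=n), rng.normal(size=n)
        c = (f(x + delta * (u + v)) - f(x + delta * (u - v))
             - f(x - delta * (u - v)) + f(x - delta * (u + v))) / (4 * delta**2)
        H += c * np.outer(u, v)
    return H / n_samples

A = np.array([[2.0, 0.5], [0.5, 1.0]])
f = lambda z: 0.5 * z @ A @ z          # quadratic test function, Hessian = A
print(np.round(zo_hessian_estimate(f, np.zeros(2)), 1))  # approximately A
```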
|
Machine Learning, Machine Learning, Numerical Analysis, Numerical Analysis
|
Statistics
|
1612.01251
|
Pedro Tabacof
|
Known Unknowns: Uncertainty Quality in Bayesian Neural Networks
|
stat.ML cs.LG cs.NE
|
We evaluate the uncertainty quality in neural networks using anomaly
detection. We extract uncertainty measures (e.g. entropy) from the predictions
of candidate models, use those measures as features for an anomaly detector,
and gauge how well the detector differentiates known from unknown classes. We
assign higher uncertainty quality to candidate models that lead to better
detectors. We also propose a novel method for sampling a variational
approximation of a Bayesian neural network, called One-Sample Bayesian
Approximation (OSBA). We experiment on two datasets, MNIST and CIFAR10. We
compare the following candidate neural network models: Maximum Likelihood,
Bayesian Dropout, OSBA, and, for MNIST, the standard variational
approximation. We show that Bayesian Dropout and OSBA provide better
uncertainty information than Maximum Likelihood, and are essentially equivalent
to the standard variational approximation, but much faster.
|
Machine Learning, Machine Learning, Neural and Evolutionary Computing
|
Statistics
|
1906.02773
|
Ari Morcos
|
One ticket to win them all: generalizing lottery ticket initializations
across datasets and optimizers
|
stat.ML cs.LG cs.NE
|
The success of lottery ticket initializations (Frankle and Carbin, 2019)
suggests that small, sparsified networks can be trained so long as the network
is initialized appropriately. Unfortunately, finding these "winning ticket"
initializations is computationally expensive. One potential solution is to
reuse the same winning tickets across a variety of datasets and optimizers.
However, the generality of winning ticket initializations remains unclear.
Here, we attempt to answer this question by generating winning tickets for one
training configuration (optimizer and dataset) and evaluating their performance
on another configuration. Perhaps surprisingly, we found that, within the
natural images domain, winning ticket initializations generalized across a
variety of datasets, including Fashion MNIST, SVHN, CIFAR-10/100, ImageNet, and
Places365, often achieving performance close to that of winning tickets
generated on the same dataset. Moreover, winning tickets generated using larger
datasets consistently transferred better than those generated using smaller
datasets. We also found that winning ticket initializations generalize across
optimizers with high performance. These results suggest that winning ticket
initializations generated by sufficiently large datasets contain inductive
biases generic to neural networks more broadly which improve training across
many settings and provide hope for the development of better initialization
methods.
|
Machine Learning, Machine Learning, Neural and Evolutionary Computing
|
Statistics
|
2105.06031
|
Yifeng Fan
|
Joint Community Detection and Rotational Synchronization via
Semidefinite Programming
|
stat.ML cs.LG cs.SI
|
In the presence of heterogeneous data, where randomly rotated objects fall
into multiple underlying categories, it is challenging to simultaneously
classify them into clusters and synchronize them based on pairwise relations.
This gives rise to the joint problem of community detection and
synchronization. We propose a series of semidefinite relaxations, and prove
their exact recovery when extending the celebrated stochastic block model to
this new setting where both rotations and cluster identities are to be
determined. Numerical experiments demonstrate the efficacy of our proposed
algorithms and confirm our theoretical result which indicates a sharp phase
transition for exact recovery.
|
Machine Learning, Machine Learning, Social and Information Networks
|
Statistics
|
2005.04112
|
Arun Venkitaraman
|
On Training and Evaluation of Neural Network Approaches for Model
Predictive Control
|
stat.ML cs.LG cs.SY eess.SY
|
The contribution of this paper is a framework for training and evaluation of
Model Predictive Control (MPC) implemented using constrained neural networks.
Recent studies have proposed to use neural networks with differentiable convex
optimization layers to implement model predictive controllers. The motivation
is to replace real-time optimization in safety critical feedback control
systems with learnt mappings in the form of neural networks with optimization
layers. Such mappings take as input the state vector and predict the
control law as the output. The learning takes place using training data
generated from off-line MPC simulations. However, a general framework for
characterization of learning approaches in terms of both model validation and
efficient training data generation is lacking in the literature. In this paper, we
take the first steps towards developing such a coherent framework. We discuss
how the learning problem has similarities with system identification, in
particular input design, model structure selection and model validation. We
consider the study of neural network architectures in PyTorch with the explicit
MPC constraints implemented as a differentiable optimization layer using CVXPY.
We propose an efficient approach for generating MPC input samples subject to the
MPC model constraints using a hit-and-run sampler. The corresponding true
outputs are generated by solving the MPC offline using OSQP. We propose
different metrics to validate the resulting approaches. Our study further aims
to explore the advantages of incorporating domain knowledge into the network
structure from a training and evaluation perspective. Different model
structures are numerically tested using the proposed framework in order to
obtain more insight into the properties of constrained neural network-based
MPC.
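A minimal sketch of the differentiable-optimization building block described
above, using the cvxpylayers interface to CVXPY (the dynamics matrices and box
constraint are illustrative, not the paper's benchmark):

```python
import cvxpy as cp
import numpy as np
import torch
from cvxpylayers.torch import CvxpyLayer

n, m = 4, 2                                 # illustrative state/input dimensions
rng = np.random.default_rng(0)
A = np.eye(n) + 0.1 * rng.normal(size=(n, n))
B = rng.normal(size=(n, m))

x_param = cp.Parameter(n)                   # current state enters as a parameter
u_var = cp.Variable(m)
cost = cp.sum_squares(A @ x_param + B @ u_var) + 0.1 * cp.sum_squares(u_var)
problem = cp.Problem(cp.Minimize(cost), [cp.norm(u_var, "inf") <= 1.0])
mpc_layer = CvxpyLayer(problem, parameters=[x_param], variables=[u_var])

# The layer is differentiable: gradients flow from the control back to the
# state (and, in a full architecture, to upstream network weights).
x = torch.randn(n, requires_grad=True)
u_star, = mpc_layer(x)
u_star.sum().backward()
print(u_star, x.grad)
```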
|
Machine Learning, Machine Learning, Systems and Control, Systems and Control
|
Statistics
|
2103.03635
|
Arthur Charpentier
|
Autocalibration and Tweedie-dominance for Insurance Pricing with Machine
Learning
|
stat.ML cs.LG econ.EM
|
Boosting techniques and neural networks are particularly effective machine
learning methods for insurance pricing. Often in practice, there are
nevertheless endless debates about the choice of the right loss function to be
used to train the machine learning model, as well as about the appropriate
metric to assess the performances of competing models. Also, the sum of fitted
values can depart from the observed totals to a large extent and this often
confuses actuarial analysts. The lack of balance inherent to training models by
minimizing deviance outside the familiar GLM with canonical link setting has
been empirically documented in W\"uthrich (2019, 2020) who attributes it to the
early stopping rule in gradient descent methods for model fitting. The present
paper aims to further study this phenomenon when learning proceeds by
minimizing Tweedie deviance. It is shown that minimizing deviance involves a
trade-off between the integral of weighted differences of lower partial moments
and the bias measured on a specific scale. Autocalibration is then proposed as
a remedy. This new method to correct for bias adds an extra local GLM step to
the analysis. Theoretically, it is shown that it implements the autocalibration
concept in pure premium calculation and ensures that balance also holds on a
local scale, not only at portfolio level as with existing bias-correction
techniques. The convex order appears to be the natural tool to compare
competing models, shedding new light on the diagnostic graphs and associated
metrics proposed by Denuit et al. (2019).
|
Machine Learning, Machine Learning, Econometrics
|
Statistics
|
1810.03743
|
LuoLuo Liu
|
JOBS: Joint-Sparse Optimization from Bootstrap Samples
|
stat.ML cs.LG eess.SP
|
Classical signal recovery based on $\ell_1$ minimization solves the least
squares problem with all available measurements via sparsity-promoting
regularization. In practice, it is often the case that not all measurements are
available or required for recovery. Measurements might be corrupted/missing or
they arrive sequentially in streaming fashion. In this paper, we propose a
global sparse recovery strategy based on subsets of measurements, named JOBS,
in which multiple measurement vectors are generated from the original pool of
measurements via bootstrapping, and then a joint-sparse constraint is enforced
to ensure support consistency among multiple predictors. The final estimate is
obtained by averaging over the $K$ predictors. The performance limits
associated with different choices of number of bootstrap samples $L$ and number
of estimates $K$ are analyzed theoretically. Simulation results validate some of
the theoretical analysis, and show that the proposed method yields
state-of-the-art recovery performance, outperforming $\ell_1$ minimization and
a few other existing bootstrap-based techniques in the challenging case of low
levels of measurements, and that it is preferable to other bagging-based
methods in the streaming setting, since it performs better with small $K$ and
$L$ on large data sets.
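A simplified stand-in for the pipeline above (independent Lasso solves on
bootstrapped measurements, averaged over $K$ predictors, rather than the joint
row-sparse $\ell_{2,1}$-constrained program; all constants illustrative):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d, s = 60, 200, 5                        # few measurements, sparse signal
x_true = np.zeros(d)
x_true[rng.choice(d, s, replace=False)] = rng.normal(size=s)
Phi = rng.normal(size=(n, d)) / np.sqrt(n)
y = Phi @ x_true + 0.01 * rng.normal(size=n)

K, L = 10, 45                               # K predictors, L bootstrap rows each
estimates = []
for _ in range(K):
    rows = rng.choice(n, size=L, replace=True)  # bootstrap the measurements
    estimates.append(Lasso(alpha=0.005, max_iter=10000)
                     .fit(Phi[rows], y[rows]).coef_)
x_hat = np.mean(estimates, axis=0)          # average over the K predictors
print("true support covered:",
      set(np.flatnonzero(np.abs(x_hat) > 0.05)) >= set(np.flatnonzero(x_true)))
```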
|
Machine Learning, Machine Learning, Signal Processing
|
Statistics
|
2105.14866
|
Alexander Camuto
|
Variational Autoencoders: A Harmonic Perspective
|
stat.ML cs.LG eess.SP
|
In this work we study Variational Autoencoders (VAEs) from the perspective of
harmonic analysis. By viewing a VAE's latent space as a Gaussian Space, a
variety of measure space, we derive a series of results that show that the
encoder variance of a VAE controls the frequency content of the functions
parameterised by the VAE encoder and decoder neural networks. In particular we
demonstrate that larger encoder variances reduce the high frequency content of
these functions. Our analysis allows us to show that increasing this variance
effectively induces a soft Lipschitz constraint on the decoder network of a
VAE, which is a core contributor to the adversarial robustness of VAEs. We
further demonstrate that adding Gaussian noise to the input of a VAE allows us
to more finely control the frequency content and the Lipschitz constant of the
VAE encoder networks. To support our theoretical analysis we run experiments
with VAEs with small fully-connected neural networks and with larger
convolutional networks, demonstrating empirically that our theory holds for a
variety of neural network architectures.
|
Machine Learning, Machine Learning, Signal Processing
|
Statistics
|
2211.11103
|
Steffen Ridderbusch
|
The Past Does Matter: Correlation of Subsequent States in Trajectory
Predictions of Gaussian Process Models
|
stat.ML cs.LG math.DS
|
Computing the distribution of trajectories from a Gaussian Process model of a
dynamical system is an important challenge in utilizing such models. Motivated
by the computational cost of sampling-based approaches, we consider
approximations of the model's output and trajectory distribution. We show that
previous work on uncertainty propagation, focussed on discrete state-space
models, incorrectly included an independence assumption between subsequent
states of the predicted trajectories. Expanding these ideas to continuous
ordinary differential equation models, we illustrate the implications of this
assumption and propose a novel piecewise linear approximation of Gaussian
Processes to mitigate them.
|
Machine Learning, Machine Learning, Dynamical Systems
|
Statistics
|
2405.08253
|
Alba Olivares Nadal
|
Thompson Sampling for Infinite-Horizon Discounted Decision Processes
|
stat.ML cs.LG math.OC
|
We model a Markov decision process, parametrized by an unknown parameter, and
study the asymptotic behavior of a sampling-based algorithm, called Thompson
sampling. The standard definition of regret is not always suitable to evaluate
a policy, especially when the underlying chain structure is general. We show
that the standard (expected) regret can grow (super-)linearly and fails to
capture the notion of learning in realistic settings with non-trivial state
evolution. By decomposing the standard (expected) regret, we develop a new
metric, called the expected residual regret, which forgets the immutable
consequences of past actions. Instead, it measures regret against the optimal
reward moving forward from the current period. We show that the expected
residual regret of the Thompson sampling algorithm is upper bounded by a term
which converges exponentially fast to 0. We present conditions under which the
posterior sampling error of Thompson sampling converges to 0 almost surely. We
then introduce the probabilistic version of the expected residual regret and
present conditions under which it converges to 0 almost surely. Thus, we
provide a viable concept of learning for sampling algorithms which will serve
useful in broader settings than had been considered previously.
|
Machine Learning, Machine Learning, Optimization and Control
|
Statistics
|
1806.03763
|
Thomas Pumir
|
Smoothed analysis of the low-rank approach for smooth semidefinite
programs
|
stat.ML cs.LG math.OC
|
We consider semidefinite programs (SDPs) of size $n$ with equality constraints.
In order to overcome scalability issues, Burer and Monteiro proposed a
factorized approach based on optimizing over a matrix $Y$ of size $n$ by $k$ such
that $X = YY^*$ is the SDP variable. The advantages of such a formulation are
twofold: the dimension of the optimization variable is reduced and positive
semidefiniteness is naturally enforced. However, the problem in $Y$ is
non-convex. In prior work, it has been shown that, when the constraints on the
factorized variable regularly define a smooth manifold, provided $k$ is large
enough, for almost all cost matrices, all second-order stationary points
(SOSPs) are optimal. Importantly, in practice, one can only compute points
which approximately satisfy necessary optimality conditions, leading to the
question: are such points also approximately optimal? To this end, and under
similar assumptions, we use smoothed analysis to show that approximate SOSPs
for a randomly perturbed objective function are approximate global optima, with
$k$ scaling like the square root of the number of constraints (up to log
factors). Moreover, we bound the optimality gap at the approximate solution of
the perturbed problem with respect to the original problem. We particularize
our results to an SDP relaxation of phase retrieval.
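For intuition, a minimal Riemannian gradient sketch of the factorized approach
for the special case $\mathrm{diag}(X) = 1$ (a max-cut-type SDP with unit-norm
rows of $Y$; the paper treats general smooth equality constraints):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 30, 8                              # k ~ sqrt(2n) suffices in theory
C = rng.normal(size=(n, n))
C = (C + C.T) / 2

# Burer-Monteiro factorization of  min <C, X>  s.t.  X >= 0, diag(X) = 1:
# optimize Y (n x k) with unit-norm rows so that X = Y Y^T is feasible.
Y = rng.normal(size=(n, k))
Y /= np.linalg.norm(Y, axis=1, keepdims=True)
for it in range(2000):
    G = 2 * C @ Y                                     # Euclidean gradient
    G -= np.sum(G * Y, axis=1, keepdims=True) * Y     # project to tangent space
    Y -= 1e-2 * G
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)     # retract to unit rows
print("objective:", np.sum(C * (Y @ Y.T)))
```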
|
Machine Learning, Machine Learning, Optimization and Control
|
Statistics
|
2311.06138
|
Jiyoung Park
|
Minimum norm interpolation by perceptra: Explicit regularization and
implicit bias
|
stat.ML cs.LG math.OC
|
We investigate how shallow ReLU networks interpolate between known regions.
Our analysis shows that empirical risk minimizers converge to a minimum norm
interpolant as the number of data points and parameters tends to infinity when
a weight decay regularizer is penalized with a coefficient which vanishes at a
precise rate as the network width and the number of data points grow. With and
without explicit regularization, we numerically study the implicit bias of
common optimization algorithms towards known minimum norm interpolants.
|
Machine Learning, Machine Learning, Optimization and Control
|
Statistics
|
1312.5023
|
Matt Wytock
|
Contextually Supervised Source Separation with Application to Energy
Disaggregation
|
stat.ML cs.LG math.OC
|
We propose a new framework for single-channel source separation that lies
between the fully supervised and unsupervised setting. Instead of supervision,
we provide input features for each source signal and use convex methods to
estimate the correlations between these features and the unobserved signal
decomposition. We analyze the case of $\ell_2$ loss theoretically and show that
recovery of the signal components depends only on cross-correlation between
features for different signals, not on correlations between features for the
same signal. Contextually supervised source separation is a natural fit for
domains with large amounts of data but no explicit supervision; our motivating
application is energy disaggregation of hourly smart meter data (the separation
of whole-home power signals into different energy uses). Here we apply
contextual supervision to disaggregate the energy usage of thousands of homes over
four years, a significantly larger scale than previously published efforts, and
demonstrate on synthetic data that our method outperforms the unsupervised
approach.
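In the $\ell_2$ case, the estimator reduces to a single least-squares solve
over stacked feature blocks; a minimal synthetic sketch (hypothetical features
standing in for, e.g., seasonal and weather contexts):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
X1 = np.c_[np.ones(T), np.sin(np.arange(T) * 2 * np.pi / 24)]  # daily cycle
X2 = rng.normal(size=(T, 2))                                   # other contexts
theta1, theta2 = np.array([1.0, 2.0]), np.array([0.5, -1.5])
y = X1 @ theta1 + X2 @ theta2 + 0.1 * rng.normal(size=T)       # aggregate signal

# Stack the per-source feature matrices and solve one least-squares problem;
# the separated components are the fitted contributions of each block.
X = np.hstack([X1, X2])
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
s1_hat, s2_hat = X1 @ theta[:2], X2 @ theta[2:]
print(np.round(theta, 2))   # approximately [1.0, 2.0, 0.5, -1.5]
```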
|
Machine Learning, Machine Learning, Optimization and Control
|
Statistics
|
1303.5145
|
Karthik Mohan
|
Node-Based Learning of Multiple Gaussian Graphical Models
|
stat.ML cs.LG math.OC
|
We consider the problem of estimating high-dimensional Gaussian graphical
models corresponding to a single set of variables under several distinct
conditions. This problem is motivated by the task of recovering transcriptional
regulatory networks on the basis of gene expression data containing
heterogeneous samples, such as different disease states, multiple species, or
different developmental stages. We assume that most aspects of the conditional
dependence networks are shared, but that there are some structured differences
between them. Rather than assuming that similarities and differences between
networks are driven by individual edges, we take a node-based approach, which
in many cases provides a more intuitive interpretation of the network
differences. We consider estimation under two distinct assumptions: (1)
differences between the $K$ networks are due to individual nodes that are
perturbed across conditions, or (2) similarities among the $K$ networks are due
to the presence of common hub nodes that are shared across all $K$ networks.
Using a row-column overlap norm penalty function, we formulate two convex
optimization problems that correspond to these two assumptions. We solve these
problems using an alternating direction method of multipliers algorithm, and we
derive a set of necessary and sufficient conditions that allows us to decompose
the problem into independent subproblems so that our algorithm can be scaled to
high-dimensional settings. Our proposal is illustrated on synthetic data, a
webpage data set, and a brain cancer gene expression data set.
|
Machine Learning, Machine Learning, Optimization and Control
|
Statistics
|
2112.14738
|
Junchi Li
|
Nonconvex Stochastic Scaled-Gradient Descent and Generalized Eigenvector
Problems
|
stat.ML cs.LG math.OC
|
Motivated by the problem of online canonical correlation analysis, we propose
the \emph{Stochastic Scaled-Gradient Descent} (SSGD) algorithm for minimizing
the expectation of a stochastic function over a generic Riemannian manifold.
SSGD generalizes the idea of projected stochastic gradient descent and allows
the use of scaled stochastic gradients instead of stochastic gradients. In the
special case of a spherical constraint, which arises in generalized eigenvector
problems, we establish a nonasymptotic finite-sample bound of $\sqrt{1/T}$, and
show that this rate is minimax optimal, up to a polylogarithmic factor of
relevant parameters. On the asymptotic side, a novel trajectory-averaging
argument allows us to achieve local asymptotic normality with a rate that
matches that of Ruppert-Polyak-Juditsky averaging. We bring these ideas
together in an application to online canonical correlation analysis, deriving,
for the first time in the literature, an optimal one-time-scale algorithm with
an explicit rate of local asymptotic convergence to normality. Numerical
studies of canonical correlation analysis are also provided for synthetic data.
|
Machine Learning, Machine Learning, Optimization and Control
|
Statistics
|
1911.05934
|
Raul Astudillo
|
Multi-Attribute Bayesian Optimization With Interactive Preference
Learning
|
stat.ML cs.LG math.OC
|
We consider black-box global optimization of time-consuming-to-evaluate
functions on behalf of a decision-maker (DM) whose preferences must be learned.
Each feasible design is associated with a time-consuming-to-evaluate vector of
attributes and each vector of attributes is assigned a utility by the DM's
utility function, which may be learned approximately using preferences
expressed over pairs of attribute vectors. Past work has used a point estimate
of this utility function as if it were error-free within single-objective
optimization. However, utility estimation errors may yield a poor suggested
design. Furthermore, this approach produces a single suggested "best" design,
whereas DMs often prefer to choose from a menu. We propose a novel
multi-attribute Bayesian optimization with preference learning approach. Our
approach acknowledges the uncertainty in preference estimation and implicitly
chooses designs to evaluate that are good not just for a single estimated
utility function but a range of likely ones. The outcome of our approach is a
menu of designs and evaluated attributes from which the DM makes a final
selection. We demonstrate the value and flexibility of our approach in a
variety of experiments.
|
Machine Learning, Machine Learning, Optimization and Control
|
Statistics
|
2007.06352
|
Valentin De Bortoli
|
Quantitative Propagation of Chaos for SGD in Wide Neural Networks
|
stat.ML cs.LG math.PR
|
In this paper, we investigate the limiting behavior of a continuous-time
counterpart of the Stochastic Gradient Descent (SGD) algorithm applied to
two-layer overparameterized neural networks, as the number of neurons (i.e., the
size of the hidden layer) $N \to +\infty$. Following a probabilistic approach,
we show 'propagation of chaos' for the particle system defined by this
continuous-time dynamics under different scenarios, indicating that the
statistical interaction between the particles asymptotically vanishes. In
particular, we establish quantitative convergence with respect to $N$ of any
particle to a solution of a mean-field McKean-Vlasov equation in the metric
space endowed with the Wasserstein distance. In comparison to previous works on
the subject, we consider settings in which the sequence of stepsizes in SGD can
potentially depend on the number of neurons and the iterations. We then
identify two regimes under which different mean-field limits are obtained, one
of them corresponding to an implicitly regularized version of the minimization
problem at hand. We perform various experiments on real datasets to validate
our theoretical results, assessing the existence of these two regimes on
classification problems and illustrating our convergence results.
|
Machine Learning, Machine Learning, Probability
|
Statistics
|
2102.07586
|
Pablo Jim\'enez
|
On Riemannian Stochastic Approximation Schemes with Fixed Step-Size
|
stat.ML cs.LG math.PR
|
This paper studies fixed step-size stochastic approximation (SA) schemes,
including stochastic gradient schemes, in a Riemannian framework. It is
motivated by several applications, where geodesics can be computed explicitly,
and their use accelerates crude Euclidean methods. A fixed step-size scheme
defines a family of time-homogeneous Markov chains, parametrized by the
step-size. Here, using this formulation, non-asymptotic performance bounds are
derived, under Lyapunov conditions. Then, for any step-size, the corresponding
Markov chain is proved to admit a unique stationary distribution, and to be
geometrically ergodic. This result gives rise to a family of stationary
distributions indexed by the step-size, which is further shown to converge to a
Dirac measure, concentrated at the solution of the problem at hand, as the
step-size goes to 0. Finally, the asymptotic rate of this convergence is
established, through an asymptotic expansion of the bias, and a central limit
theorem.
|
Machine Learning, Machine Learning, Probability
|
Statistics
|
2207.00171
|
clement hardy
|
Off-the-grid learning of sparse mixtures from a continuous dictionary
|
stat.ML cs.LG math.PR math.ST stat.TH
|
We consider a general non-linear model where the signal is a finite mixture
of an unknown, possibly increasing, number of features issued from a continuous
dictionary parameterized by a real nonlinear parameter. The signal is observed
with Gaussian (possibly correlated) noise in either a continuous or a discrete
setup. We propose an off-the-grid optimization method, that is, a method which
does not use any discretization scheme on the parameter space, to estimate both
the non-linear parameters of the features and the linear parameters of the
mixture. We use recent results on the geometry of off-the-grid methods to give
minimal separation on the true underlying non-linear parameters such that
interpolating certificate functions can be constructed. Using also tail bounds
for suprema of Gaussian processes, we bound the prediction error with high
probability. Assuming that the certificate functions can be constructed, our
prediction error bound is up to log --factors similar to the rates attained by
the Lasso predictor in the linear regression model. We also establish
convergence rates that quantify with high probability the quality of estimation
for both the linear and the non-linear parameters.
|
Machine Learning, Machine Learning, Probability, Statistics Theory, Statistics Theory
|
Statistics
|
2008.02479
|
Mikkel Slot Nielsen
|
Modeling of time series using random forests: theoretical developments
|
stat.ML cs.LG math.ST stat.ME stat.TH
|
In this paper we study asymptotic properties of random forests within the
framework of nonlinear time series modeling. While random forests have been
successfully applied in various fields, their use in a time series setting has
not been theoretically justified. Under mild conditions,
we prove a uniform concentration inequality for regression trees built on
nonlinear autoregressive processes and, subsequently, we use this result to
prove consistency for a large class of random forests. The results are
supported by various simulations.
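For concreteness, a small sketch of the setting studied above: a random forest
fit on lagged copies of a simulated nonlinear autoregressive series
(illustrative process and hyperparameters):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
T, p = 2000, 3                            # series length, autoregressive order
x = np.zeros(T)
for t in range(1, T):                     # a nonlinear AR(1) process
    x[t] = 0.8 * np.tanh(x[t - 1]) + 0.1 * rng.normal()

# Regression trees are built on lagged copies of the series, matching the
# nonlinear autoregressive framework considered above.
Z = np.column_stack([x[p - j - 1:T - j - 1] for j in range(p)])  # lags 1..p
y = x[p:]
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(Z, y)
print("in-sample R^2:", round(rf.score(Z, y), 3))
```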
|
Machine Learning, Machine Learning, Statistics Theory, Methodology, Statistics Theory
|
Statistics
|
2311.02695
|
Simon Bing
|
Identifying Linearly-Mixed Causal Representations from Multi-Node
Interventions
|
stat.ML cs.LG math.ST stat.ME stat.TH
|
The task of inferring high-level causal variables from low-level
observations, commonly referred to as causal representation learning, is
fundamentally underconstrained. As such, recent works to address this problem
focus on various assumptions that lead to identifiability of the underlying
latent causal variables. A large corpus of these preceding approaches considers
multi-environment data collected under different interventions on the causal
model. What is common to virtually all of these works is the restrictive
assumption that in each environment, only a single variable is intervened on.
In this work, we relax this assumption and provide the first identifiability
result for causal representation learning that allows for multiple variables to
be targeted by an intervention within one environment. Our approach hinges on a
general assumption on the coverage and diversity of interventions across
environments, which also includes the shared assumption of single-node
interventions of previous works. The main idea behind our approach is to
exploit the trace that interventions leave on the variance of the ground truth
causal variables and regularizing for a specific notion of sparsity with
respect to this trace. In addition to and inspired by our theoretical
contributions, we present a practical algorithm to learn causal representations
from multi-node interventional data and provide empirical evidence that
validates our identifiability results.
|
Machine Learning, Machine Learning, Statistics Theory, Methodology, Statistics Theory
|
Statistics
|
2010.10436
|
Lorenz Richter
|
VarGrad: A Low-Variance Gradient Estimator for Variational Inference
|
stat.ML cs.LG math.ST stat.TH
|
We analyse the properties of an unbiased gradient estimator of the ELBO for
variational inference, based on the score function method with leave-one-out
control variates. We show that this gradient estimator can be obtained using a
new loss, defined as the variance of the log-ratio between the exact posterior
and the variational approximation, which we call the $\textit{log-variance
loss}$. Under certain conditions, the gradient of the log-variance loss equals
the gradient of the (negative) ELBO. We show theoretically that this gradient
estimator, which we call $\textit{VarGrad}$ due to its connection to the
log-variance loss, exhibits lower variance than the score function method in
certain settings, and that the leave-one-out control variate coefficients are
close to the optimal ones. We empirically demonstrate that VarGrad offers a
favourable variance versus computation trade-off compared to other
state-of-the-art estimators on a discrete VAE.
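A minimal sketch of the log-variance loss in a toy Gaussian setting
(hypothetical target and variational family; samples are detached so that
differentiating the empirical variance yields the score-function gradient with
leave-one-out control variates):

```python
import torch

# Toy unnormalized target: log-density of N(2, 1), up to a constant.
log_p = lambda z: -0.5 * (z - 2.0) ** 2
mu = torch.zeros(1, requires_grad=True)
log_sigma = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=0.05)

for step in range(500):
    opt.zero_grad()
    sigma = log_sigma.exp()
    z = (mu + sigma * torch.randn(32, 1)).detach()       # no reparam. gradient
    log_q = -0.5 * ((z - mu) / sigma) ** 2 - log_sigma   # log q up to a constant
    f = log_q - log_p(z)                                 # log-ratio per sample
    loss = f.var()                                       # the log-variance loss
    loss.backward()               # VarGrad estimate of the (negative) ELBO grad
    opt.step()

print(round(mu.item(), 2), round(log_sigma.exp().item(), 2))  # approx 2.0, 1.0
```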
|
Machine Learning, Machine Learning, Statistics Theory, Statistics Theory
|
Statistics
|
2007.03253
|
Stefano Peluchetti
|
Doubly infinite residual neural networks: a diffusion process approach
|
stat.ML cs.LG math.ST stat.TH
|
Modern neural networks (NN) featuring a large number of layers (depth) and
units per layer (width) have achieved a remarkable performance across many
domains. While there exists a vast literature on the interplay between
infinitely wide NNs and Gaussian processes, little is known about analogous
interplays with respect to infinitely deep NNs. NNs with independent and
identically distributed (i.i.d.) initializations exhibit undesirable forward
and backward propagation properties as the number of layers increases. To
overcome these drawbacks, Peluchetti and Favaro (2020) considered
fully-connected residual networks (ResNets) with network parameters
initialized by means of distributions that shrink as the number of layers
increases, thus establishing an interplay between infinitely deep ResNets and
solutions to stochastic differential equations, i.e. diffusion processes, and
showing that infinitely deep ResNets do not suffer from undesirable
forward-propagation properties. In this paper, we review the results of
Peluchetti and Favaro (2020), extending them to convolutional ResNets, and we
establish analogous backward-propagation results, which directly relate to the
problem of training fully-connected deep ResNets. Then, we investigate the more
general setting of doubly infinite NNs, where both the network's width and
depth grow unboundedly. We focus on doubly infinite fully-connected
ResNets, for which we consider i.i.d. initializations. Under this setting, we
show that the dynamics of quantities of interest converge, at initialization,
to deterministic limits. This allows us to provide analytical expressions for
inference, both in the case of weakly trained and fully trained ResNets. Our
results highlight a limited expressive power of doubly infinite ResNets when
the unscaled network's parameters are i.i.d. and the residual blocks are
shallow.
|
Machine Learning, Machine Learning, Statistics Theory, Statistics Theory
|
Statistics
|
2104.14012
|
Leszek Szczecinski
|
Simplified Kalman filter for online rating: one-fits-all approach
|
stat.ML cs.LG math.ST stat.TH
|
In this work, we deal with the problem of rating in sports, where the skills
of the players/teams are inferred from the observed outcomes of the games. Our
focus is on the online rating algorithms which estimate the skills after each
new game by exploiting the probabilistic models of the relationship between the
skills and the game outcome. We propose a Bayesian approach which may be seen
as an approximate Kalman filter and which is generic in the sense that it can
be used with any skills-outcome model and can be applied in individual as
well as in group sports. We show how well-known algorithms (such as
the Elo, the Glicko, and the TrueSkill algorithms) may be seen as instances of
the one-fits-all approach we propose. In order to clarify the conditions under
which the gains of the Bayesian approach over the simpler solutions can
actually materialize, we critically compare the known and the new algorithms by
means of numerical examples using synthetic as well as empirical data.
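A minimal sketch of a one-fits-all online update in this spirit (a scalar
extended-Kalman-style step with a logistic skills-outcome model; the constants
and linearization are illustrative, not the paper's exact algorithm):

```python
import numpy as np

def kalman_elo_update(mu, v, i, j, y, beta=1.0, tau2=0.01):
    """One-game update of skill means `mu` and variances `v` for players i, j.

    y is 1 if i beats j, 0 otherwise. The logistic model is linearized around
    the current estimates, giving an Elo-like correction whose step size
    adapts to the posterior variance."""
    v[i] += tau2; v[j] += tau2                  # skills drift between games
    p = 1.0 / (1.0 + np.exp(-beta * (mu[i] - mu[j])))
    h = beta * p * (1 - p)                      # local slope of the model
    s = h**2 * (v[i] + v[j]) + p * (1 - p)      # innovation variance
    mu[i] += v[i] * h * (y - p) / s             # Kalman-gain-weighted steps
    mu[j] -= v[j] * h * (y - p) / s
    shrink = h**2 / s
    v[i] *= 1 - shrink * v[i]; v[j] *= 1 - shrink * v[j]

mu, v = np.zeros(4), np.ones(4)
for i, j, y in [(0, 1, 1), (0, 2, 1), (1, 3, 1), (0, 3, 1), (2, 3, 0)]:
    kalman_elo_update(mu, v, i, j, y)
print(np.round(mu, 2))   # player 0, undefeated, should rank highest
```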
|
Machine Learning, Machine Learning, Statistics Theory, Statistics Theory
|
Statistics
|
2310.19041
|
Shulei Wang
|
On Linear Separation Capacity of Self-Supervised Representation Learning
|
stat.ML cs.LG math.ST stat.TH
|
Recent advances in self-supervised learning have highlighted the efficacy of
data augmentation in learning data representation from unlabeled data. Training
a linear model atop these enhanced representations can yield an adept
classifier. Despite the remarkable empirical performance, the underlying
mechanisms that enable data augmentation to unravel nonlinear data structures
into linearly separable representations remain elusive. This paper seeks to
bridge this gap by investigating under what conditions learned representations
can linearly separate manifolds when data is drawn from a multi-manifold model.
Our investigation reveals that data augmentation offers additional information
beyond observed data and can thus improve the information-theoretic optimal
rate of linear separation capacity. In particular, we show that self-supervised
learning can linearly separate manifolds with a smaller distance than
unsupervised learning, underscoring the additional benefits of data
augmentation. Our theoretical analysis further underscores that the performance
of downstream linear classifiers primarily hinges on the linear separability of
data representations rather than the size of the labeled data set, reaffirming
the viability of constructing efficient classifiers with limited labeled data
amid an expansive unlabeled data set.
|
Machine Learning, Machine Learning, Statistics Theory, Statistics Theory
|
Statistics
|
2405.01994
|
Patrick Saux
|
Mathematics of statistical sequential decision-making: concentration,
risk-awareness and modelling in stochastic bandits, with applications to
bariatric surgery
|
stat.ML cs.LG math.ST stat.TH
|
This thesis aims to study some of the mathematical challenges that arise in
the analysis of statistical sequential decision-making algorithms for
postoperative patient follow-up. Stochastic bandits (multi-armed, contextual)
model the learning of a sequence of actions (policy) by an agent in an
uncertain environment in order to maximise observed rewards. To learn optimal
policies, bandit algorithms have to balance the exploitation of current
knowledge and the exploration of uncertain actions. Such algorithms have
largely been studied and deployed in industrial applications with large
datasets, low-risk decisions and clear modelling assumptions, such as
clickthrough rate maximisation in online advertising. By contrast, digital
health recommendations call for a whole new paradigm of small samples,
risk-averse agents and complex, nonparametric modelling. To this end, we
developed new safe, anytime-valid concentration bounds (Bregman, empirical
Chernoff), introduced a new framework for risk-aware contextual bandits (with
elicitable risk measures) and analysed a novel class of nonparametric bandit
algorithms under weak assumptions (Dirichlet sampling). In addition to the
theoretical guarantees, these results are supported by in-depth empirical
evidence. Finally, as a first step towards personalised postoperative follow-up
recommendations, we developed with medical doctors and surgeons an
interpretable machine learning model to predict the long-term weight
trajectories of patients after bariatric surgery.
|
Machine Learning, Machine Learning, Statistics Theory, Statistics Theory
|
Statistics
|
1806.05139
|
Carson Eisenach
|
High-Dimensional Inference for Cluster-Based Graphical Models
|
stat.ML cs.LG math.ST stat.TH
|
Motivated by modern applications in which one constructs graphical models
based on a very large number of features, this paper introduces a new class of
cluster-based graphical models, in which variable clustering is applied as an
initial step for reducing the dimension of the feature space. We employ model
assisted clustering, in which the clusters contain features that are similar to
the same unobserved latent variable. Two different cluster-based Gaussian
graphical models are considered: the latent variable graph, corresponding to
the graphical model associated with the unobserved latent variables, and the
cluster-average graph, corresponding to the vector of features averaged over
clusters. Our study reveals that likelihood based inference for the latent
graph, not analyzed previously, is analytically intractable. Our main
contribution is the development and analysis of alternative estimation and
inference strategies, for the precision matrix of an unobservable latent vector
$Z$. We replace the likelihood of the data by an appropriate class of empirical
risk functions, that can be specialized to the latent graphical model and to
the simpler, but under-analyzed, cluster-average graphical model. The
estimators thus derived can be used for inference on the graph structure, for
instance on edge strength or pattern recovery. Inference is based on the
asymptotic limits of the entry-wise estimates of the precision matrices
associated with the conditional independence graphs under consideration. While
taking the uncertainty induced by the clustering step into account, we
establish Berry-Esseen central limit theorems for the proposed estimators. It
is noteworthy that, although the clusters are estimated adaptively from the
data, the central limit theorems regarding the entries of the estimated graphs
are proved under the same conditions one would use if the clusters were
known....
|
Machine Learning, Machine Learning, Statistics Theory, Statistics Theory
|
Statistics
|
1705.03439
|
Yixin Wang
|
Frequentist Consistency of Variational Bayes
|
stat.ML cs.LG math.ST stat.TH
|
A key challenge for modern Bayesian statistics is how to perform scalable
inference of posterior distributions. To address this challenge, variational
Bayes (VB) methods have emerged as a popular alternative to the classical
Markov chain Monte Carlo (MCMC) methods. VB methods tend to be faster while
achieving comparable predictive performance. However, there are few theoretical
results around VB. In this paper, we establish frequentist consistency and
asymptotic normality of VB methods. Specifically, we connect VB methods to
point estimates based on variational approximations, called frequentist
variational approximations, and we use the connection to prove a variational
Bernstein-von Mises theorem. The theorem leverages the theoretical
characterizations of frequentist variational approximations to understand
asymptotic properties of VB. In summary, we prove that (1) the VB posterior
converges to the Kullback-Leibler (KL) minimizer of a normal distribution,
centered at the truth and (2) the corresponding variational expectation of the
parameter is consistent and asymptotically normal. As applications of the
theorem, we derive asymptotic properties of VB posteriors in Bayesian mixture
models, Bayesian generalized linear mixed models, and Bayesian stochastic block
models. We conduct a simulation study to illustrate these theoretical results.
|
Machine Learning, Machine Learning, Statistics Theory, Statistics Theory
|
Statistics
|
1902.09917
|
Remi Jezequel
|
Efficient online learning with kernels for adversarial large scale
problems
|
stat.ML cs.LG math.ST stat.TH
|
We are interested in a framework of online learning with kernels for
low-dimensional but large-scale and potentially adversarial datasets. We study
the computational and theoretical performance of online variations of kernel
Ridge regression. Despite its simplicity, the algorithm we study is the first
to achieve the optimal regret for a wide range of kernels with a per-round
complexity of order $n^\alpha$ with $\alpha < 2$. The algorithm we consider is
based on approximating the kernel with the linear span of basis functions. Our
contribution is twofold: 1) For the Gaussian kernel, we propose to build the
basis beforehand (independently of the data) through Taylor expansion. For
$d$-dimensional inputs, we provide a (close to) optimal regret of order
$O((\log n)^{d+1})$ with per-round time complexity and space complexity
$O((\log n)^{2d})$. This makes the algorithm a suitable choice as soon as $n
\gg e^d$, which is likely to happen in scenarios with small dimension and
large-scale datasets; 2) For general kernels with low effective dimension, the
basis functions are updated sequentially in a data-adaptive fashion by sampling
Nystr{\"o}m points. In this case, our algorithm improves the computational
trade-off known for online kernel regression.
|
Machine Learning, Machine Learning, Statistics Theory, Statistics Theory
|
Statistics
|
1309.7804
|
Michael I. Jordan
|
On statistics, computation and scalability
|
stat.ML cs.LG math.ST stat.TH
|
How should statistical procedures be designed so as to be scalable
computationally to the massive datasets that are increasingly the norm? When
coupled with the requirement that an answer to an inferential question be
delivered within a certain time budget, this question has significant
repercussions for the field of statistics. With the goal of identifying
"time-data tradeoffs," we investigate some of the statistical consequences of
computational perspectives on scalability, in particular divide-and-conquer
methodology and hierarchies of convex relaxations.
|
Machine Learning, Machine Learning, Statistics Theory, Statistics Theory
|
Statistics
|
1701.03619
|
Ori Katz
|
Diffusion-based nonlinear filtering for multimodal data fusion with
application to sleep stage assessment
|
stat.ML cs.LG physics.data-an
|
The problem of information fusion from multiple data-sets acquired by
multimodal sensors has drawn significant research attention over the years. In
this paper, we focus on a particular problem setting consisting of a physical
phenomenon or a system of interest observed by multiple sensors. We assume that
all sensors measure some aspects of the system of interest with additional
sensor-specific and irrelevant components. Our goal is to recover the variables
relevant to the observed system and to filter out the nuisance effects of the
sensor-specific variables. We propose an approach based on manifold learning,
which is particularly suitable for problems with multiple modalities, since it
aims to capture the intrinsic structure of the data and relies on minimal prior
model knowledge. Specifically, we propose a nonlinear filtering scheme, which
extracts the hidden sources of variability captured by two or more sensors,
that are independent of the sensor-specific components. In addition to
presenting a theoretical analysis, we demonstrate our technique on real
measured data for the purpose of sleep stage assessment based on multiple,
multimodal sensor measurements. We show that without prior knowledge on the
different modalities and on the measured system, our method gives rise to a
data-driven representation that is well correlated with the underlying sleep
process and is robust to noise and sensor-specific effects.
|
Machine Learning, Machine Learning, Data Analysis, Statistics and Probability
|
Statistics
|
1805.10958
|
Laurence Aitchison
|
Discrete flow posteriors for variational inference in discrete dynamical
systems
|
stat.ML cs.LG q-bio.NC
|
Each training step for a variational autoencoder (VAE) requires us to sample
from the approximate posterior, so we usually choose simple (e.g. factorised)
approximate posteriors in which sampling is an efficient computation that fully
exploits GPU parallelism. However, such simple approximate posteriors are often
insufficient, as they eliminate statistical dependencies in the posterior.
While it is possible to use normalizing flow approximate posteriors for
continuous latents, some problems have discrete latents and strong statistical
dependencies. The most natural approach to model these dependencies is an
autoregressive distribution, but sampling from such distributions is inherently
sequential and thus slow. We develop a fast, parallel sampling procedure for
autoregressive distributions based on fixed-point iterations which enables
efficient and accurate variational inference in discrete state-space latent
variable dynamical systems. To optimize the variational bound, we considered
two ways to evaluate probabilities: inserting the relaxed samples directly into
the pmf for the discrete distribution, or converting to continuous logistic
latent variables and interpreting the K-step fixed-point iterations as a
normalizing flow. We found that converting to continuous latent variables gave
considerable additional scope for mismatch between the true and approximate
posteriors, which resulted in biased inferences; we therefore used the former
approach. Using our fast sampling procedure, we were able to realize the
benefits of correlated posteriors, including accurate uncertainty estimates for
one cell, and accurate connectivity estimates for multiple cells, in an order
of magnitude less time.
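A minimal sketch of the fixed-point idea, under the assumption of a reparameterized autoregressive sampler x_t = g(x_{<t}, eps_t): updating every coordinate in parallel from the previous iterate fixes at least one further coordinate per sweep, so T sweeps reproduce the exact sequential sample. The AR(1)-style g below is a stand-in, not the paper's model.

import numpy as np

rng = np.random.default_rng(0)
T = 50
a, eps = 0.9, rng.normal(size=T)

def g(x_prev_coords, e):
    # Placeholder autoregressive map: x_t depends only on x_{t-1} here.
    return a * (x_prev_coords[-1] if len(x_prev_coords) else 0.0) + e

# Exact sequential sample, for reference.
x_seq = np.zeros(T)
for t in range(T):
    x_seq[t] = g(x_seq[:t], eps[t])

# Fixed-point iteration: all coordinates updated from the previous iterate
# (a parallelizable sweep); coordinate t is exact after at most t + 1 sweeps.
x = np.zeros(T)
for _ in range(T):
    x = np.array([g(x[:t], eps[t]) for t in range(T)])

print(np.allclose(x, x_seq))  # True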
|
Machine Learning, Machine Learning, Neurons and Cognition
|
Statistics
|
2009.11974
|
Farzana Nasrin
|
Bayesian Topological Learning for Classifying the Structure of
Biological Networks
|
stat.ML cs.LG q-bio.QM
|
Actin cytoskeleton networks generate local topological signatures due to the
natural variations in the number, size, and shape of holes of the networks.
Persistent homology is a method that explores these topological properties of
data and summarizes them as persistence diagrams. In this work, we analyze and
classify these filament networks by transforming them into persistence diagrams
whose variability is quantified via a Bayesian framework on the space of
persistence diagrams. The proposed generalized Bayesian framework adopts an
independent and identically distributed cluster point process characterization
of persistence diagrams and relies on a substitution likelihood argument. This
framework provides the flexibility to estimate the posterior cardinality
distribution of points in a persistence diagram and the posterior spatial
distribution simultaneously. We present a closed form of the posteriors under
the assumption of Gaussian mixtures and binomials for the prior intensity and
cardinality, respectively. Using this posterior calculation, we implement a
Bayes factor algorithm to classify the actin filament networks and benchmark it
against several state-of-the-art classification methods.
|
Machine Learning, Machine Learning, Quantitative Methods
|
Statistics
|
1911.06239
|
Aditya Narayan Ravi
|
Unreliable Multi-Armed Bandits: A Novel Approach to Recommendation
Systems
|
stat.ML cs.LG stat.AP
|
We use a novel modification of Multi-Armed Bandits to create a new model for
recommendation systems. We model the recommendation system as a bandit seeking
to maximize reward by pulling arms with unknown rewards. The catch, however,
is that the bandit can only access these arms through an unreliable
intermediate that has some autonomy in choosing its arms. For
example, on a streaming website the user has a lot of autonomy in choosing
the content they want to watch. Streaming sites can use targeted advertising as
a means to bias opinions of these users. Here the streaming site is the bandit
aiming to maximize reward and the user is the unreliable intermediate. We model
the intermediate as accessing states via a Markov chain. The bandit is allowed
to perturb this Markov chain. We prove fundamental theorems for this setting,
after which we present a close-to-optimal Explore-Commit algorithm.
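As a hedged sketch, the snippet below shows the two-phase Explore-Commit structure referred to above; it pulls arms directly and therefore omits the unreliable Markov-chain intermediate that is the subject of the paper.

import numpy as np

def explore_commit(pull, n_arms, horizon, explore_rounds):
    """Plain Explore-Commit: sample each arm, then commit to the best."""
    rewards = np.zeros(n_arms)
    t = 0
    for _ in range(explore_rounds):
        for arm in range(n_arms):
            rewards[arm] += pull(arm)
            t += 1
    best = int(np.argmax(rewards / explore_rounds))  # empirical best arm
    total = rewards.sum()
    while t < horizon:                               # commit phase
        total += pull(best)
        t += 1
    return best, total

rng = np.random.default_rng(1)
means = np.array([0.2, 0.5, 0.8])
pull = lambda arm: rng.binomial(1, means[arm])
print(explore_commit(pull, n_arms=3, horizon=10_000, explore_rounds=100))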
|
Machine Learning, Machine Learning, Applications
|
Statistics
|
2001.06880
|
Vidhi Lalchand Miss
|
A meta-algorithm for classification using random recursive tree
ensembles: A high energy physics application
|
stat.ML cs.LG stat.AP
|
The aim of this work is to propose a meta-algorithm for automatic
classification in the presence of discrete binary classes. Classifier learning
in the presence of overlapping class distributions is a challenging problem in
machine learning. Overlapping classes are described by the presence of
ambiguous areas in the feature space with a high density of points belonging to
both classes. This often occurs in real-world datasets, one such example is
numeric data denoting properties of particle decays derived from high-energy
accelerators like the Large Hadron Collider (LHC). A significant body of
research targeting the class overlap problem uses ensemble classifiers to boost
the performance of algorithms by using them iteratively in multiple stages or
using multiple copies of the same model on different subsets of the input
training data. The former is called boosting and the latter is called bagging.
The algorithm proposed in this thesis targets a challenging classification
problem in high energy physics - that of improving the statistical significance
of the Higgs discovery. The underlying dataset used to train the algorithm is
experimental data built from the official ATLAS full-detector simulation with
Higgs events (signal) mixed with different background events (background) that
closely mimic the statistical properties of the signal generating class
overlap. The algorithm proposed is a variant of the classical boosted decision
tree which is known to be one of the most successful analysis techniques in
experimental physics. The algorithm utilizes a unified framework that combines
two meta-learning techniques: bagging and boosting. The results show that this
combination only works in the presence of a randomization trick in the base
learners.
|
Machine Learning, Machine Learning, Applications
|
Statistics
|
2009.14610
|
R\'emy Garnier
|
Concurrent Neural Network: A model of competition between time series
|
stat.ML cs.LG stat.AP
|
Competition between time series often arises in sales prediction, when
similar products are on sale on a marketplace. This article provides a model of
the presence of cannibalization between time series. The model creates a
"competitiveness" function that depends on external features such as price and
margin. It also provides a theoretical guarantee on the error of the model under
some reasonable conditions, and implements this model using a neural network to
compute the competitiveness function. This implementation outperforms
traditional time series methods and classical neural networks for market share
prediction on a real-world dataset.
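A hedged sketch of one way such a competitiveness function could drive market-share predictions: per-product features are mapped to a scalar competitiveness by a small network, and shares are obtained with a softmax over the competing products. The softmax normalization is an assumption for illustration, not necessarily the paper's exact formulation.

import numpy as np

def competitiveness(features, W1, b1, w2):
    # Tiny one-hidden-layer network: features -> scalar competitiveness.
    h = np.maximum(features @ W1 + b1, 0.0)
    return h @ w2

def market_shares(features, params):
    # Softmax over competitiveness scores of the competing products
    # (assumed normalization, chosen so shares sum to one).
    c = competitiveness(features, *params)
    e = np.exp(c - c.max())
    return e / e.sum()

rng = np.random.default_rng(0)
n_products, n_features, hidden = 5, 3, 8
params = (rng.normal(size=(n_features, hidden)),
          np.zeros(hidden),
          rng.normal(size=hidden))
features = rng.normal(size=(n_products, n_features))  # e.g. price, margin, ...
print(market_shares(features, params))                # shares sum to 1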
|
Machine Learning, Machine Learning, Applications
|
Statistics
|
2306.10306
|
Hristos Tyralis
|
Deep Huber quantile regression networks
|
stat.ML cs.LG stat.AP
|
Typical machine learning regression applications aim to report the mean or
the median of the predictive probability distribution, via training with a
squared or an absolute error scoring function. Issuing predictions of further
functionals of the predictive probability distribution, such as quantiles and
expectiles, has been recognized as a means to quantify prediction uncertainty.
In deep learning (DL) applications, this is possible through quantile and
expectile regression neural networks (QRNN and ERNN, respectively). Here we
introduce deep Huber quantile regression networks
(DHQRN) that nest QRNNs and ERNNs as edge cases. DHQRN can predict Huber
quantiles, which are more general functionals in the sense that they nest
quantiles and expectiles as limiting cases. The main idea is to train a deep
learning algorithm with the Huber quantile regression function, which is
consistent for the Huber quantile functional. As a proof of concept, DHQRN are
applied to predict house prices in Australia. In this context, predictive
performances of three DL architectures are discussed along with evidential
interpretation of results from an economic case study.
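As a hedged illustration, one natural form of a Huber quantile scoring function is sketched below (the paper's exact parameterization may differ): an asymmetric weight tau applied to the classical Huber function, which recovers the pinball (quantile) loss as delta -> 0 up to scaling, and an expectile-type loss as delta grows.

import numpy as np

def huber(u, delta):
    # Classical Huber function: quadratic near zero, linear in the tails.
    return np.where(np.abs(u) <= delta,
                    0.5 * u ** 2,
                    delta * (np.abs(u) - 0.5 * delta))

def huber_quantile_loss(y, q, tau, delta):
    """Assumed form: asymmetric (tau) weighting of the Huber function."""
    u = y - q
    weight = np.where(u >= 0, tau, 1.0 - tau)
    return np.mean(weight * huber(u, delta))

y = np.array([1.0, 2.0, 3.0])
print(huber_quantile_loss(y, q=2.0, tau=0.9, delta=0.5))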
|
Machine Learning, Machine Learning, Applications
|
Statistics
|
2109.11765
|
Yurong Ling
|
Dimension Reduction for Data with Heterogeneous Missingness
|
stat.ML cs.LG stat.AP
|
Dimension reduction plays a pivotal role in analysing high-dimensional data.
However, observations with missing values present serious difficulties in
directly applying standard dimension reduction techniques. As a large number of
dimension reduction approaches are based on the Gram matrix, we first
investigate the effects of missingness on dimension reduction by studying the
statistical properties of the Gram matrix with or without missingness, and then
we present a bias-corrected Gram matrix with desirable statistical properties under
heterogeneous missingness. Extensive empirical results, on both simulated and
publicly available real datasets, show that the proposed unbiased Gram matrix
can significantly improve a broad spectrum of representative dimension
reduction approaches.
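A minimal sketch of the inverse-probability correction underlying such a construction, under the simplifying assumption of a single homogeneous observation probability p and zero-filled data; the paper's estimator handles entry-wise heterogeneous probabilities.

import numpy as np

def corrected_gram(Y, p):
    """Unbiased Gram matrix from zero-filled data Y whose entries were
    observed independently with probability p (homogeneous case)."""
    G = Y @ Y.T
    G_hat = G / p ** 2
    np.fill_diagonal(G_hat, np.diag(G) / p)  # diagonal needs 1/p, not 1/p^2
    return G_hat

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
mask = rng.random(X.shape) < 0.7
# Mean absolute deviation from the complete-data Gram matrix:
print(np.abs(corrected_gram(X * mask, 0.7) - X @ X.T).mean())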
|
Machine Learning, Machine Learning, Applications
|
Statistics
|
2402.10456
|
Wenhui Sophia Lu
|
Generative Modeling for Tabular Data via Penalized Optimal Transport
Network
|
stat.ML cs.LG stat.AP stat.ME
|
The task of precisely learning the probability distribution of rows within
tabular data and producing authentic synthetic samples is both crucial and
non-trivial. Wasserstein generative adversarial network (WGAN) marks a notable
improvement in generative modeling, addressing the challenges faced by its
predecessor, generative adversarial network. However, due to the mixed data
types and multimodalities prevalent in tabular data, the delicate equilibrium
between the generator and discriminator, as well as the inherent instability of
Wasserstein distance in high dimensions, WGAN often fails to produce
high-fidelity samples. To this end, we propose POTNet (Penalized Optimal
Transport Network), a generative deep neural network based on a novel, robust,
and interpretable marginally-penalized Wasserstein (MPW) loss. POTNet can
effectively model tabular data containing both categorical and continuous
features. Moreover, it offers the flexibility to condition on a subset of
features. We provide theoretical justifications for the motivation behind the
MPW loss. We also empirically demonstrate the effectiveness of our proposed
method on four different benchmarks across a variety of real-world and
simulated datasets. Our proposed model achieves orders of magnitude speedup
during the sampling stage compared to state-of-the-art generative models for
tabular data, thereby enabling efficient large-scale synthetic data generation.
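As a hedged sketch of the marginal penalty, the snippet below sums per-feature one-dimensional Wasserstein distances, which are cheap to compute by sorting; in an MPW-style loss this term would be added to a joint optimal-transport loss between real and generated batches. The weight lam and the exact joint term are assumptions here.

import numpy as np

def wasserstein_1d(a, b):
    # Empirical 1-Wasserstein distance between equal-size 1-D samples:
    # sort both and average the coordinate-wise gaps.
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

def marginal_penalty(real, fake, lam=1.0):
    """Assumed shape of the marginal term: the sum of per-feature
    1-D Wasserstein distances, scaled by lam."""
    d = real.shape[1]
    return lam * sum(wasserstein_1d(real[:, j], fake[:, j]) for j in range(d))

rng = np.random.default_rng(0)
real = rng.normal(size=(256, 4))
fake = rng.normal(loc=0.3, size=(256, 4))
print(marginal_penalty(real, fake))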
|
Machine Learning, Machine Learning, Applications, Methodology
|
Statistics
|
2207.10673
|
Sim\'on Rodr\'iguez Santana
|
Correcting Model Bias with Sparse Implicit Processes
|
stat.ML cs.LG stat.CO
|
Model selection in machine learning (ML) is a crucial part of the Bayesian
learning procedure. Model choice may impose strong biases on the resulting
predictions, which can hinder the performance of methods such as Bayesian
neural networks and neural samplers. On the other hand, newly proposed
approaches for Bayesian ML exploit features of approximate inference in
function space with implicit stochastic processes (a generalization of Gaussian
processes). The approach of Sparse Implicit Processes (SIP) is particularly
successful in this regard, since it is fully trainable and achieves flexible
predictions. Here, we expand on the original experiments to show that SIP is
capable of correcting model bias when the data generating mechanism differs
strongly from the one implied by the model. We use synthetic datasets to show
that SIP is capable of providing predictive distributions that reflect the data
better than the exact predictions of the initial but wrongly assumed model.
|
Machine Learning, Machine Learning, Computation
|
Statistics
|
1905.13285
|
Niladri Chatterji
|
Langevin Monte Carlo without smoothness
|
stat.ML cs.LG stat.CO
|
Langevin Monte Carlo (LMC) is an iterative algorithm used to generate samples
from a distribution that is known only up to a normalizing constant. The
nonasymptotic dependence of its mixing time on the dimension and target
accuracy is understood mainly in the setting of smooth (gradient-Lipschitz)
log-densities, a serious limitation for applications in machine learning. In
this paper, we remove this limitation, providing polynomial-time convergence
guarantees for a variant of LMC in the setting of nonsmooth log-concave
distributions. At a high level, our results follow by leveraging the implicit
smoothing of the log-density that comes from a small Gaussian perturbation that
we add to the iterates of the algorithm and controlling the bias and variance
that are induced by this perturbation.
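A minimal sketch of this idea, assuming the Gaussian perturbation is applied to the iterate before the (sub)gradient evaluation; the step size and perturbation scale below are illustrative, not the constants from the analysis.

import numpy as np

def perturbed_lmc(grad_U, x0, step, sigma, n_steps, rng):
    """Langevin Monte Carlo with a Gaussian perturbation of the iterate
    before each gradient evaluation -- a sketch of the smoothing device
    described above, not the paper's exact scheme."""
    x = np.array(x0, dtype=float)
    samples = np.empty((n_steps, x.size))
    for k in range(n_steps):
        z = x + sigma * rng.normal(size=x.shape)  # implicit smoothing
        x = x - step * grad_U(z) + np.sqrt(2 * step) * rng.normal(size=x.shape)
        samples[k] = x
    return samples

rng = np.random.default_rng(0)
# Nonsmooth log-concave target: density proportional to exp(-|x|), i.e. Laplace(0, 1).
grad_U = np.sign
s = perturbed_lmc(grad_U, x0=[0.0], step=0.01, sigma=0.1, n_steps=50_000, rng=rng)
print(s.mean(), s.var())  # Laplace(0, 1) has mean 0 and variance 2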
|
Machine Learning, Machine Learning, Computation
|
Statistics
|
2010.08729
|
Tsuyoshi Ishizone
|
Ensemble Kalman Variational Objectives: Nonlinear Latent Trajectory
Inference with A Hybrid of Variational Inference and Ensemble Kalman Filter
|
stat.ML cs.LG stat.CO
|
Variational inference (VI) combined with Bayesian nonlinear filtering
produces state-of-the-art results for latent time-series modeling. A body of
recent work has focused on sequential Monte Carlo (SMC) and its variants, e.g.,
forward filtering backward simulation (FFBSi). Although these studies have
succeeded, serious problems remain, namely particle degeneracy and biased
gradient estimators. In this paper, we propose the Ensemble Kalman Variational Objective
(EnKO), a hybrid method of VI and the ensemble Kalman filter (EnKF), to infer
state space models (SSMs). Our proposed method can efficiently identify latent
dynamics because of its particle diversity and unbiased gradient estimators. We
demonstrate that our EnKO outperforms SMC-based methods in terms of predictive
ability and particle efficiency for three benchmark nonlinear system
identification tasks.
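For reference, here is a sketch of the stochastic EnKF analysis step that such a hybrid builds on, assuming a linear observation model with Gaussian noise; this is the textbook update, not the EnKO objective itself.

import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """Stochastic EnKF analysis step.
    X: (n_ens, d) forecast ensemble; y: observation; H: obs matrix; R: obs cov."""
    n_ens = X.shape[0]
    Xm = X - X.mean(axis=0)
    P = Xm.T @ Xm / (n_ens - 1)                   # ensemble covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    # Perturbed observations keep the analysis ensemble spread correct.
    Y = y + rng.multivariate_normal(np.zeros(len(y)), R, size=n_ens)
    return X + (Y - X @ H.T) @ K.T

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))   # forecast ensemble
H = np.array([[1.0, 0.0]])      # observe the first coordinate
R = np.array([[0.1]])
print(enkf_analysis(X, np.array([0.5]), H, R, rng).mean(axis=0))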
|
Machine Learning, Machine Learning, Computation
|
Statistics
|
2204.00296
|
Geoff Nicholls
|
Scalable Semi-Modular Inference with Variational Meta-Posteriors
|
stat.ML cs.LG stat.CO
|
The Cut posterior and related Semi-Modular Inference (SMI) are Generalised Bayes
methods for modular Bayesian evidence combination. Analysis is broken up over
modular sub-models of the joint posterior distribution. Model-misspecification
in multi-modular models can be hard to fix by model elaboration alone and the
Cut posterior and SMI offer a way round this. Information entering the analysis
from misspecified modules is controlled by an influence parameter $\eta$
related to the learning rate. This paper contains two substantial new methods.
First, we give variational methods for approximating the Cut and SMI posteriors
which are adapted to the inferential goals of evidence combination. We
parameterise a family of variational posteriors using a Normalising Flow for
accurate approximation and end-to-end training. Second, we show that analysis
of models with multiple cuts is feasible using a new Variational
Meta-Posterior. This approximates a family of SMI posteriors indexed by $\eta$
using a single set of variational parameters.
|
Machine Learning, Machine Learning, Computation
|
Statistics
|
2006.03005
|
Gherardo Varando
|
Learning DAGs without imposing acyclicity
|
stat.ML cs.LG stat.CO
|
We explore whether it is possible to learn a directed acyclic graph (DAG) from
data without explicitly imposing the acyclicity constraint. In particular, for
Gaussian distributions, we frame structural learning as a sparse matrix
factorization problem, and we empirically show that solving an
$\ell_1$-penalized optimization yields good recovery of the true graph and,
in general, to almost-DAG graphs. Moreover, this approach is computationally
efficient and is not affected by the explosion of combinatorial complexity as
in classical structural learning algorithms.
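A minimal sketch of this framing, assuming a plain ISTA solver for the $\ell_1$-penalized least-squares problem (1/2n)||X - XW||_F^2 + lam*||W||_1 with a zero diagonal; thresholding the learned W gives the candidate graph. This illustrates the general recipe rather than the exact estimator studied in the paper.

import numpy as np

def soft_threshold(W, t):
    return np.sign(W) * np.maximum(np.abs(W) - t, 0.0)

def learn_weights(X, lam=0.1, n_iter=2000):
    """ISTA on (1/2n)||X - XW||_F^2 + lam*||W||_1, diagonal fixed to zero."""
    n, d = X.shape
    W = np.zeros((d, d))
    step = 1.0 / np.linalg.norm(X.T @ X / n, 2)  # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = X.T @ (X @ W - X) / n
        W = soft_threshold(W - step * grad, step * lam)
        np.fill_diagonal(W, 0.0)                 # no self-loops
    return W

rng = np.random.default_rng(0)
n = 1000
x1 = rng.normal(size=n)
x2 = 2.0 * x1 + rng.normal(size=n)               # true edge x1 -> x2
print(np.round(learn_weights(np.column_stack([x1, x2])), 2))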
|
Machine Learning, Machine Learning, Computation
|
Statistics
|
2312.12357
|
Edoardo Filippi-Mazzola
|
Modeling non-linear Effects with Neural Networks in Relational Event
Models
|
stat.ML cs.LG stat.CO stat.ME
|
Dynamic networks offer insight into how relational systems evolve. However,
modeling these networks efficiently remains a challenge, primarily due to
computational constraints, especially as the number of observed events grows.
This paper addresses this issue by introducing the Deep Relational Event
Additive Model (DREAM) as a solution to the computational challenges presented
by modeling non-linear effects in Relational Event Models (REMs). DREAM relies
on Neural Additive Models to model non-linear effects, allowing each effect to
be captured by an independent neural network. By strategically trading
computational complexity for improved memory management and leveraging the
computational capabilities of Graphics Processing Units (GPUs), DREAM efficiently
captures complex non-linear relationships within data. This approach
demonstrates the capability of DREAM in modeling dynamic networks and scaling
to larger networks. Comparisons with traditional REM approaches showcase DREAM's
superior computational efficiency. The model's potential is further demonstrated
by an examination of the patent citation network, which contains nearly 8
million nodes and 100 million events.
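A minimal PyTorch sketch of the Neural Additive Model backbone referenced above, with one independent subnetwork per effect; DREAM's relational event likelihood and GPU batching are not reproduced here.

import torch
import torch.nn as nn

class NeuralAdditiveModel(nn.Module):
    """One independent subnetwork per effect; the prediction is the sum of
    the per-effect outputs."""
    def __init__(self, n_effects, hidden=32):
        super().__init__()
        self.nets = nn.ModuleList([
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_effects)
        ])

    def forward(self, x):  # x: (batch, n_effects)
        return sum(net(x[:, j:j + 1]) for j, net in enumerate(self.nets))

model = NeuralAdditiveModel(n_effects=3)
x = torch.randn(8, 3)
print(model(x).shape)  # torch.Size([8, 1])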
|
Machine Learning, Machine Learning, Computation, Methodology
|
Statistics
|
2309.06230
|
Borui Tang
|
A Consistent and Scalable Algorithm for Best Subset Selection in Single
Index Models
|
stat.ML cs.LG stat.CO stat.ME
|
Analysis of high-dimensional data has led to increased interest in both
single index models (SIMs) and best subset selection. SIMs provide an
interpretable and flexible modeling framework for high-dimensional data, while
best subset selection aims to find a sparse model from a large set of
predictors. However, best subset selection in high-dimensional models is known
to be computationally intractable. Existing methods tend to relax the
selection, but do not yield the best subset solution. In this paper, we
directly tackle the intractability by proposing the first provably scalable
algorithm for best subset selection in high-dimensional SIMs. Our algorithmic
solution enjoys subset selection consistency and has the oracle property
with high probability. The algorithm comprises a generalized information
criterion to determine the support size of the regression coefficients,
eliminating the need for model-selection tuning. Moreover, our method does not assume an
error distribution or a specific link function and hence is flexible to apply.
Extensive simulation results demonstrate that our method is not only
computationally efficient but also able to exactly recover the best subset in
various settings (e.g., linear regression, Poisson regression, heteroscedastic
models).
|
Machine Learning, Machine Learning, Computation, Methodology
|
Statistics
|