paper_url stringlengths 35-81 | arxiv_id stringlengths 6-35 ⌀ | nips_id float64 | openreview_id stringlengths 9-93 ⌀ | title stringlengths 1-1.02k ⌀ | abstract stringlengths 0-56.5k ⌀ | short_abstract stringlengths 0-1.95k ⌀ | url_abs stringlengths 16-996 | url_pdf stringlengths 16-996 ⌀ | proceeding stringlengths 7-1.03k ⌀ | authors listlengths 0-3.31k | tasks listlengths 0-147 | date timestamp[ns] 1951-09-01 00:00:00 to 2222-12-22 00:00:00 ⌀ | conference_url_abs stringlengths 16-199 ⌀ | conference_url_pdf stringlengths 21-200 ⌀ | conference stringlengths 2-47 ⌀ | reproduces_paper stringclasses 22 values | methods listlengths 0-7.5k |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://paperswithcode.com/paper/cost-aware-learning-for-improved
|
1802.04350
| null | null |
Cost-Aware Learning for Improved Identifiability with Multiple Experiments
|
We analyze the sample complexity of learning from multiple experiments where the experimenter has a total budget for obtaining samples. In this problem, the learner should choose a hypothesis that performs well with respect to multiple experiments, and their related data distributions. Each collected sample is associated with a cost that depends on the particular experiment. In our setup, a learner performs $m$ experiments, while incurring a total cost $C$. We first show that learning from multiple experiments allows us to improve identifiability. Additionally, by using a Rademacher complexity approach, we show that the gap between the training and generalization error is $O(C^{-1/2})$. We also provide some examples for linear prediction, two-layer neural networks and kernel methods.
| null |
https://arxiv.org/abs/1802.04350v5
|
https://arxiv.org/pdf/1802.04350v5.pdf
| null |
[
"Longyun Guo",
"Jean Honorio",
"John Morgan"
] |
[] | 2018-02-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/large-scale-neuromorphic-spiking-array
|
1805.08932
| null | null |
Large-Scale Neuromorphic Spiking Array Processors: A quest to mimic the brain
|
Neuromorphic engineering (NE) encompasses a diverse range of approaches to
information processing that are inspired by neurobiological systems, and this
feature distinguishes neuromorphic systems from conventional computing systems.
The brain has evolved over billions of years to solve difficult engineering
problems by using efficient, parallel, low-power computation. The goal of NE is
to design systems capable of brain-like computation. Numerous large-scale
neuromorphic projects have emerged recently. This interdisciplinary field was
listed among the top 10 technology breakthroughs of 2014 by the MIT Technology
Review and among the top 10 emerging technologies of 2015 by the World Economic
Forum. NE has a twofold goal: first, a scientific goal to understand the
computational properties of biological neural systems by using models
implemented in integrated circuits (ICs); second, an engineering goal to
exploit the known properties of biological systems to design and implement
efficient devices for engineering applications. Building hardware neural
emulators can be extremely useful for simulating large-scale neural models to
explain how intelligent behavior arises in the brain. The principal advantages
of neuromorphic emulators are that they are highly energy efficient, parallel
and distributed, and require a small silicon area. Thus, compared to
conventional CPUs, these neuromorphic emulators are beneficial in many
engineering applications, such as the porting of deep learning algorithms
for various recognition tasks. In this review article, we describe some of the
most significant neuromorphic spiking emulators, compare the different
architectures and approaches used by them, illustrate their advantages and
drawbacks, and highlight the capabilities that each can deliver to neural
modelers.
| null |
http://arxiv.org/abs/1805.08932v1
|
http://arxiv.org/pdf/1805.08932v1.pdf
| null |
[
"Chetan Singh Thakur",
"Jamal Molin",
"Gert Cauwenberghs",
"Giacomo Indiveri",
"Kundan Kumar",
"Ning Qiao",
"Johannes Schemmel",
"Runchun Wang",
"Elisabetta Chicca",
"Jennifer Olson Hasler",
"Jae-sun Seo",
"Shimeng Yu",
"Yu Cao",
"André van Schaik",
"Ralph Etienne-Cummings"
] |
[] | 2018-05-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/analysis-of-thompson-sampling-for-graphical
|
1805.08930
| null | null |
Analysis of Thompson Sampling for Graphical Bandits Without the Graphs
|
We study multi-armed bandit problems with graph feedback, in which the
decision maker is allowed to observe the neighboring actions of the chosen
action, in a setting where the graph may vary over time and is never fully
revealed to the decision maker. We show that when the feedback graphs are
undirected, the original Thompson Sampling achieves the optimal (within
logarithmic factors) regret $\tilde{O}\left(\sqrt{\beta_0(G)T}\right)$ over
time horizon $T$, where $\beta_0(G)$ is the average independence number of the
latent graphs. To the best of our knowledge, this is the first result showing
that the original Thompson Sampling is optimal for graphical bandits in the
undirected setting. A slightly weaker regret bound of Thompson Sampling in the
directed setting is also presented. To fill this gap, we propose a variant of
Thompson Sampling, that attains the optimal regret in the directed setting
within a logarithmic factor. Both algorithms can be implemented efficiently and
do not require the knowledge of the feedback graphs at any time.
| null |
http://arxiv.org/abs/1805.08930v1
|
http://arxiv.org/pdf/1805.08930v1.pdf
| null |
[
"Fang Liu",
"Zizhan Zheng",
"Ness Shroff"
] |
[
"Thompson Sampling"
] | 2018-05-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/meta-learning-update-rules-for-unsupervised
|
1804.00222
| null | null |
Meta-Learning Update Rules for Unsupervised Representation Learning
|
A major goal of unsupervised learning is to discover data representations
that are useful for subsequent tasks, without access to supervised labels
during training. Typically, this involves minimizing a surrogate objective,
such as the negative log likelihood of a generative model, with the hope that
representations useful for subsequent tasks will arise as a side effect. In
this work, we propose instead to directly target later desired tasks by
meta-learning an unsupervised learning rule which leads to representations
useful for those tasks. Specifically, we target semi-supervised classification
performance, and we meta-learn an algorithm -- an unsupervised weight update
rule -- that produces representations useful for this task. Additionally, we
constrain our unsupervised update rule to be a biologically-motivated,
neuron-local function, which enables it to generalize to different neural
network architectures, datasets, and data modalities. We show that the
meta-learned update rule produces useful features and sometimes outperforms
existing unsupervised learning techniques. We further show that the
meta-learned unsupervised update rule generalizes to train networks with
different widths, depths, and nonlinearities. It also generalizes to train on
data with randomly permuted input dimensions and even generalizes from image
datasets to a text task.
|
Specifically, we target semi-supervised classification performance, and we meta-learn an algorithm -- an unsupervised weight update rule -- that produces representations useful for this task.
|
http://arxiv.org/abs/1804.00222v3
|
http://arxiv.org/pdf/1804.00222v3.pdf
|
ICLR 2019 5
|
[
"Luke Metz",
"Niru Maheswaranathan",
"Brian Cheung",
"Jascha Sohl-Dickstein"
] |
[
"Meta-Learning",
"Representation Learning"
] | 2018-03-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/approximate-newton-based-statistical
|
1805.08920
| null | null |
Approximate Newton-based statistical inference using only stochastic gradients
|
We present a novel statistical inference framework for convex empirical risk
minimization, using approximate stochastic Newton steps. The proposed algorithm
is based on the notion of finite differences and allows the approximation of a
Hessian-vector product from first-order information. In theory, our method
efficiently computes the statistical error covariance in $M$-estimation, both
for unregularized convex learning problems and high-dimensional LASSO
regression, without using exact second order information, or resampling the
entire data set. We also present a stochastic gradient sampling scheme for
statistical inference in non-i.i.d. time series analysis, where we sample
contiguous blocks of indices. In practice, we demonstrate the effectiveness of
our framework on large-scale machine learning problems that go even beyond
convexity: as a highlight, our work can be used to detect certain adversarial
attacks on neural networks.
| null |
http://arxiv.org/abs/1805.08920v2
|
http://arxiv.org/pdf/1805.08920v2.pdf
| null |
[
"Tianyang Li",
"Anastasios Kyrillidis",
"Liu Liu",
"Constantine Caramanis"
] |
[
"Time Series",
"Time Series Analysis"
] | 2018-05-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/effective-dimension-of-exp-concave
|
1805.08268
| null | null |
Optimal Sketching Bounds for Exp-concave Stochastic Minimization
|
We derive optimal statistical and computational complexity bounds for exp-concave stochastic minimization in terms of the effective dimension. For common eigendecay patterns of the population covariance matrix, this quantity is significantly smaller than the ambient dimension. Our results reveal interesting connections to sketching results in numerical linear algebra. In particular, our statistical analysis highlights a novel and natural relationship between algorithmic stability of empirical risk minimization and ridge leverage scores, which play a significant role in sketching-based methods. Our main computational result is a fast implementation of a sketch-to-precondition approach in the context of exp-concave empirical risk minimization.
| null |
https://arxiv.org/abs/1805.08268v7
|
https://arxiv.org/pdf/1805.08268v7.pdf
| null |
[
"Naman Agarwal",
"Alon Gonen"
] |
[] | 2018-05-21T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/distribution-aware-active-learning
|
1805.08916
| null | null |
Distribution Aware Active Learning
|
Discriminative learning machines often need a large set of labeled samples
for training. Active learning (AL) settings assume that the learner has the
freedom to ask an oracle to label its desired samples. Traditional AL
algorithms heuristically choose query samples about which the current learner
is uncertain. This strategy does not make good use of the structure of the
dataset at hand and is prone to be misguided by outliers. To alleviate this
problem, we propose to distill the structural information into a probabilistic
generative model which acts as a \emph{teacher} in our model. The active
\emph{learner} uses this information effectively at each cycle of active
learning. The proposed method is generic and does not depend on the type of
learner and teacher. We then suggest a query criterion for active learning that
is aware of the distribution of the data and is more robust against outliers. Our
method can be combined readily with several other query criteria for active
learning. We provide the formulation and empirically show our idea via toy and
real examples.
| null |
http://arxiv.org/abs/1805.08916v1
|
http://arxiv.org/pdf/1805.08916v1.pdf
| null |
[
"Arash Mehrjou",
"Mehran Khodabandeh",
"Greg Mori"
] |
[
"Active Learning"
] | 2018-05-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/scene-coordinate-and-correspondence-learning
|
1805.08443
| null | null |
Scene Coordinate and Correspondence Learning for Image-Based Localization
|
Scene coordinate regression has become an essential part of current camera
re-localization methods. Different versions, such as regression forests and
deep learning methods, have been successfully applied to estimate the
corresponding camera pose given a single input image. In this work, we propose
to regress the scene coordinates pixel-wise for a given RGB image by using deep
learning. Compared to the recent methods, which usually employ RANSAC to obtain
a robust pose estimate from the established point correspondences, we propose
to regress confidences of these correspondences, which allows us to immediately
discard erroneous predictions and improve the initial pose estimates. Finally,
the resulting confidences can be used to score initial pose hypotheses and aid
in pose refinement, offering a generalized solution to solve this task.
| null |
http://arxiv.org/abs/1805.08443v4
|
http://arxiv.org/pdf/1805.08443v4.pdf
| null |
[
"Mai Bui",
"Shadi Albarqouni",
"Slobodan Ilic",
"Nassir Navab"
] |
[
"Deep Learning",
"Image-Based Localization",
"regression"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/adaptive-monte-carlo-optimization
|
1805.08321
| null | null |
Bandit-Based Monte Carlo Optimization for Nearest Neighbors
|
The celebrated Monte Carlo method estimates an expensive-to-compute quantity by random sampling. Bandit-based Monte Carlo optimization is a general technique for computing the minimum of many such expensive-to-compute quantities by adaptive random sampling. The technique converts an optimization problem into a statistical estimation problem which is then solved via multi-armed bandits. We apply this technique to solve the problem of high-dimensional $k$-nearest neighbors, developing an algorithm which we prove is able to identify exact nearest neighbors with high probability. We show that under regularity assumptions on a dataset of $n$ points in $d$-dimensional space, the complexity of our algorithm scales logarithmically with the dimension of the data as $O\left((n+d)\log^2 \left(\frac{nd}{\delta}\right)\right)$ for error probability $\delta$, rather than linearly as in exact computation requiring $O(nd)$. We corroborate our theoretical results with numerical simulations, showing that our algorithm outperforms both exact computation and state-of-the-art algorithms such as kGraph, NGT, and LSH on real datasets.
|
The celebrated Monte Carlo method estimates an expensive-to-compute quantity by random sampling.
|
https://arxiv.org/abs/1805.08321v4
|
https://arxiv.org/pdf/1805.08321v4.pdf
| null |
[
"Vivek Bagaria",
"Tavor Z. Baharav",
"Govinda M. Kamath",
"David N. Tse"
] |
[
"Clustering",
"Multi-Armed Bandits"
] | 2018-05-21T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/elastic-registration-of-medical-images-with
|
1805.02369
| null | null |
GAN Based Medical Image Registration
|
Conventional approaches to image registration consist of time-consuming iterative methods. Most current deep learning (DL) based registration methods extract deep features to use in an iterative setting. We propose an end-to-end DL method for registering multimodal images. Our approach uses generative adversarial networks (GANs), which eliminates the need for time-consuming iterative methods and directly generates the registered image together with the deformation field. Appropriate constraints in the GAN cost function produce accurately registered images in less than a second. Experiments demonstrate its accuracy for multimodal retinal and cardiac MR image registration.
|
Conventional approaches to image registration consist of time-consuming iterative methods.
|
https://arxiv.org/abs/1805.02369v4
|
https://arxiv.org/pdf/1805.02369v4.pdf
| null |
[
"Dwarikanath Mahapatra"
] |
[
"Image Registration",
"Medical Image Registration"
] | 2018-05-07T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **GAN**, or **Generative Adversarial Network**, is a generative model in which a generator network and a discriminator network are trained adversarially: the generator produces synthetic samples while the discriminator learns to distinguish them from real data.",
"full_name": "Generative Adversarial Network",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "GAN",
"source_title": "Generative Adversarial Networks",
"source_url": "https://arxiv.org/abs/1406.2661v1"
}
] |
https://paperswithcode.com/paper/a-psychopathological-approach-to-safety
|
1805.08915
| null | null |
A Psychopathological Approach to Safety Engineering in AI and AGI
|
The complexity of dynamics in AI techniques is already approaching that of
complex adaptive systems, thus curtailing the feasibility of formal
controllability and reachability analysis in the context of AI safety. It
follows that the envisioned instances of Artificial General Intelligence (AGI)
will also suffer from challenges of complexity. To tackle such issues, we
propose the modeling of deleterious behaviors in AI and AGI as psychological
disorders, thereby enabling the employment of psychopathological approaches to
analysis and control of misbehaviors. Accordingly, we present a discussion on
the feasibility of the psychopathological approaches to AI safety, and propose
general directions for research on modeling, diagnosis, and treatment of
psychological disorders in AGI.
| null |
http://arxiv.org/abs/1805.08915v1
|
http://arxiv.org/pdf/1805.08915v1.pdf
| null |
[
"Vahid Behzadan",
"Arslan Munir",
"Roman V. Yampolskiy"
] |
[] | 2018-05-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/enhancing-chinese-intent-classification-by
|
1805.08914
| null | null |
Enhancing Chinese Intent Classification by Dynamically Integrating Character Features into Word Embeddings with Ensemble Techniques
|
Intent classification has been widely researched on English data with deep
learning approaches that are based on neural networks and word embeddings. The
challenge for Chinese intent classification stems from the fact that, unlike
English where most words are made up of 26 phonologic alphabet letters, Chinese
is logographic, where a Chinese character is a more basic semantic unit that
can be informative and its meaning does not vary too much across contexts. Chinese
word embeddings alone can be inadequate for representing words, and pre-trained
embeddings can suffer from not aligning well with the task at hand. To account
for the inadequacy and leverage Chinese character information, we propose a
low-effort and generic way to dynamically integrate character embedding based
feature maps with word embedding based inputs, whose resulting word-character
embeddings are stacked with a contextual information extraction module to
further incorporate context information for predictions. On top of the proposed
model, we employ an ensemble method to combine single models and obtain the
final result. The approach is data-independent without relying on external
sources like pre-trained word embeddings. The proposed model outperforms
baseline models and existing methods.
| null |
http://arxiv.org/abs/1805.08914v1
|
http://arxiv.org/pdf/1805.08914v1.pdf
| null |
[
"Ruixi Lin",
"Charles Costello",
"Charles Jankowski"
] |
[
"General Classification",
"intent-classification",
"Intent Classification",
"Word Embeddings"
] | 2018-05-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/amortized-inference-regularization
|
1805.08913
| null | null |
Amortized Inference Regularization
|
The variational autoencoder (VAE) is a popular model for density estimation
and representation learning. Canonically, the variational principle suggests to
prefer an expressive inference model so that the variational approximation is
accurate. However, it is often overlooked that an overly-expressive inference
model can be detrimental to the test set performance of both the amortized
posterior approximator and, more importantly, the generative density estimator.
In this paper, we leverage the fact that VAEs rely on amortized inference and
propose techniques for amortized inference regularization (AIR) that control
the smoothness of the inference model. We demonstrate that, by applying AIR, it
is possible to improve VAE generalization on both inference and generative
performance. Our paper challenges the belief that amortized inference is simply
a mechanism for approximating maximum likelihood training and illustrates that
regularization of the amortization family provides a new direction for
understanding and improving generalization in VAEs.
| null |
http://arxiv.org/abs/1805.08913v2
|
http://arxiv.org/pdf/1805.08913v2.pdf
|
NeurIPS 2018 12
|
[
"Rui Shu",
"Hung H. Bui",
"Shengjia Zhao",
"Mykel J. Kochenderfer",
"Stefano Ermon"
] |
[
"Density Estimation",
"Representation Learning"
] | 2018-05-23T00:00:00 |
http://papers.nips.cc/paper/7692-amortized-inference-regularization
|
http://papers.nips.cc/paper/7692-amortized-inference-regularization.pdf
|
amortized-inference-regularization-1
| null |
[
{
"code_snippet_url": "",
"description": "In today’s digital age, Solana has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Solana transaction not confirmed, your Solana wallet not showing balance, or you're trying to recover a lost Solana wallet, knowing where to get help is essential. That’s why the Solana customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Solana Customer Support Number +1-833-534-1729\r\nSolana operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Solana Transaction Not Confirmed\r\nOne of the most common concerns is when a Solana transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Solana Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Solana wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Solana Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. 
",
        "full_name": "Autoencoder",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
        "name": "Autoencoder",
"source_title": "Reducing the Dimensionality of Data with Neural Networks",
"source_url": "https://science.sciencemag.org/content/313/5786/504"
},
{
"code_snippet_url": "",
        "description": "A **Variational Autoencoder (VAE)** is a generative model that learns a latent-variable representation of data by maximizing a variational lower bound (the ELBO) on the log-likelihood. An encoder network approximates the posterior over the latent variables and a decoder network reconstructs the data; both are trained jointly using the reparameterization trick.",
        "full_name": "Variational Autoencoder",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
        "name": "VAE",
"source_title": "Auto-Encoding Variational Bayes",
"source_url": "http://arxiv.org/abs/1312.6114v10"
}
] |
https://paperswithcode.com/paper/affinitynet-semi-supervised-few-shot-learning
|
1805.08905
| null | null |
AffinityNet: semi-supervised few-shot learning for disease type prediction
|
While deep learning has achieved great success in computer vision and many
other fields, currently it does not work very well on patient genomic data with
the "big p, small N" problem (i.e., a relatively small number of samples with
high-dimensional features). In order to make deep learning work with a small
amount of training data, we have to design new models that facilitate few-shot
learning. Here we present the Affinity Network Model (AffinityNet), a data
efficient deep learning model that can learn from a limited number of training
examples and generalize well. The backbone of the AffinityNet model consists of
stacked k-Nearest-Neighbor (kNN) attention pooling layers. The kNN attention
pooling layer is a generalization of the Graph Attention Model (GAM), and can
be applied to not only graphs but also any set of objects regardless of whether
a graph is given or not. As a new deep learning module, kNN attention pooling
layers can be plugged into any neural network model just like convolutional
layers. As a simple special case of kNN attention pooling layer, feature
attention layer can directly select important features that are useful for
classification tasks. Experiments on both synthetic data and cancer genomic
data from TCGA projects show that our AffinityNet model has better
generalization power than conventional neural network models with little
training data. The code is freely available at
https://github.com/BeautyOfWeb/AffinityNet .
|
The kNN attention pooling layer is a generalization of the Graph Attention Model (GAM), and can be applied to not only graphs but also any set of objects regardless of whether a graph is given or not.
|
http://arxiv.org/abs/1805.08905v2
|
http://arxiv.org/pdf/1805.08905v2.pdf
| null |
[
"Tianle Ma",
"Aidong Zhang"
] |
[
"Deep Learning",
"Few-Shot Learning",
"Graph Attention",
"Type prediction",
"Vocal Bursts Type Prediction"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
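The kNN attention pooling layer summarized in the AffinityNet abstract above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the function name, the choice of cosine similarity, and the absence of a learned feature transformation are all assumptions.

```python
import numpy as np

def knn_attention_pooling(X, k=3):
    """Illustrative sketch of kNN attention pooling: each object's new
    representation is an attention-weighted average of the features of
    its k most similar objects (cosine similarity is assumed here)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)  # row-normalize features
    sim = Xn @ Xn.T                                    # pairwise cosine similarity
    out = np.empty_like(X, dtype=float)
    for i in range(len(X)):
        nbrs = np.argsort(-sim[i])[:k]    # indices of the k most similar objects
        w = np.exp(sim[i, nbrs])
        w /= w.sum()                      # softmax attention weights over neighbors
        out[i] = w @ X[nbrs]              # attention-weighted neighbor average
    return out

# Four 2-D points forming two clusters; pooling pulls each point
# toward its own cluster without needing an explicit graph.
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
pooled = knn_attention_pooling(X, k=2)
```

Because the layer only needs pairwise similarities, it applies to any set of objects whether or not a graph is given, which is the property the abstract emphasizes.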
https://paperswithcode.com/paper/non-oscillatory-pattern-learning-for-non
|
1805.08102
| null | null |
PiPs: a Kernel-based Optimization Scheme for Analyzing Non-Stationary 1D Signals
|
This paper proposes a novel kernel-based optimization scheme to handle tasks in the analysis, e.g., signal spectral estimation and single-channel source separation of 1D non-stationary oscillatory data. The key insight of our optimization scheme for reconstructing the time-frequency information is that when a nonparametric regression is applied on some input values, the output regressed points would lie near the oscillatory pattern of the oscillatory 1D signal only if these input values are a good approximation of the ground-truth phase function. In this work, Gaussian Process (GP) is chosen to conduct this nonparametric regression: the oscillatory pattern is encoded as the Pattern-inducing Points (PiPs) which act as the training data points in the GP regression; while the targeted phase function is fed in to compute the correlation kernels, acting as the testing input. Better approximated phase function generates more precise kernels, thus resulting in smaller optimization loss error when comparing the kernel-based regression output with the original signals. To the best of our knowledge, this is the first algorithm that can satisfactorily handle fully non-stationary oscillatory data, close and crossover frequencies, and general oscillatory patterns. Even in the example of a signal produced by slow variation in the parameters of a trigonometric expansion, we show that PiPs admits competitive or better performance in terms of accuracy and robustness than existing state-of-the-art algorithms.
| null |
https://arxiv.org/abs/1805.08102v3
|
https://arxiv.org/pdf/1805.08102v3.pdf
| null |
[
"Jieren Xu",
"Yitong Li",
"Haizhao Yang",
"David Dunson",
"Ingrid Daubechies"
] |
[
"regression",
"Super-Resolution"
] | 2018-05-21T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/ecornn-efficient-computing-of-lstm-rnn
|
1805.08899
| null | null |
Echo: Compiler-based GPU Memory Footprint Reduction for LSTM RNN Training
|
The Long-Short-Term-Memory Recurrent Neural Networks (LSTM RNNs) are a popular class of machine learning models for analyzing sequential data. Their training on modern GPUs, however, is limited by the GPU memory capacity. Our profiling results of the LSTM RNN-based Neural Machine Translation (NMT) model reveal that feature maps of the attention and RNN layers form the memory bottleneck and runtime is unevenly distributed across different layers when training on GPUs. Based on these two observations, we propose to recompute the feature maps rather than stashing them persistently in the GPU memory. While the idea of feature map recomputation has been considered before, existing solutions fail to deliver satisfactory footprint reduction, as they do not address two key challenges. For each feature map recomputation to be effective and efficient, its effect on (1) the total memory footprint, and (2) the total execution time has to be carefully estimated. To this end, we propose *Echo*, a new compiler-based optimization scheme that addresses the first challenge with a practical mechanism that estimates the memory benefits of recomputation over the entire computation graph, and the second challenge by non-conservatively estimating the recomputation overhead leveraging layer specifics. *Echo* reduces the GPU memory footprint automatically and transparently without any changes required to the training source code, and is effective for models beyond LSTM RNNs. We evaluate *Echo* on numerous state-of-the-art machine learning workloads on real systems with modern GPUs and observe footprint reduction ratios of 1.89X on average and 3.13X maximum. Such reduction can be converted into faster training with a larger batch size, savings in GPU energy consumption (e.g., training with one GPU as fast as with four), and/or an increase in the maximum number of layers under the same GPU memory budget.
| null |
https://arxiv.org/abs/1805.08899v5
|
https://arxiv.org/pdf/1805.08899v5.pdf
| null |
[
"Bojian Zheng",
"Abhishek Tiwari",
"Nandita Vijaykumar",
"Gennady Pekhimenko"
] |
[
"GPU",
"Machine Translation",
"NMT"
] | 2018-05-22T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
            "description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. Below you can find a continuously updating list of activation functions used in machine learning.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
            "description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. Below you can find a continuously updating list of activation functions used in machine learning.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] |
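The Sigmoid and Tanh entries in the methods list above give explicit formulas. As a quick sanity check (ours, not from any of the papers), tanh is just a shifted and rescaled sigmoid, $\tanh(x) = 2\sigma(2x) - 1$:

```python
import math

def sigmoid(x):
    # f(x) = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # f(x) = (e^x - e^-x) / (e^x + e^-x)
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

# tanh is a rescaled sigmoid: tanh(x) = 2 * sigmoid(2x) - 1
for x in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    assert abs(tanh(x) - (2.0 * sigmoid(2.0 * x) - 1.0)) < 1e-12
```

This identity is one way to see why tanh inherited the sigmoid's vanishing-gradient problem: both saturate in the same way up to rescaling.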
https://paperswithcode.com/paper/teachers-perception-in-the-classroom
|
1805.08897
| null | null |
Teacher's Perception in the Classroom
|
The ability of a teacher to engage all students in active learning processes
in the classroom constitutes a crucial prerequisite for enhancing students'
achievement. Teachers' attentional processes provide important insights into
teachers' ability to focus their attention on relevant information in the
complexity of classroom interaction and distribute their attention across
students in order to recognize the relevant needs for learning. In this
context, mobile eye tracking is an innovative approach within teaching
effectiveness research to capture teachers' attentional processes while
teaching. However, analyzing mobile eye-tracking data by hand is time consuming
and still limited. In this paper, we introduce a new approach to enhance the
impact of mobile eye tracking by connecting it with computer vision. In mobile
eye tracking videos from an educational study using a standardized small group
situation, we apply a state-of-the-art face detector, create face tracklets, and
introduce a novel method to cluster faces into the number of identities.
Subsequently, teachers' attentional focus is calculated per student during a
teaching unit by associating eye tracking fixations and face tracklets. To the
best of our knowledge, this is the first work to combine computer vision and
mobile eye tracking to model teachers' attention while instructing.
| null |
http://arxiv.org/abs/1805.08897v1
|
http://arxiv.org/pdf/1805.08897v1.pdf
| null |
[
"Ömer Sümer",
"Patricia Goldberg",
"Kathleen Stürmer",
"Tina Seidel",
"Peter Gerjets",
"Ulrich Trautwein",
"Enkelejda Kasneci"
] |
[
"Active Learning"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/step-size-matters-in-deep-learning
|
1805.08890
| null | null |
Step Size Matters in Deep Learning
|
Training a neural network with the gradient descent algorithm gives rise to a
discrete-time nonlinear dynamical system. Consequently, behaviors that are
typically observed in these systems emerge during training, such as convergence
to an orbit but not to a fixed point or dependence of convergence on the
initialization. Step size of the algorithm plays a critical role in these
behaviors: it determines the subset of the local optima that the algorithm can
converge to, and it specifies the magnitude of the oscillations if the
algorithm converges to an orbit. To elucidate the effects of the step size on
training of neural networks, we study the gradient descent algorithm as a
discrete-time dynamical system, and by analyzing the Lyapunov stability of
different solutions, we show the relationship between the step size of the
algorithm and the solutions that can be obtained with this algorithm. The
results provide an explanation for several phenomena observed in practice,
including the deterioration in the training error with increased depth, the
hardness of estimating linear mappings with large singular values, and the
distinct performance of deep residual networks.
|
To elucidate the effects of the step size on training of neural networks, we study the gradient descent algorithm as a discrete-time dynamical system, and by analyzing the Lyapunov stability of different solutions, we show the relationship between the step size of the algorithm and the solutions that can be obtained with this algorithm.
|
http://arxiv.org/abs/1805.08890v2
|
http://arxiv.org/pdf/1805.08890v2.pdf
|
NeurIPS 2018 12
|
[
"Kamil Nar",
"S. Shankar Sastry"
] |
[
"Deep Learning"
] | 2018-05-22T00:00:00 |
http://papers.nips.cc/paper/7603-step-size-matters-in-deep-learning
|
http://papers.nips.cc/paper/7603-step-size-matters-in-deep-learning.pdf
|
step-size-matters-in-deep-learning-1
| null |
[] |
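The step-size behavior described in the abstract above can be reproduced on the simplest possible example. The one-dimensional quadratic below is our own illustration, not taken from the paper: for $f(x) = x^2/2$ the gradient-descent update is the linear dynamical system $x \leftarrow (1 - \eta)x$, which converges to the fixed point iff $|1 - \eta| < 1$.

```python
def gradient_descent(eta, x0=1.0, steps=100):
    """Run gradient descent on f(x) = x^2 / 2, whose gradient is x.
    The update x <- (1 - eta) * x converges to the fixed point 0
    iff |1 - eta| < 1, i.e. 0 < eta < 2."""
    x = x0
    for _ in range(steps):
        x = x - eta * x
    return x

small = abs(gradient_descent(0.5))     # |1 - 0.5| = 0.5 < 1: converges to 0
critical = abs(gradient_descent(2.0))  # |1 - 2.0| = 1: oscillates forever
large = abs(gradient_descent(2.5))     # |1 - 2.5| = 1.5 > 1: diverges
```

The same eigenvalue condition, applied coordinate-wise to the spectrum of the Hessian, is the standard explanation for why directions with large curvature (large singular values) are hard to fit at a fixed step size.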
https://paperswithcode.com/paper/multi-task-maximum-entropy-inverse
|
1805.08882
| null | null |
Multi-task Maximum Entropy Inverse Reinforcement Learning
|
Multi-task Inverse Reinforcement Learning (IRL) is the problem of inferring
multiple reward functions from expert demonstrations. Prior work, built on
Bayesian IRL, is unable to scale to complex environments due to computational
constraints. This paper contributes a formulation of multi-task IRL in the more
computationally efficient Maximum Causal Entropy (MCE) IRL framework.
Experiments show our approach can perform one-shot imitation learning in a
gridworld environment that single-task IRL algorithms need hundreds of
demonstrations to solve. We outline preliminary work using meta-learning to
extend our method to the function approximator setting of modern MCE IRL
algorithms. Evaluating on multi-task variants of common simulated robotics
benchmarks, we discover serious limitations of these IRL algorithms, and
conclude with suggestions for further work.
|
Multi-task Inverse Reinforcement Learning (IRL) is the problem of inferring multiple reward functions from expert demonstrations.
|
http://arxiv.org/abs/1805.08882v2
|
http://arxiv.org/pdf/1805.08882v2.pdf
| null |
[
"Adam Gleave",
"Oliver Habryka"
] |
[
"Imitation Learning",
"Meta-Learning",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/water-from-two-rocks-maximizing-the-mutual
|
1802.08887
| null | null |
Water from Two Rocks: Maximizing the Mutual Information
|
We build a natural connection between the learning problem, co-training, and
forecast elicitation without verification (related to peer-prediction) and
address them simultaneously using the same information theoretic approach.
In co-training/multiview learning, the goal is to aggregate two views of data
into a prediction for a latent label. We show how to optimally combine two
views of data by reducing the problem to an optimization problem. Our work
gives a unified and rigorous approach to the general setting.
In forecast elicitation without verification we seek to design a mechanism
that elicits high quality forecasts from agents in the setting where the
mechanism does not have access to the ground truth. By assuming the agents'
information is independent conditioning on the outcome, we propose mechanisms
where truth-telling is a strict equilibrium for both the single-task and
multi-task settings. Our multi-task mechanism additionally has the property
that the truth-telling equilibrium pays better than any other strategy profile
and strictly better than any other "non-permutation" strategy profile when the
prior satisfies some mild conditions.
| null |
http://arxiv.org/abs/1802.08887v3
|
http://arxiv.org/pdf/1802.08887v3.pdf
| null |
[
"Yuqing Kong",
"Grant Schoenebeck"
] |
[
"Multiview Learning",
"Vocal Bursts Valence Prediction"
] | 2018-02-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/aria-utilizing-richards-curve-for-controlling
|
1805.08878
| null | null |
ARiA: Utilizing Richard's Curve for Controlling the Non-monotonicity of the Activation Function in Deep Neural Nets
|
This work introduces a novel activation unit that can be efficiently employed
in deep neural nets (DNNs) and performs significantly better than the
traditional Rectified Linear Units (ReLU). The function developed is a two
parameter version of the specialized Richard's Curve and we call it Adaptive
Richard's Curve weighted Activation (ARiA). This function is non-monotonous,
analogous to the newly introduced Swish, however allows a precise control over
its non-monotonous convexity by varying the hyper-parameters. We first
demonstrate the mathematical significance of the two parameter ARiA followed by
its application to benchmark problems such as MNIST, CIFAR-10 and CIFAR-100,
where we compare the performance with ReLU and Swish units. Our results
illustrate a significantly superior performance on all these datasets, making
ARiA a potential replacement for ReLU and other activations in DNNs.
| null |
http://arxiv.org/abs/1805.08878v1
|
http://arxiv.org/pdf/1805.08878v1.pdf
| null |
[
"Narendra Patwardhan",
"Madhura Ingalhalikar",
"Rahee Walambe"
] |
[] | 2018-05-22T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "This work introduces a novel activation unit that can be efficiently employed in deep neural nets (DNNs) and performs significantly better than the traditional Rectified Linear Units ([ReLU](https://paperswithcode.com/method/relu)). The function developed is a two parameter version of the specialized Richard's Curve and we call it Adaptive Richard's Curve weighted Activation (ARiA). This function is non-monotonous, analogous to the newly introduced [Swish](https://paperswithcode.com/method/swish), however allows a precise control over its non-monotonous convexity by varying the hyper-parameters. We first demonstrate the mathematical significance of the two parameter ARiA followed by its application to benchmark problems such as MNIST, CIFAR-10 and CIFAR-100, where we compare the performance with ReLU and Swish units. Our results illustrate a significantly superior performance on all these datasets, making ARiA a potential replacement for ReLU and other activations in DNNs.",
"full_name": "Adaptive Richard's Curve Weighted Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
            "description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. Below you can find a continuously updating list of activation functions used in machine learning.",
"name": "Activation Functions",
"parent": null
},
"name": "ARiA",
"source_title": "ARiA: Utilizing Richard's Curve for Controlling the Non-monotonicity of the Activation Function in Deep Neural Nets",
"source_url": "http://arxiv.org/abs/1805.08878v1"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
            "description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. Below you can find a continuously updating list of activation functions used in machine learning.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
        "description": "**Swish** is an activation function of the form $f\\left(x\\right) = x \\cdot \\text{sigmoid}\\left(\\beta x\\right)$, where $\\beta$ is a constant or a trainable parameter. It was found via automated search and often matches or outperforms [ReLU](https://paperswithcode.com/method/relu) in deep networks.",
"full_name": "(FiLe@Against@Claim)How do I file a claim against Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural network layers, typically after an affine transformation, to introduce non-linearity. Common examples include sigmoid, tanh and ReLU. Below you can find a continuously updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "Swish",
"source_title": "Searching for Activation Functions",
"source_url": "http://arxiv.org/abs/1710.05941v2"
},
{
"code_snippet_url": "",
"description": "**Rectified Linear Units**, or **ReLUs**, are a type of activation function that is linear in the positive dimension and zero in the negative dimension: $f(x) = \\max(0, x)$. The kink in the function at zero is the source of the non-linearity.",
"full_name": "Rectified Linear Units",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural network layers, typically after an affine transformation, to introduce non-linearity. Common examples include sigmoid, tanh and ReLU. Below you can find a continuously updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/unsupervised-domain-adaptation-using-2
|
1805.08874
| null | null |
Unsupervised Domain Adaptation using Regularized Hyper-graph Matching
|
Domain adaptation (DA) addresses the real-world image classification problem
of discrepancy between training (source) and testing (target) data
distributions. We propose an unsupervised DA method that considers the presence
of only unlabelled data in the target domain. Our approach centers on finding
matches between samples of the source and target domains. The matches are
obtained by treating the source and target domains as hyper-graphs and carrying
out a class-regularized hyper-graph matching using first-, second- and
third-order similarities between the graphs. We have also developed a
computationally efficient algorithm by initially selecting a subset of the
samples to construct a graph and then developing a customized optimization
routine for graph-matching based on Conditional Gradient and Alternating
Direction Multiplier Method. This allows the proposed method to be used widely.
We also performed a set of experiments on standard object recognition datasets
to validate the effectiveness of our framework over state-of-the-art
approaches.
| null |
http://arxiv.org/abs/1805.08874v2
|
http://arxiv.org/pdf/1805.08874v2.pdf
| null |
[
"Debasmit Das",
"C. S. George Lee"
] |
[
"Domain Adaptation",
"Graph Matching",
"image-classification",
"Image Classification",
"Object Recognition",
"Unsupervised Domain Adaptation"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/learning-timememory-efficient-deep
|
1706.00046
| null | null |
Learning Time/Memory-Efficient Deep Architectures with Budgeted Super Networks
|
We propose to focus on the problem of discovering neural network
architectures efficient in terms of both prediction quality and cost. For
instance, our approach is able to solve the following tasks: learn a neural
network able to predict well in less than 100 milliseconds or learn an
efficient model that fits in a 50 Mb memory. Our contribution is a novel family
of models called Budgeted Super Networks (BSN). They are learned using gradient
descent techniques applied on a budgeted learning objective function which
integrates a maximum authorized cost, while making no assumption on the nature
of this cost. We present a set of experiments on computer vision problems and
analyze the ability of our technique to deal with three different costs: the
computation cost, the memory consumption cost and a distributed computation
cost. We particularly show that our model can discover neural network
architectures that have a better accuracy than the ResNet and Convolutional
Neural Fabrics architectures on CIFAR-10 and CIFAR-100, at a lower cost.
|
We propose to focus on the problem of discovering neural network architectures efficient in terms of both prediction quality and cost.
|
http://arxiv.org/abs/1706.00046v4
|
http://arxiv.org/pdf/1706.00046v4.pdf
|
CVPR 2018 6
|
[
"Tom Veniat",
"Ludovic Denoyer"
] |
[] | 2017-05-31T00:00:00 |
http://openaccess.thecvf.com/content_cvpr_2018/html/Veniat_Learning_TimeMemory-Efficient_Deep_CVPR_2018_paper.html
|
http://openaccess.thecvf.com/content_cvpr_2018/papers/Veniat_Learning_TimeMemory-Efficient_Deep_CVPR_2018_paper.pdf
|
learning-timememory-efficient-deep-1
| null |
[
{
"code_snippet_url": "",
"description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)",
"full_name": "Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Average Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Rectified Linear Units**, or **ReLUs**, are a type of activation function that is linear in the positive dimension and zero in the negative dimension: $f(x) = \\max(0, x)$. The kink in the function at zero is the source of the non-linearity.",
"full_name": "Rectified Linear Units",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural network layers, typically after an affine transformation, to introduce non-linearity. Common examples include sigmoid, tanh and ReLU. Below you can find a continuously updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116",
"description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.",
"full_name": "Batch Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Batch Normalization",
"source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"source_url": "http://arxiv.org/abs/1502.03167v3"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L75",
"description": "A **Bottleneck Residual Block** is a variant of the [residual block](https://paperswithcode.com/method/residual-block) that utilises 1x1 convolutions to create a bottleneck. The use of a bottleneck reduces the number of parameters and matrix multiplications. The idea is to make residual blocks as thin as possible to increase depth and have less parameters. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture, and are used as part of deeper ResNets such as ResNet-50 and ResNet-101.",
"full_name": "Bottleneck Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Bottleneck Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/baa592b215804927e28638f6a7f3318cbc411d49/torchvision/models/resnet.py#L157",
"description": "**Global Average Pooling** is a pooling operation designed to replace fully connected layers in classical CNNs. The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers on top of the feature maps, we take the average of each feature map, and the resulting vector is fed directly into the [softmax](https://paperswithcode.com/method/softmax) layer. \r\n\r\nOne advantage of global [average pooling](https://paperswithcode.com/method/average-pooling) over the fully connected layers is that it is more native to the [convolution](https://paperswithcode.com/method/convolution) structure by enforcing correspondences between feature maps and categories. Thus the feature maps can be easily interpreted as categories confidence maps. Another advantage is that there is no parameter to optimize in the global average pooling thus overfitting is avoided at this layer. Furthermore, global average pooling sums out the spatial information, thus it is more robust to spatial translations of the input.",
"full_name": "Global Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Global Average Pooling",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L35",
"description": "**Residual Blocks** are skip-connection blocks that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture.\r\n \r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$. The $\\mathcal{F}({x})$ acts like a residual, hence the name 'residual block'.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers. Having skip connections allows the network to more easily learn identity-like mappings.\r\n\r\nNote that in practice, [Bottleneck Residual Blocks](https://paperswithcode.com/method/bottleneck-residual-block) are used for deeper ResNets, such as ResNet-50 and ResNet-101, as these bottleneck blocks are less computationally intensive.",
"full_name": "Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/0adb5843766092fba584791af76383125fd0d01c/torch/nn/init.py#L389",
"description": "**Kaiming Initialization**, or **He Initialization**, is an initialization method for neural networks that takes into account the non-linearity of activation functions, such as [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nA proper initialization method should avoid reducing or magnifying the magnitudes of input signals exponentially. Using a derivation they work out that the condition to stop this happening is:\r\n\r\n$$\\frac{1}{2}n\\_{l}\\text{Var}\\left[w\\_{l}\\right] = 1 $$\r\n\r\nThis implies an initialization scheme of:\r\n\r\n$$ w\\_{l} \\sim \\mathcal{N}\\left(0, 2/n\\_{l}\\right)$$\r\n\r\nThat is, a zero-centered Gaussian with standard deviation of $\\sqrt{2/{n}\\_{l}}$ (variance shown in equation above). Biases are initialized at $0$.",
"full_name": "Kaiming Initialization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Initialization** methods are used to initialize the weights in a neural network. Below can you find a continuously updating list of initialization methods.",
"name": "Initialization",
"parent": null
},
"name": "Kaiming Initialization",
"source_title": "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification",
"source_url": "http://arxiv.org/abs/1502.01852v1"
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Rather than hoping each stack of layers directly fits a desired underlying mapping, residual networks let these layers fit a residual mapping. They stack [residual blocks](https://paperswithcode.com/method/residual-block) on top of each other to form networks: e.g. a ResNet-50 has fifty layers built from these blocks.",
"full_name": "Residual Network",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutional Neural Networks** are used to extract features from images (and videos), employing convolutions as their primary operator. Below you can find a continuously updating list of convolutional neural networks.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "ResNet",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
}
] |
https://paperswithcode.com/paper/functional-decision-theory-a-new-theory-of
|
1710.05060
| null | null |
Functional Decision Theory: A New Theory of Instrumental Rationality
|
This paper describes and motivates a new decision theory known as functional
decision theory (FDT), as distinct from causal decision theory and evidential
decision theory. Functional decision theorists hold that the normative
principle for action is to treat one's decision as the output of a fixed
mathematical function that answers the question, "Which output of this very
function would yield the best outcome?" Adhering to this principle delivers a
number of benefits, including the ability to maximize wealth in an array of
traditional decision-theoretic and game-theoretic problems where CDT and EDT
perform poorly. Using one simple and coherent decision rule, functional
decision theorists (for example) achieve more utility than CDT on Newcomb's
problem, more utility than EDT on the smoking lesion problem, and more utility
than both in Parfit's hitchhiker problem. In this paper, we define FDT, explore
its prescriptions in a number of different decision problems, compare it to CDT
and EDT, and give philosophical justifications for FDT as a normative theory of
decision-making.
| null |
http://arxiv.org/abs/1710.05060v2
|
http://arxiv.org/pdf/1710.05060v2.pdf
| null |
[
"Eliezer Yudkowsky",
"Nate Soares"
] |
[
"Decision Making"
] | 2017-10-13T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/divide-and-conquer-networks
|
1611.02401
| null |
B1jscMbAW
|
Divide and Conquer Networks
|
We consider the learning of algorithmic tasks by mere observation of
input-output pairs. Rather than studying this as a black-box discrete
regression problem with no assumption whatsoever on the input-output mapping,
we concentrate on tasks that are amenable to the principle of divide and
conquer, and study what are its implications in terms of learning. This
principle creates a powerful inductive bias that we leverage with neural
architectures that are defined recursively and dynamically, by learning two
scale-invariant atomic operations: how to split a given input into smaller
sets, and how to merge two partially solved tasks into a larger partial
solution. Our model can be trained in weakly supervised environments, namely by
just observing input-output pairs, and in even weaker environments, using a
non-differentiable reward signal. Moreover, thanks to the dynamic aspect of our
architecture, we can incorporate the computational complexity as a
regularization term that can be optimized by backpropagation. We demonstrate
the flexibility and efficiency of the Divide-and-Conquer Network on several
combinatorial and geometric tasks: convex hull, clustering, knapsack and
Euclidean TSP. Thanks to the dynamic programming nature of our model, we show
significant improvements in terms of generalization error and computational
complexity.
|
Moreover, thanks to the dynamic aspect of our architecture, we can incorporate the computational complexity as a regularization term that can be optimized by backpropagation.
|
http://arxiv.org/abs/1611.02401v7
|
http://arxiv.org/pdf/1611.02401v7.pdf
|
ICLR 2018 1
|
[
"Alex Nowak-Vila",
"David Folqué",
"Joan Bruna"
] |
[
"Clustering",
"Inductive Bias"
] | 2016-11-08T00:00:00 |
https://openreview.net/forum?id=B1jscMbAW
|
https://openreview.net/pdf?id=B1jscMbAW
|
divide-and-conquer-networks-1
| null |
[] |
https://paperswithcode.com/paper/deep-denoising-rate-optimal-recovery-of
|
1805.08855
| null | null |
Rate-Optimal Denoising with Deep Neural Networks
|
Deep neural networks provide state-of-the-art performance for image
denoising, where the goal is to recover a near noise-free image from a noisy
observation. The underlying principle is that neural networks trained on large
datasets have empirically been shown to be able to generate natural images well
from a low-dimensional latent representation of the image. Given such a
generator network, a noisy image can be denoised by i) finding the closest
image in the range of the generator or by ii) passing it through an
encoder-generator architecture (known as an autoencoder). However, there is
little theory to justify this success, let alone to predict the denoising
performance as a function of the network parameters. In this paper we consider
the problem of denoising an image from additive Gaussian noise using the two
generator based approaches. In both cases, we assume the image is well
described by a deep neural network with ReLU activation functions, mapping a
$k$-dimensional code to an $n$-dimensional image. In the case of the
autoencoder, we show that the feedforward network reduces noise energy by a
factor of $O(k/n)$. In the case of optimizing over the range of a generative
model, we state and analyze a simple gradient algorithm that minimizes a
non-convex loss function, and provably reduces noise energy by a factor of
$O(k/n)$. We also demonstrate in numerical experiments that this denoising
performance is, indeed, achieved by generative priors learned from data.
| null |
http://arxiv.org/abs/1805.08855v2
|
http://arxiv.org/pdf/1805.08855v2.pdf
|
ICLR 2019 5
|
[
"Reinhard Heckel",
"Wen Huang",
"Paul Hand",
"Vladislav Voroninski"
] |
[
"Denoising",
"Image Denoising"
] | 2018-05-22T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "A **Feedforward Network**, or a **Multilayer Perceptron (MLP)**, is a neural network with solely densely connected layers. This is the classic neural network architecture of the literature. It consists of inputs $x$ passed through units $h$ (of which there can be many layers) to predict a target $y$. Activation functions are generally chosen to be non-linear to allow for flexible functional approximation.\r\n\r\nImage Source: Deep Learning, Goodfellow et al",
"full_name": "Feedforward Network",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Feedforward Network",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/optimal-transport-for-multi-source-domain
|
1803.04899
| null | null |
Optimal Transport for Multi-source Domain Adaptation under Target Shift
|
In this paper, we propose to tackle the problem of reducing discrepancies
between multiple domains referred to as multi-source domain adaptation and
consider it under the target shift assumption: in all domains we aim to solve a
classification problem with the same output classes, but with labels'
proportions differing across them. This problem, generally ignored in the vast
majority of papers on domain adaptation, is nevertheless critical in
real-world applications, and we theoretically show its impact on the adaptation
success. To address this issue, we design a method based on optimal transport,
a theory that has been successfully used to tackle adaptation problems in
machine learning. Our method performs multi-source adaptation and target shift
correction simultaneously by learning the class probabilities of the unlabeled
target sample and the coupling allowing to align two (or more) probability
distributions. Experiments on both synthetic and real-world data related to
satellite image segmentation task show the superiority of the proposed method
over the state-of-the-art.
|
In this paper, we propose to tackle the problem of reducing discrepancies between multiple domains referred to as multi-source domain adaptation and consider it under the target shift assumption: in all domains we aim to solve a classification problem with the same output classes, but with labels' proportions differing across them.
|
http://arxiv.org/abs/1803.04899v3
|
http://arxiv.org/pdf/1803.04899v3.pdf
| null |
[
"Ievgen Redko",
"Nicolas Courty",
"Rémi Flamary",
"Devis Tuia"
] |
[
"Domain Adaptation",
"Image Segmentation",
"Semantic Segmentation"
] | 2018-03-13T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/mean-actor-critic
|
1709.00503
| null | null |
Mean Actor Critic
|
We propose a new algorithm, Mean Actor-Critic (MAC), for discrete-action
continuous-state reinforcement learning. MAC is a policy gradient algorithm
that uses the agent's explicit representation of all action values to estimate
the gradient of the policy, rather than using only the actions that were
actually executed. We prove that this approach reduces variance in the policy
gradient estimate relative to traditional actor-critic methods. We show
empirical results on two control domains and on six Atari games, where MAC is
competitive with state-of-the-art policy search algorithms.
|
We propose a new algorithm, Mean Actor-Critic (MAC), for discrete-action continuous-state reinforcement learning.
|
http://arxiv.org/abs/1709.00503v2
|
http://arxiv.org/pdf/1709.00503v2.pdf
| null |
[
"Cameron Allen",
"Kavosh Asadi",
"Melrose Roderick",
"Abdel-rahman Mohamed",
"George Konidaris",
"Michael Littman"
] |
[
"Atari Games",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2017-09-01T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/counterfactual-mean-embedding-a-kernel-method
|
1805.08845
| null | null |
Counterfactual Mean Embeddings
|
Counterfactual inference has become a ubiquitous tool in online advertisement, recommendation systems, medical diagnosis, and econometrics. Accurate modeling of outcome distributions associated with different interventions -- known as counterfactual distributions -- is crucial for the success of these applications. In this work, we propose to model counterfactual distributions using a novel Hilbert space representation called counterfactual mean embedding (CME). The CME embeds the associated counterfactual distribution into a reproducing kernel Hilbert space (RKHS) endowed with a positive definite kernel, which allows us to perform causal inference over the entire landscape of the counterfactual distribution. Based on this representation, we propose a distributional treatment effect (DTE) that can quantify the causal effect over entire outcome distributions. Our approach is nonparametric as the CME can be estimated under the unconfoundedness assumption from observational data without requiring any parametric assumption about the underlying distributions. We also establish a rate of convergence of the proposed estimator which depends on the smoothness of the conditional mean and the Radon-Nikodym derivative of the underlying marginal distributions. Furthermore, our framework allows for more complex outcomes such as images, sequences, and graphs. Our experimental results on synthetic data and off-policy evaluation tasks demonstrate the advantages of the proposed estimator.
|
In this work, we propose to model counterfactual distributions using a novel Hilbert space representation called counterfactual mean embedding (CME).
|
https://arxiv.org/abs/1805.08845v4
|
https://arxiv.org/pdf/1805.08845v4.pdf
| null |
[
"Krikamol Muandet",
"Motonobu Kanagawa",
"Sorawit Saengkyongam",
"Sanparith Marukatat"
] |
[
"Causal Inference",
"counterfactual",
"Counterfactual Inference",
"Econometrics",
"Medical Diagnosis",
"Off-policy evaluation",
"Recommendation Systems"
] | 2018-05-22T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "Causal inference is the process of drawing a conclusion about a causal connection based on the conditions of the occurrence of an effect. The main difference between causal inference and inference of association is that the former analyzes the response of the effect variable when the cause is changed.",
"full_name": "Causal inference",
"introduced_year": 2000,
"main_collection": null,
"name": "Causal inference",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/distribution-matching-losses-can-hallucinate
|
1805.08841
| null | null |
Distribution Matching Losses Can Hallucinate Features in Medical Image Translation
|
This paper discusses how distribution matching losses, such as those used in
CycleGAN, when used to synthesize medical images can lead to mis-diagnosis of
medical conditions. It seems appealing to use these new image synthesis methods
for translating images from a source to a target domain because they can
produce high quality images and some even do not require paired data. However,
the basis of how these image translation models work is through matching the
translation output to the distribution of the target domain. This can cause an
issue when the data provided in the target domain has an over or under
representation of some classes (e.g. healthy or sick). When the output of an
algorithm is a transformed image there are uncertainties whether all known and
unknown class labels have been preserved or changed. Therefore, we recommend
that these translated images should not be used for direct interpretation (e.g.
by doctors) because they may lead to misdiagnosis of patients based on
hallucinated image features by an algorithm that matches a distribution.
However there are many recent papers that seem as though this is the goal.
|
When the output of an algorithm is a transformed image there are uncertainties whether all known and unknown class labels have been preserved or changed.
|
http://arxiv.org/abs/1805.08841v3
|
http://arxiv.org/pdf/1805.08841v3.pdf
| null |
[
"Joseph Paul Cohen",
"Margaux Luck",
"Sina Honari"
] |
[
"Image Generation",
"Translation"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/clustering-what-both-theoreticians-and
|
1805.08838
| null | null |
Clustering - What Both Theoreticians and Practitioners are Doing Wrong
|
Unsupervised learning is widely recognized as one of the most important
challenges facing machine learning nowadays. However, in spite of hundreds of
papers on the topic being published every year, current theoretical
understanding and practical implementations of such tasks, in particular of
clustering, is very rudimentary. This note focuses on clustering. I claim that
the most significant challenge for clustering is model selection. In contrast
with other common computational tasks, for clustering, different algorithms
often yield drastically different outcomes. Therefore, the choice of a
clustering algorithm, and their parameters (like the number of clusters) may
play a crucial role in the usefulness of an output clustering solution.
However, currently there exists no methodical guidance for clustering
tool-selection for a given clustering task. Practitioners pick the algorithms
they use without awareness of the implications of their choices, and the vast
majority of theory of clustering papers focus on providing savings to the
resources needed to solve optimization problems that arise from picking some
concrete clustering objective. Savings that pale in comparison to the costs of
mismatch between those objectives and the intended use of clustering results. I
argue the severity of this problem and describe some recent proposals aiming to
address this crucial lacuna.
| null |
http://arxiv.org/abs/1805.08838v1
|
http://arxiv.org/pdf/1805.08838v1.pdf
| null |
[
"Shai Ben-David"
] |
[
"Clustering",
"Model Selection"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/nonparametric-density-estimation-under
|
1805.08836
| null | null |
Nonparametric Density Estimation under Adversarial Losses
|
We study minimax convergence rates of nonparametric density estimation under
a large class of loss functions called "adversarial losses", which, besides
classical $\mathcal{L}^p$ losses, includes maximum mean discrepancy (MMD),
Wasserstein distance, and total variation distance. These losses are closely
related to the losses encoded by discriminator networks in generative
adversarial networks (GANs). In a general framework, we study how the choice of
loss and the assumed smoothness of the underlying density together determine
the minimax rate. We also discuss implications for training GANs based on deep
ReLU networks, and more general connections to learning implicit generative
models in a minimax statistical sense.
| null |
http://arxiv.org/abs/1805.08836v2
|
http://arxiv.org/pdf/1805.08836v2.pdf
|
NeurIPS 2018 12
|
[
"Shashank Singh",
"Ananya Uppal",
"Boyue Li",
"Chun-Liang Li",
"Manzil Zaheer",
"Barnabás Póczos"
] |
[
"Density Estimation"
] | 2018-05-22T00:00:00 |
http://papers.nips.cc/paper/8225-nonparametric-density-estimation-under-adversarial-losses
|
http://papers.nips.cc/paper/8225-nonparametric-density-estimation-under-adversarial-losses.pdf
|
nonparametric-density-estimation-under-1
| null |
[] |
https://paperswithcode.com/paper/local-differential-privacy-for-evolving-data
|
1802.07128
| null | null |
Local Differential Privacy for Evolving Data
|
There are now several large scale deployments of differential privacy used to
collect statistical information about users. However, these deployments
periodically recollect the data and recompute the statistics using algorithms
designed for a single use. As a result, these systems do not provide meaningful
privacy guarantees over long time scales. Moreover, existing techniques to
mitigate this effect do not apply in the "local model" of differential privacy
that these systems use.
In this paper, we introduce a new technique for local differential privacy
that makes it possible to maintain up-to-date statistics over time, with
privacy guarantees that degrade only in the number of changes in the underlying
distribution rather than the number of collection periods. We use our technique
for tracking a changing statistic in the setting where users are partitioned
into an unknown collection of groups, and at every time period each user draws
a single bit from a common (but changing) group-specific distribution. We also
provide an application to frequency and heavy-hitter estimation.
| null |
http://arxiv.org/abs/1802.07128v3
|
http://arxiv.org/pdf/1802.07128v3.pdf
|
NeurIPS 2018 12
|
[
"Matthew Joseph",
"Aaron Roth",
"Jonathan Ullman",
"Bo Waggoner"
] |
[] | 2018-02-20T00:00:00 |
http://papers.nips.cc/paper/7505-local-differential-privacy-for-evolving-data
|
http://papers.nips.cc/paper/7505-local-differential-privacy-for-evolving-data.pdf
|
local-differential-privacy-for-evolving-data-1
| null |
[] |
https://paperswithcode.com/paper/characteristic-and-universal-tensor-product
|
1708.08157
| null | null |
Characteristic and Universal Tensor Product Kernels
|
Maximum mean discrepancy (MMD), also called energy distance or N-distance in
statistics and Hilbert-Schmidt independence criterion (HSIC), specifically
distance covariance in statistics, are among the most popular and successful
approaches to quantify the difference and independence of random variables,
respectively. Thanks to their kernel-based foundations, MMD and HSIC are
applicable on a wide variety of domains. Despite their tremendous success,
quite little is known about when HSIC characterizes independence and when MMD
with tensor product kernel can discriminate probability distributions. In this
paper, we answer these questions by studying various notions of characteristic
property of the tensor product kernel.
| null |
http://arxiv.org/abs/1708.08157v4
|
http://arxiv.org/pdf/1708.08157v4.pdf
| null |
[
"Zoltan Szabo",
"Bharath K. Sriperumbudur"
] |
[] | 2017-08-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/rapid-seismic-domain-transfer-seismic
|
1805.08826
| null | null |
Rapid seismic domain transfer: Seismic velocity inversion and modeling using deep generative neural networks
|
Traditional physics-based approaches to infer sub-surface properties such as
full-waveform inversion or reflectivity inversion are time-consuming and
computationally expensive. We present a deep-learning technique that eliminates
the need for these computationally complex methods by posing the problem as one
of domain transfer. Our solution is based on a deep convolutional generative
adversarial network and dramatically reduces computation time. Training based
on two different types of synthetic data produced a neural network that
generates realistic velocity models when applied to a real dataset. The
system's ability to generalize means it is robust against the inherent
occurrence of velocity errors and artifacts in both training and test datasets.
| null |
http://arxiv.org/abs/1805.08826v1
|
http://arxiv.org/pdf/1805.08826v1.pdf
| null |
[
"Lukas Mosser",
"Wouter Kimman",
"Jesper Dramsch",
"Steve Purves",
"Alfredo De la Fuente",
"Graham Ganssle"
] |
[
"Generative Adversarial Network"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/combo-loss-handling-input-and-output
|
1805.02798
| null | null |
Combo Loss: Handling Input and Output Imbalance in Multi-Organ Segmentation
|
Simultaneous segmentation of multiple organs from different medical imaging modalities is a crucial task as it can be utilized for computer-aided diagnosis, computer-assisted surgery, and therapy planning. Thanks to the recent advances in deep learning, several deep neural networks for medical image segmentation have been introduced successfully for this purpose. In this paper, we focus on learning a deep multi-organ segmentation network that labels voxels. In particular, we examine the critical choice of a loss function in order to handle the notorious imbalance problem that plagues both the input and output of a learning model. The input imbalance refers to the class-imbalance in the input training samples (i.e., small foreground objects embedded in an abundance of background voxels, as well as organs of varying sizes). The output imbalance refers to the imbalance between the false positives and false negatives of the inference model. In order to tackle both types of imbalance during training and inference, we introduce a new curriculum learning based loss function. Specifically, we leverage Dice similarity coefficient to deter model parameters from being held at bad local minima and at the same time gradually learn better model parameters by penalizing for false positives/negatives using a cross entropy term. We evaluated the proposed loss function on three datasets: whole body positron emission tomography (PET) scans with 5 target organs, magnetic resonance imaging (MRI) prostate scans, and ultrasound echocardiography images with a single target organ, i.e., the left ventricle. We show that a simple network architecture with the proposed integrative loss function can outperform state-of-the-art methods and results of the competing methods can be improved when our proposed loss is used.
|
The output imbalance refers to the imbalance between the false positives and false negatives of the inference model.
|
https://arxiv.org/abs/1805.02798v6
|
https://arxiv.org/pdf/1805.02798v6.pdf
| null |
[
"Saeid Asgari Taghanaki",
"Yefeng Zheng",
"S. Kevin Zhou",
"Bogdan Georgescu",
"Puneet Sharma",
"Daguang Xu",
"Dorin Comaniciu",
"Ghassan Hamarneh"
] |
[
"Image Segmentation",
"Medical Image Segmentation",
"Organ Segmentation",
"Segmentation",
"Semantic Segmentation"
] | 2018-05-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/geometry-based-data-generation
|
1802.04927
| null | null |
Geometry-Based Data Generation
|
Many generative models attempt to replicate the density of their input data.
However, this approach is often undesirable, since data density is highly
affected by sampling biases, noise, and artifacts. We propose a method called
SUGAR (Synthesis Using Geometrically Aligned Random-walks) that uses a
diffusion process to learn a manifold geometry from the data. Then, it
generates new points evenly along the manifold by pulling randomly generated
points into its intrinsic structure using a diffusion kernel. SUGAR equalizes
the density along the manifold by selectively generating points in sparse areas
of the manifold. We demonstrate how the approach corrects sampling biases and
artifacts, while also revealing intrinsic patterns (e.g. progression) and
relations in the data. The method is applicable for correcting missing data,
finding hypothetical data points, and learning relationships between data
features.
|
Then, it generates new points evenly along the manifold by pulling randomly generated points into its intrinsic structure using a diffusion kernel.
|
http://arxiv.org/abs/1802.04927v4
|
http://arxiv.org/pdf/1802.04927v4.pdf
| null |
[
"Ofir Lindenbaum",
"Jay S. Stanley III",
"Guy Wolf",
"Smita Krishnaswamy"
] |
[] | 2018-02-14T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/fast-neural-architecture-construction-using
|
1803.06744
| null | null |
Fast Neural Architecture Construction using EnvelopeNets
|
Fast Neural Architecture Construction (NAC) is a method to construct deep
network architectures by pruning and expansion of a base network. In recent
years, several automated search methods for neural network architectures have
been proposed using methods such as evolutionary algorithms and reinforcement
learning. These methods use a single scalar objective function (usually
accuracy) that is evaluated after a full training and evaluation cycle. In
contrast NAC directly compares the utility of different filters using
statistics derived from filter featuremaps reach a state where the utility of
different filters within a network can be compared and hence can be used to
construct networks. The training epochs needed for filters within a network to
reach this state is much less than the training epochs needed for the accuracy
of a network to stabilize. NAC exploits this finding to construct convolutional
neural nets (CNNs) with close to state of the art accuracy, in < 1 GPU day,
faster than most of the current neural architecture search methods. The
constructed networks show close to state of the art performance on the image
classification problem on well known datasets (CIFAR-10, ImageNet) and
consistently show better performance than hand constructed and randomly
generated networks of the same depth, operators and approximately the same
number of parameters.
|
Fast Neural Architecture Construction (NAC) is a method to construct deep network architectures by pruning and expansion of a base network.
|
http://arxiv.org/abs/1803.06744v3
|
http://arxiv.org/pdf/1803.06744v3.pdf
| null |
[
"Purushotham Kamath",
"Abhishek Singh",
"Debo Dutta"
] |
[
"Evolutionary Algorithms",
"GPU",
"image-classification",
"Image Classification",
"Neural Architecture Search",
"Reinforcement Learning"
] | 2018-03-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Pruning",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Model Compression",
"parent": null
},
"name": "Pruning",
"source_title": "Pruning Filters for Efficient ConvNets",
"source_url": "http://arxiv.org/abs/1608.08710v3"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] |
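The method entries in the record above define the sigmoid, tanh, and softmax functions symbolically. As an illustrative aside (not part of the dataset schema), those formulas can be sketched directly in NumPy:

```python
import numpy as np

def sigmoid(x):
    # f(x) = 1 / (1 + exp(-x)); squashes inputs into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def softmax(z):
    # Subtract the max before exponentiating for numerical stability;
    # the result is a probability vector (non-negative, sums to 1).
    e = np.exp(z - np.max(z))
    return e / e.sum()

x = np.array([-1.0, 0.0, 1.0])
probs = softmax(x)
print(sigmoid(0.0))   # 0.5 at the origin
print(np.tanh(x))     # tanh squashes into (-1, 1)
print(probs, probs.sum())
```

Note how the max-shift in `softmax` leaves the result unchanged mathematically but avoids overflow for large logits.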
https://paperswithcode.com/paper/infinite-task-learning-with-rkhss
|
1805.08809
| null | null |
Infinite-Task Learning with RKHSs
|
Machine learning has witnessed tremendous success in solving tasks depending
on a single hyperparameter. When considering simultaneously a finite number of
tasks, multi-task learning enables one to account for the similarities of the
tasks via appropriate regularizers. A step further consists of learning a
continuum of tasks for various loss functions. A promising approach, called
\emph{Parametric Task Learning}, has paved the way in the continuum setting for
affine models and piecewise-linear loss functions. In this work, we introduce a
novel approach called \emph{Infinite Task Learning} whose goal is to learn a
function whose output is a function over the hyperparameter space. We leverage
tools from operator-valued kernels and the associated vector-valued RKHSs that
provide an explicit control over the role of the hyperparameters, and also
allows us to consider new type of constraints. We provide generalization
guarantees to the suggested scheme and illustrate its efficiency in
cost-sensitive classification, quantile regression and density level set
estimation.
| null |
http://arxiv.org/abs/1805.08809v3
|
http://arxiv.org/pdf/1805.08809v3.pdf
| null |
[
"Romain Brault",
"Alex Lambert",
"Zoltán Szabó",
"Maxime Sangnier",
"Florence d'Alché-Buc"
] |
[
"Multi-Task Learning",
"quantile regression"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/deformable-part-networks
|
1805.08808
| null | null |
Deformable Part Networks
|
In this paper we propose novel Deformable Part Networks (DPNs) to learn {\em
pose-invariant} representations for 2D object recognition. In contrast to the
state-of-the-art pose-aware networks such as CapsNet \cite{sabour2017dynamic}
and STN \cite{jaderberg2015spatial}, DPNs can be naturally {\em interpreted} as
an efficient solver for a challenging detection problem, namely Localized
Deformable Part Models (LDPMs) where localization is introduced to DPMs as
another latent variable for searching for the best poses of objects over all
pixels and (predefined) scales. In particular we construct DPNs as sequences of
such LDPM units to model the semantic and spatial relations among the
deformable parts as hierarchical composition and spatial parsing trees.
Empirically our 17-layer DPN can outperform both CapsNets and STNs
significantly on affNIST \cite{sabour2017dynamic}, for instance, by 19.19\% and
12.75\%, respectively, with better generalization and better tolerance to
affine transformations.
| null |
http://arxiv.org/abs/1805.08808v1
|
http://arxiv.org/pdf/1805.08808v1.pdf
| null |
[
"Ziming Zhang",
"Rongmei Lin",
"Alan Sullivan"
] |
[
"Object Recognition"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/equivalence-of-equilibrium-propagation-and
|
1711.08416
| null | null |
Equivalence of Equilibrium Propagation and Recurrent Backpropagation
|
Recurrent Backpropagation and Equilibrium Propagation are supervised learning
algorithms for fixed point recurrent neural networks which differ in their
second phase. In the first phase, both algorithms converge to a fixed point
which corresponds to the configuration where the prediction is made. In the
second phase, Equilibrium Propagation relaxes to another nearby fixed point
corresponding to smaller prediction error, whereas Recurrent Backpropagation
uses a side network to compute error derivatives iteratively. In this work we
establish a close connection between these two algorithms. We show that, at
every moment in the second phase, the temporal derivatives of the neural
activities in Equilibrium Propagation are equal to the error derivatives
computed iteratively by Recurrent Backpropagation in the side network. This
work shows that it is not required to have a side network for the computation
of error derivatives, and supports the hypothesis that, in biological neural
networks, temporal derivatives of neural activities may code for error signals.
|
Recurrent Backpropagation and Equilibrium Propagation are supervised learning algorithms for fixed point recurrent neural networks which differ in their second phase.
|
http://arxiv.org/abs/1711.08416v2
|
http://arxiv.org/pdf/1711.08416v2.pdf
| null |
[
"Benjamin Scellier",
"Yoshua Bengio"
] |
[] | 2017-11-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/multi-view-graph-convolutional-network-and
|
1805.08801
| null | null |
Multi-View Graph Convolutional Network and Its Applications on Neuroimage Analysis for Parkinson's Disease
|
Parkinson's Disease (PD) is one of the most prevalent neurodegenerative diseases that affects tens of millions of Americans. PD is highly progressive and heterogeneous. Quite a few studies have been conducted in recent years on predictive or disease progression modeling of PD using clinical and biomarkers data. Neuroimaging, as another important information source for neurodegenerative disease, has also attracted considerable interest from the PD community. In this paper, we propose a deep learning method based on Graph Convolutional Networks (GCN) for fusing multiple modalities of brain images in relationship prediction which is useful for distinguishing PD cases from controls. On Parkinson's Progression Markers Initiative (PPMI) cohort, our approach achieved $0.9537\pm 0.0587$ AUC, compared with $0.6443\pm 0.0223$ AUC achieved by traditional approaches such as PCA.
|
Parkinson's Disease (PD) is one of the most prevalent neurodegenerative diseases that affects tens of millions of Americans.
|
https://arxiv.org/abs/1805.08801v4
|
https://arxiv.org/pdf/1805.08801v4.pdf
| null |
[
"Xi Sheryl Zhang",
"Lifang He",
"Kun Chen",
"Yuan Luo",
"Jiayu Zhou",
"Fei Wang"
] |
[] | 2018-05-22T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A Graph Convolutional Network, or GCN, is an approach for semi-supervised learning on graph-structured data. It is based on an efficient variant of convolutional neural networks which operate directly on graphs.\r\n\r\nImage source: [Semi-Supervised Classification with Graph Convolutional Networks](https://arxiv.org/pdf/1609.02907v4.pdf)",
"full_name": "Graph Convolutional Networks",
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "The Graph Methods include neural network architectures for learning on graphs with prior structure information, popularly called as Graph Neural Networks (GNNs).\r\n\r\nRecently, deep learning approaches are being extended to work on graph-structured data, giving rise to a series of graph neural networks addressing different challenges. Graph neural networks are particularly useful in applications where data are generated from non-Euclidean domains and represented as graphs with complex relationships. \r\n\r\nSome tasks where GNNs are widely used include [node classification](https://paperswithcode.com/task/node-classification), [graph classification](https://paperswithcode.com/task/graph-classification), [link prediction](https://paperswithcode.com/task/link-prediction), and much more. \r\n\r\nIn the taxonomy presented by [Wu et al. (2019)](https://paperswithcode.com/paper/a-comprehensive-survey-on-graph-neural), graph neural networks can be divided into four categories: **recurrent graph neural networks**, **convolutional graph neural networks**, **graph autoencoders**, and **spatial-temporal graph neural networks**.\r\n\r\nImage source: [A Comprehensive Survey on Graph NeuralNetworks](https://arxiv.org/pdf/1901.00596.pdf)",
"name": "Graph Models",
"parent": null
},
"name": "Graph Convolutional Networks",
"source_title": "Semi-Supervised Classification with Graph Convolutional Networks",
"source_url": "http://arxiv.org/abs/1609.02907v4"
},
{
"code_snippet_url": null,
"description": "**Principle Components Analysis (PCA)** is an unsupervised method primary used for dimensionality reduction within machine learning. PCA is calculated via a singular value decomposition (SVD) of the design matrix, or alternatively, by calculating the covariance matrix of the data and performing eigenvalue decomposition on the covariance matrix. The results of PCA provide a low-dimensional picture of the structure of the data and the leading (uncorrelated) latent factors determining variation in the data.\r\n\r\nImage Source: [Wikipedia](https://en.wikipedia.org/wiki/Principal_component_analysis#/media/File:GaussianScatterPCA.svg)",
"full_name": "Principal Components Analysis",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Dimensionality Reduction** methods transform data from a high-dimensional space into a low-dimensional space so that the low-dimensional space retains the most important properties of the original data. Below you can find a continuously updating list of dimensionality reduction methods.",
"name": "Dimensionality Reduction",
"parent": null
},
"name": "PCA",
"source_title": null,
"source_url": null
}
] |
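The PCA entry above notes that PCA can be computed via an SVD of the design matrix. As an illustrative aside (synthetic data, not part of the dataset), a minimal NumPy sketch of that computation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))           # synthetic design matrix

Xc = X - X.mean(axis=0)                 # center each column first
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

scores = Xc @ Vt[:2].T                  # project onto top-2 principal components
explained = (S ** 2) / (S ** 2).sum()   # variance fraction per component

print(scores.shape)                     # (200, 2)
print(explained)                        # sums to 1 across all components
```

The rows of `Vt` are the principal directions (orthonormal), and squaring the singular values recovers the per-component variance that the eigendecomposition of the covariance matrix would give.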
https://paperswithcode.com/paper/super-learning-in-the-sas-system
|
1805.08058
| null | null |
Super learning in the SAS system
|
Background and objective: Stacking is an ensemble machine learning method that averages predictions from multiple other algorithms, such as generalized linear models and regression trees. An implementation of stacking, called super learning, has been developed as a general approach to supervised learning and has seen frequent usage, in part due to the availability of an R package. We develop super learning in the SAS software system using a new macro, and demonstrate its performance relative to the R package. Methods: Following previous work using the R SuperLearner package we assess the performance of super learning in a number of domains. We compare the R package with the new SAS macro in a small set of simulations assessing curve fitting in a predictive model as well in a set of 14 publicly available datasets to assess cross-validated accuracy. Results: Across the simulated data and the publicly available data, the SAS macro performed similarly to the R package, despite a different set of potential algorithms available natively in R and SAS. Conclusions: Our super learner macro performs as well as the R package at a number of tasks. Further, by extending the macro to include the use of R packages, the macro can leverage both the robust, enterprise oriented procedures in SAS and the nimble, cutting edge packages in R. In the spirit of ensemble learning, this macro extends the potential library of algorithms beyond a single software system and provides a simple avenue into machine learning in SAS.
| null |
https://arxiv.org/abs/1805.08058v3
|
https://arxiv.org/pdf/1805.08058v3.pdf
| null |
[
"Alexander P. Keil",
"Daniel Westreich",
"Jessie K Edwards",
"Stephen R Cole"
] |
[
"BIG-bench Machine Learning",
"Causal Inference",
"Ensemble Learning"
] | 2018-05-21T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-scene-perception-system-for-visually
|
1805.08798
| null | null |
A scene perception system for visually impaired based on object detection and classification using multi-modal DCNN
|
This paper presents a cost-effective scene perception system aimed towards
visually impaired individuals. We use an Odroid system integrated with a USB
camera and USB laser that can be attached on the chest. The system classifies
the detected objects along with its distance from the user and provides a voice
output. Experimental results provided in this paper use outdoor traffic scenes.
The object detection and classification framework exploits a multi-modal fusion
based faster RCNN using motion, sharpening and blurring filters for efficient
feature representation.
| null |
http://arxiv.org/abs/1805.08798v1
|
http://arxiv.org/pdf/1805.08798v1.pdf
| null |
[
"Baljit Kaur",
"Jhilik Bhattacharya"
] |
[
"General Classification",
"object-detection",
"Object Detection"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/expectation-propagation-a-probabilistic-view
|
1805.08786
| null | null |
Mean Field Theory of Activation Functions in Deep Neural Networks
|
We present a Statistical Mechanics (SM) model of deep neural networks, connecting the energy-based and the feed-forward networks (FFN) approach. We infer that FFN can be understood as performing three basic steps: encoding, representation validation and propagation. From the mean-field solution of the model, we obtain a set of natural activations -- such as Sigmoid, $\tanh$ and ReLU -- together with the state-of-the-art, Swish; this represents the expected information propagating through the network and tends to ReLU in the limit of zero noise. We study the spectrum of the Hessian on an associated classification task, showing that Swish allows for more consistent performances over a wider range of network architectures.
|
We present a Statistical Mechanics (SM) model of deep neural networks, connecting the energy-based and the feed forward networks (FFN) approach.
|
https://arxiv.org/abs/1805.08786v2
|
https://arxiv.org/pdf/1805.08786v2.pdf
| null |
[
"Mirco Milletarí",
"Thiparat Chotibut",
"Paolo E. Trevisanutto"
] |
[
"General Classification"
] | 2018-05-22T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "How do I file a claim against Expedia?\r\nHow Do I File a Claim Against Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Fast Help & Exclusive Travel Discounts!Need to file a claim with Expedia? Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now for expert assistance and unlock exclusive best deal offers on hotels, flights, and vacation packages. Get fast resolution on your travel issues while enjoying limited-time discounts that make your next trip smoother, more affordable, and stress-free. Call today—don’t miss out!\r\n\r\n\r\nHow do I file a claim against Expedia?\r\nHow Do I File a Claim Against Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Fast Help & Exclusive Travel Discounts!Need to file a claim with Expedia? Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now for expert assistance and unlock exclusive best deal offers on hotels, flights, and vacation packages. Get fast resolution on your travel issues while enjoying limited-time discounts that make your next trip smoother, more affordable, and stress-free. Call today—don’t miss out!",
"full_name": "(FiLe@Against@Claim)How do I file a claim against Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "(FiLe@Against@Claim)How do I file a claim against Expedia?",
"source_title": "Searching for Activation Functions",
"source_url": "http://arxiv.org/abs/1710.05941v2"
},
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/a-convolutional-feature-map-based-deep
|
1805.08769
| null | null |
A Convolutional Feature Map based Deep Network targeted towards Traffic Detection and Classification
|
This research mainly emphasizes on traffic detection thus essentially
involving object detection and classification. The particular work discussed
here is motivated from unsatisfactory attempts of re-using well known
pre-trained object detection networks for domain specific data. In this course,
some trivial issues leading to prominent performance drop are identified and
ways to resolve them are discussed. For example, some simple yet relevant
tricks regarding data collection and sampling prove to be very beneficial.
Also, introducing a blur net to deal with blurred real time data is another
important factor promoting performance elevation. We further study the neural
network design issues for beneficial object classification and involve shared,
region-independent convolutional features. Adaptive learning rates to deal with
saddle points are also investigated and an average covariance matrix based
pre-conditioned approach is proposed. We also introduce the use of optical flow
features to accommodate orientation information. Experimental results
demonstrate that this results in a steady rise in the performance rate.
| null |
http://arxiv.org/abs/1805.08769v1
|
http://arxiv.org/pdf/1805.08769v1.pdf
| null |
[
"Baljit Kaur",
"Jhilik Bhattacharya"
] |
[
"General Classification",
"Object",
"object-detection",
"Object Detection",
"Optical Flow Estimation"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/sparse-binary-compression-towards-distributed
|
1805.08768
| null |
B1edvs05Y7
|
Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication
|
Currently, progressively larger deep neural networks are trained on ever
growing data corpora. As this trend is only going to increase in the future,
distributed training schemes are becoming increasingly relevant. A major issue
in distributed training is the limited communication bandwidth between
contributing nodes or prohibitive communication cost in general. These
challenges become even more pressing, as the number of computation nodes
increases. To counteract this development we propose sparse binary compression
(SBC), a compression framework that allows for a drastic reduction of
communication cost for distributed training. SBC combines existing techniques
of communication delay and gradient sparsification with a novel binarization
method and optimal weight update encoding to push compression gains to new
limits. By doing so, our method also allows us to smoothly trade-off gradient
sparsity and temporal sparsity to adapt to the requirements of the learning
task. Our experiments show, that SBC can reduce the upstream communication on a
variety of convolutional and recurrent neural network architectures by more
than four orders of magnitude without significantly harming the convergence
speed in terms of forward-backward passes. For instance, we can train ResNet50
on ImageNet in the same number of iterations to the baseline accuracy, using
$\times 3531$ less bits or train it to a $1\%$ lower accuracy using $\times
37208$ less bits. In the latter case, the total upstream communication required
is cut from 125 terabytes to 3.35 gigabytes for every participating client.
| null |
http://arxiv.org/abs/1805.08768v1
|
http://arxiv.org/pdf/1805.08768v1.pdf
| null |
[
"Felix Sattler",
"Simon Wiedemann",
"Klaus-Robert Müller",
"Wojciech Samek"
] |
[
"Binarization",
"Deep Learning"
] | 2018-05-22T00:00:00 |
https://openreview.net/forum?id=B1edvs05Y7
|
https://openreview.net/pdf?id=B1edvs05Y7
| null | null |
[
{
"code_snippet_url": "",
"description": "**Gradient Sparsification** is a technique for distributed training that sparsifies stochastic gradients to reduce the communication cost, with minor increase in the number of iterations. The key idea behind our sparsification technique is to drop some coordinates of the stochastic gradient and appropriately amplify the remaining coordinates to ensure the unbiasedness of the sparsified stochastic gradient. The sparsification approach can significantly reduce the coding length of the stochastic gradient and only slightly increase the variance of the stochastic gradient.",
"full_name": "Gradient Sparsification",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "This section contains a compilation of distributed methods for scaling deep learning to very large models. There are many different strategies for scaling training across multiple devices, including:\r\n\r\n - [Data Parallel](https://paperswithcode.com/methods/category/data-parallel-methods) : for each node we use the same model parameters to do forward propagation, but we send a small batch of different data to each node, compute the gradient normally, and send it back to the main node. Once we have all the gradients, we calculate the weighted average and use this to update the model parameters.\r\n\r\n - [Model Parallel](https://paperswithcode.com/methods/category/model-parallel-methods) : for each node we assign different layers to it. During forward propagation, we start in the node with the first layers, then move onto the next, and so on. Once forward propagation is done we calculate gradients for the last node, and update model parameters for that node. Then we backpropagate onto the penultimate node, update the parameters, and so on.\r\n\r\n - Additional methods including [Hybrid Parallel](https://paperswithcode.com/methods/category/hybrid-parallel-methods), [Auto Parallel](https://paperswithcode.com/methods/category/auto-parallel-methods), and [Distributed Communication](https://paperswithcode.com/methods/category/distributed-communication).\r\n\r\nImage credit: [Jordi Torres](https://towardsdatascience.com/scalable-deep-learning-on-parallel-and-distributed-infrastructures-e5fb4a956bef).",
"name": "Distributed Methods",
"parent": null
},
"name": "Gradient Sparsification",
"source_title": "Gradient Sparsification for Communication-Efficient Distributed Optimization",
"source_url": "http://arxiv.org/abs/1710.09854v1"
}
] |
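The gradient-sparsification entry above describes dropping coordinates of the stochastic gradient and amplifying the survivors so the sparsified gradient stays unbiased. A minimal sketch of that idea with a uniform keep-probability `p` (a simplification of the paper's scheme, illustrative only):

```python
import numpy as np

def sparsify(grad, p, rng):
    # Keep each coordinate with probability p and rescale the kept ones
    # by 1/p, so E[sparsified] == grad (an unbiased estimator).
    mask = rng.random(grad.shape) < p
    return np.where(mask, grad / p, 0.0)

rng = np.random.default_rng(42)
grad = np.array([0.5, -1.0, 2.0, 0.0])
# Averaging many independent sparsifications recovers the dense gradient,
# which is what makes the scheme safe for distributed averaging.
est = np.mean([sparsify(grad, 0.25, rng) for _ in range(20000)], axis=0)
print(est)  # close to grad, confirming unbiasedness
```

The trade-off the entry mentions is visible here: each individual draw transmits only ~25% of the coordinates (shorter coding length), at the cost of higher per-draw variance.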
https://paperswithcode.com/paper/clinical-parameters-prediction-for-gait
|
1806.04627
| null | null |
Clinical Parameters Prediction for Gait Disorder Recognition
|
Being able to predict clinical parameters in order to diagnose gait disorders
in a patient is of great value in planning treatments. It is known that
\textit{decision parameters} such as cadence, step length, and walking speed
are critical in the diagnosis of gait disorders in patients. This project aims
to predict the decision parameters in two ways and, based on them, to advise
on whether a patient needs treatment or not. In the first, we use clinically
measured parameters such as ankle dorsiflexion, age, walking speed, step
length, stride length, and weight over height squared (BMI) to predict the
decision parameters. In the second, we use videos recorded during patients'
walking tests in a clinic to extract the coordinates of the patients' joints
over time and predict the decision parameters from them. Finally, given the
decision parameters, we pre-classify the gait disorder intensity of a patient
and, as a result, decide whether the patient needs treatment or not.
| null |
http://arxiv.org/abs/1806.04627v1
|
http://arxiv.org/pdf/1806.04627v1.pdf
| null |
[
"Soheil Esmaeilzadeh",
"Ouassim Khebzegga",
"Mehrad Moradshahi"
] |
[
"Prediction"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
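The first approach in the gait paper above amounts to regressing decision parameters on clinically measured features. Below is a minimal sketch assuming synthetic standardized features and a purely hypothetical treatment threshold; the paper's actual features, model and decision rule are richer.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
features = rng.normal(size=(n, 3))          # stand-ins for e.g. age, BMI, step length
true_coef = np.array([0.8, -0.3, 1.2])
cadence = features @ true_coef + 0.01 * rng.normal(size=n)  # a decision parameter

# Ordinary least squares with an intercept column
X = np.column_stack([features, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, cadence, rcond=None)

# Hypothetical decision rule on the predicted parameter
predicted = X @ coef
needs_treatment = predicted < -1.0
```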
https://paperswithcode.com/paper/adef-an-iterative-algorithm-to-construct
|
1804.07729
| null |
Hk4dFjR5K7
|
ADef: an Iterative Algorithm to Construct Adversarial Deformations
|
While deep neural networks have proven to be a powerful tool for many
recognition and classification tasks, their stability properties are still not
well understood. In the past, image classifiers have been shown to be
vulnerable to so-called adversarial attacks, which are created by additively
perturbing the correctly classified image. In this paper, we propose the ADef
algorithm to construct a different kind of adversarial attack created by
iteratively applying small deformations to the image, found through a gradient
descent step. We demonstrate our results on MNIST with convolutional neural
networks and on ImageNet with Inception-v3 and ResNet-101.
|
While deep neural networks have proven to be a powerful tool for many recognition and classification tasks, their stability properties are still not well understood.
|
http://arxiv.org/abs/1804.07729v3
|
http://arxiv.org/pdf/1804.07729v3.pdf
|
ICLR 2019
|
[
"Rima Alaifari",
"Giovanni S. Alberti",
"Tandri Gauksson"
] |
[
"Adversarial Attack",
"General Classification"
] | 2018-04-20T00:00:00 |
https://openreview.net/forum?id=Hk4dFjR5K7
|
https://openreview.net/pdf?id=Hk4dFjR5K7
|
adef-an-iterative-algorithm-to-construct-1
| null |
[
{
"code_snippet_url": "",
"description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)",
"full_name": "Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Average Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Auxiliary Classifiers** are type of architectural component that seek to improve the convergence of very deep networks. They are classifier heads we attach to layers before the end of the network. The motivation is to push useful gradients to the lower layers to make them immediately useful and improve the convergence during training by combatting the vanishing gradient problem. They are notably used in the Inception family of convolutional neural networks.",
"full_name": "Auxiliary Classifier",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "The following is a list of miscellaneous components used in neural networks.",
"name": "Miscellaneous Components",
"parent": null
},
"name": "Auxiliary Classifier",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/fd8e2064e094f301d910b91a757b860aae3e3116/torch/optim/rmsprop.py#L69-L108",
"description": "**RMSProp** is an unpublished adaptive learning rate optimizer [proposed by Geoff Hinton](http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf). The motivation is that the magnitude of gradients can differ for different weights, and can change during learning, making it hard to choose a single global learning rate. RMSProp tackles this by keeping a moving average of the squared gradient and adjusting the weight updates by this magnitude. The gradient updates are performed as:\r\n\r\n$$E\\left[g^{2}\\right]\\_{t} = \\gamma E\\left[g^{2}\\right]\\_{t-1} + \\left(1 - \\gamma\\right) g^{2}\\_{t}$$\r\n\r\n$$\\theta\\_{t+1} = \\theta\\_{t} - \\frac{\\eta}{\\sqrt{E\\left[g^{2}\\right]\\_{t} + \\epsilon}}g\\_{t}$$\r\n\r\nHinton suggests $\\gamma=0.9$, with a good default for $\\eta$ as $0.001$.\r\n\r\nImage: [Alec Radford](https://twitter.com/alecrad)",
"full_name": "RMSProp",
"introduced_year": 2013,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "RMSProp",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/6db1569c89094cf23f3bc41f79275c45e9fcb3f3/torchvision/models/inception.py#L210",
"description": "**Inception-v3 Module** is an image block used in the [Inception-v3](https://paperswithcode.com/method/inception-v3) architecture. This architecture is used on the coarsest (8 × 8) grids to promote high dimensional representations.",
"full_name": "Inception-v3 Module",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Model Blocks** are building blocks used in image models such as convolutional neural networks. Below you can find a continuously updating list of image model blocks.",
"name": "Image Model Blocks",
"parent": null
},
"name": "Inception-v3 Module",
"source_title": "Rethinking the Inception Architecture for Computer Vision",
"source_url": "http://arxiv.org/abs/1512.00567v3"
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}w_{k}}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)",
"full_name": "Label Smoothing",
"introduced_year": 1985,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Label Smoothing",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/6db1569c89094cf23f3bc41f79275c45e9fcb3f3/torchvision/models/inception.py#L64",
"description": "**Inception-v3** is a convolutional neural network architecture from the Inception family that makes several improvements including using [Label Smoothing](https://paperswithcode.com/method/label-smoothing), Factorized 7 x 7 convolutions, and the use of an auxiliary classifer to propagate label information lower down the network (along with the use of [batch normalization](https://paperswithcode.com/method/batch-normalization) for layers in the sidehead).",
"full_name": "Inception-v3",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutional Neural Networks** are a class of neural networks that extract features from images and other grid-structured data, using convolution as their primary operator. Below you can find a continuously updating list of convolutional neural networks.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "Inception-v3",
"source_title": "Rethinking the Inception Architecture for Computer Vision",
"source_url": "http://arxiv.org/abs/1512.00567v3"
}
] |
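Among the methods listed above, the RMSProp entry states its update equations explicitly; they translate almost line-for-line into NumPy. The quadratic test objective below is an illustrative addition, and the learning rate is raised from the entry's suggested 0.001 only to shorten the toy run.

```python
import numpy as np

def rmsprop_step(theta, grad, sq_avg, lr=0.001, gamma=0.9, eps=1e-8):
    """One RMSProp update, following the entry's equations:
       E[g^2]_t    = gamma * E[g^2]_{t-1} + (1 - gamma) * g_t^2
       theta_{t+1} = theta_t - lr / sqrt(E[g^2]_t + eps) * g_t
    """
    sq_avg = gamma * sq_avg + (1 - gamma) * grad ** 2
    theta = theta - lr / np.sqrt(sq_avg + eps) * grad
    return theta, sq_avg

# Minimize the toy quadratic f(theta) = 0.5 * ||theta||^2, whose gradient is theta
theta = np.array([1.0, -2.0])
sq_avg = np.zeros_like(theta)
for _ in range(5000):
    theta, sq_avg = rmsprop_step(theta, theta, sq_avg, lr=0.01)
```

Note how the moving average of squared gradients normalizes the step size per coordinate, which is exactly the motivation given in the entry.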
https://paperswithcode.com/paper/on-semi-supervised-learning
|
1805.09180
| null | null |
On semi-supervised learning
|
Semi-supervised learning deals with the problem of how, if possible, to take advantage of a huge amount of unclassified data to perform classification in situations when, typically, there is little labeled data. Even though this is not always possible (it depends on how useful knowing the distribution of the unlabeled data would be for inferring the labels), several algorithms have been proposed recently. A new algorithm is proposed that, under almost necessary conditions, attains asymptotically the performance of the best theoretical rule as the amount of unlabeled data tends to infinity. The set of necessary assumptions, although reasonable, shows that semi-supervised classification only works for very well conditioned problems. The focus is on understanding when and why semi-supervised learning works when the size of the initial training sample remains fixed and the asymptotics are in the size of the unlabeled data. The performance of the algorithm is assessed on the well-known "Isolet" real dataset of phonemes, where a strong dependence on the choice of the initial training sample is shown.
| null |
https://arxiv.org/abs/1805.09180v3
|
https://arxiv.org/pdf/1805.09180v3.pdf
| null |
[
"Alejandro Cholaquidis",
"Ricardo Fraiman",
"Mariela Sued"
] |
[
"General Classification"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
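The abstract above does not spell out the proposed algorithm, so the following is only a generic self-training illustration of the semi-supervised idea: fit on the few labeled points, pseudo-label the unlabeled points the current classifier is most confident about, and iterate. The nearest-centroid classifier and the cluster geometry are illustrative assumptions, not the paper's method.

```python
import numpy as np

def self_train(X_lab, y_lab, X_unlab, rounds=5):
    """Generic self-training with a two-class nearest-centroid classifier:
    each round pseudo-labels the unlabeled points with the largest margin
    between the two class centroids and adds them to the training set."""
    X, y, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(rounds):
        if len(pool) == 0:
            break
        c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
        d0 = np.linalg.norm(pool - c0, axis=1)
        d1 = np.linalg.norm(pool - c1, axis=1)
        take = np.argsort(np.abs(d0 - d1))[-max(1, len(pool) // rounds):]
        X = np.vstack([X, pool[take]])
        y = np.concatenate([y, (d1[take] < d0[take]).astype(int)])
        pool = np.delete(pool, take, axis=0)
    return X, y

rng = np.random.default_rng(2)
X0 = rng.normal(loc=-2.0, size=(30, 2))      # class 0 cluster
X1 = rng.normal(loc=+2.0, size=(30, 2))      # class 1 cluster
X_lab = np.vstack([X0[:2], X1[:2]])          # only four labeled points
y_lab = np.array([0, 0, 1, 1])
X_unlab = np.vstack([X0[2:], X1[2:]])        # 56 unlabeled points
X_fit, y_fit = self_train(X_lab, y_lab, X_unlab)
```

When the clusters are this well separated the pseudo-labels are almost all correct, which mirrors the paper's point that such schemes work only for well conditioned problems.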
https://paperswithcode.com/paper/fake-news-detection-with-deep-diffusive
|
1805.08751
| null | null |
FAKEDETECTOR: Effective Fake News Detection with Deep Diffusive Neural Network
|
In recent years, owing to the booming development of online social networks, fake news created for various commercial and political purposes has been appearing in large numbers and spreading widely in the online world. With deceptive words, online social network users can be easily misled by such fake news, which has already had a tremendous effect on offline society. An important goal in improving the trustworthiness of information in online social networks is to identify fake news in a timely manner. This paper investigates the principles, methodologies and algorithms for detecting fake news articles, creators and subjects from online social networks, and evaluates the corresponding performance. It addresses the challenges introduced by the unknown characteristics of fake news and the diverse connections among news articles, creators and subjects. The paper introduces a novel automatic fake news credibility inference model, namely FAKEDETECTOR. Based on a set of explicit and latent features extracted from the textual information, FAKEDETECTOR builds a deep diffusive network model to learn the representations of news articles, creators and subjects simultaneously. Extensive experiments on a real-world fake news dataset compare FAKEDETECTOR with several state-of-the-art models, and the experimental results demonstrate the effectiveness of the proposed model.
|
This paper aims at investigating the principles, methodologies and algorithms for detecting fake news articles, creators and subjects from online social networks and evaluating the corresponding performance.
|
https://arxiv.org/abs/1805.08751v2
|
https://arxiv.org/pdf/1805.08751v2.pdf
| null |
[
"Jiawei Zhang",
"Bowen Dong",
"Philip S. Yu"
] |
[
"Articles",
"Fake News Detection"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-tropical-approach-to-neural-networks-with
|
1805.08749
| null | null |
A Tropical Approach to Neural Networks with Piecewise Linear Activations
|
We present a new, unifying approach following some recent developments on the
complexity of neural networks with piecewise linear activations. We treat
neural network layers with piecewise linear activations as tropical
polynomials, which generalize polynomials in the so-called $(\max, +)$ or
tropical algebra, with possibly real-valued exponents. Motivated by the
discussion in (arXiv:1402.1869), this approach enables us to refine their upper
bounds on linear regions of layers with ReLU or leaky ReLU activations to
$\min\left\{ 2^m, \sum_{j=0}^n \binom{m}{j} \right\}$, where $n, m$ are the
number of inputs and outputs, respectively. Additionally, we recover their
upper bounds on maxout layers. Our work follows a novel path, exclusively under
the lens of tropical geometry, which is independent of the improvements
reported in (arXiv:1611.01491, arXiv:1711.02114). Finally, we present a
geometric approach for effective counting of linear regions using random
sampling in order to avoid the computational overhead of exact counting
approaches.
| null |
http://arxiv.org/abs/1805.08749v2
|
http://arxiv.org/pdf/1805.08749v2.pdf
| null |
[
"Vasileios Charisopoulos",
"Petros Maragos"
] |
[] | 2018-05-22T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://gist.github.com/daskol/05439f018465c8fb42ae547b8cc8a77b",
"description": "The **Maxout Unit** is a generalization of the [ReLU](https://paperswithcode.com/method/relu) and the [leaky ReLU](https://paperswithcode.com/method/leaky-relu) functions. It is a piecewise linear function that returns the maximum of the inputs, designed to be used in conjunction with [dropout](https://paperswithcode.com/method/dropout). Both ReLU and leaky ReLU are special cases of Maxout. \r\n\r\n$$f\\left(x\\right) = \\max\\left(w^{T}\\_{1}x + b\\_{1}, w^{T}\\_{2}x + b\\_{2}\\right)$$\r\n\r\nThe main drawback of Maxout is that it is computationally expensive as it doubles the number of parameters for each neuron.",
"full_name": "Maxout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. Below you can find a continuously updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "Maxout",
"source_title": "Maxout Networks",
"source_url": "http://arxiv.org/abs/1302.4389v4"
},
{
"code_snippet_url": "",
"description": "**Rectified Linear Units**, or **ReLUs**, are a type of activation function that are linear in the positive dimension but zero in the negative dimension: $f\\left(x\\right) = \\max\\left(0, x\\right)$. The kink at zero is the source of the non-linearity, and linearity in the positive dimension helps prevent saturation of gradients.",
"full_name": "Rectified Linear Units",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. Below you can find a continuously updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Leaky ReLU** is an activation function based on the ReLU, but with a small slope for negative values instead of a flat slope: $f\\left(x\\right) = \\max\\left(\\alpha{x}, x\\right)$ for a small $\\alpha$ such as $0.01$, so that some gradient still flows for negative inputs.",
"full_name": "Leaky ReLU",
"introduced_year": 2014,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. Below you can find a continuously updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "Leaky ReLU",
"source_title": null,
"source_url": null
}
] |
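The random-sampling approach to counting linear regions mentioned in the abstract above can be sketched directly: every input to a ReLU layer induces an activation (sign) pattern, distinct patterns correspond to distinct linear regions, so sampling inputs and counting distinct patterns yields a lower bound on the region count. The layer size, sampling box and seed below are illustrative choices.

```python
import numpy as np

def activation_pattern(W, b, x):
    """Sign pattern of the pre-activations: which ReLUs fire for input x.
    Each pattern corresponds to one linear region of x -> relu(W @ x + b)."""
    return tuple((W @ x + b > 0).astype(int))

def estimate_regions(W, b, n_samples=20000, box=3.0, seed=0):
    """Monte Carlo lower bound on the number of linear regions inside
    [-box, box]^n: sample inputs and count distinct activation patterns."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-box, box, size=(n_samples, W.shape[1]))
    return len({activation_pattern(W, b, x) for x in pts})

# A layer with n = 2 inputs and m = 3 ReLU units: the paper's refined bound is
# min{2^m, sum_{j=0}^{n} C(m, j)} = min{8, 1 + 3 + 3} = 7 regions.
rng = np.random.default_rng(1)
W = rng.normal(size=(3, 2))
b = rng.normal(size=3)
count = estimate_regions(W, b)
```

The sampled count can never exceed the combinatorial bound, since three lines partition the plane into at most seven full-dimensional cells.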
https://paperswithcode.com/paper/egocoder-intelligent-program-synthesis-with
|
1805.08747
| null | null |
EgoCoder: Intelligent Program Synthesis with Hierarchical Sequential Neural Network Model
|
Programming has been an important skill for researchers and practitioners in
computer science and other related areas. To learn basic programming skills,
long-term systematic training is usually required for beginners. According to a
recent market report, the computer software market is expected to continue
expanding at an accelerating speed, but the market supply of qualified software
developers can hardly meet such a huge demand. In recent years, the surge of
text generation research works provides the opportunities to address such a
dilemma through automatic program synthesis. In this paper, we attempt to
solve the program synthesis problem from a data mining perspective.
To address the problem, a novel generative model, namely EgoCoder, will be
introduced in this paper. EgoCoder effectively parses program code into
abstract syntax trees (ASTs), where the tree nodes will contain the program
code/comment content and the tree structure can capture the program logic
flows. Based on a new unit model called Hsu, EgoCoder can effectively capture
both the hierarchical and sequential patterns in the program ASTs. Extensive
experiments compare EgoCoder with state-of-the-art text generation
methods, and the experimental results demonstrate the
effectiveness of EgoCoder in addressing the program synthesis problem.
| null |
http://arxiv.org/abs/1805.08747v1
|
http://arxiv.org/pdf/1805.08747v1.pdf
| null |
[
"Jiawei Zhang",
"Limeng Cui",
"Fisher B. Gouza"
] |
[
"Program Synthesis",
"Text Generation"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
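EgoCoder's preprocessing step, parsing program code into an AST whose structure captures the program logic flow, can be illustrated with Python's standard ast module (which, unlike the paper's richer trees, drops comments). The toy function and the flattening traversal are illustrative; the Hsu unit model that learns from the trees is not reproduced here.

```python
import ast

source = """
def greet(name):
    return "Hello, " + name
"""

tree = ast.parse(source)  # program code -> abstract syntax tree

def node_types(node):
    """Pre-order walk: the node's type followed by its children, exposing
    the hierarchical and sequential structure of the program."""
    return [type(node).__name__] + [t for child in ast.iter_child_nodes(node)
                                    for t in node_types(child)]

types = node_types(tree)
```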
https://paperswithcode.com/paper/cascadecnn-pushing-the-performance-limits-of
|
1805.08743
| null | null |
CascadeCNN: Pushing the performance limits of quantisation
|
This work presents CascadeCNN, an automated toolflow that pushes the
quantisation limits of any given CNN model, to perform high-throughput
inference by exploiting the computation time-accuracy trade-off. Without the
need for retraining, a two-stage architecture tailored for any given FPGA
device is generated, consisting of a low- and a high-precision unit. A
confidence evaluation unit is employed between them to identify misclassified
cases at run time and forward them to the high-precision unit or terminate
computation. Experiments demonstrate that CascadeCNN achieves a performance
boost of up to 55% for VGG-16 and 48% for AlexNet over the baseline design for
the same resource budget and accuracy.
| null |
http://arxiv.org/abs/1805.08743v1
|
http://arxiv.org/pdf/1805.08743v1.pdf
| null |
[
"Alexandros Kouris",
"Stylianos I. Venieris",
"Christos-Savvas Bouganis"
] |
[] | 2018-05-22T00:00:00 | null | null | null | null |
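The two-stage idea in the CascadeCNN abstract above, run a cheap low-precision stage first and forward only low-confidence inputs to the high-precision stage, can be sketched with toy linear "models". The softmax-max confidence metric, the 0.9 threshold and the rounding-based quantisation are illustrative stand-ins for the toolflow's actual units.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cascade_predict(x, low_model, high_model, threshold=0.9):
    """Two-stage cascade: keep the low-precision prediction when its top-class
    confidence clears the threshold, otherwise forward the input to the
    high-precision stage (the real toolflow may instead terminate early)."""
    probs = softmax(low_model(x))
    confident = probs.max(axis=-1) >= threshold
    preds = probs.argmax(axis=-1)
    if not confident.all():
        hard = ~confident                          # misclassification suspects
        preds[hard] = softmax(high_model(x[hard])).argmax(axis=-1)
    return preds, confident

rng = np.random.default_rng(3)
W_high = rng.normal(size=(4, 10))                  # "full-precision" weights
W_low = np.round(W_high * 4) / 4                   # coarsely quantised copy
x = rng.normal(size=(32, 4))
preds, confident = cascade_predict(x, lambda v: v @ W_low, lambda v: v @ W_high)
```

Only the low-confidence subset pays the cost of the high-precision stage, which is the source of the throughput gain the abstract reports.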
[
{
"code_snippet_url": "",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/1c5c289b6218eb1026dcb5fd9738231401cfccea/torch/nn/modules/normalization.py#L13",
"description": "**Local Response Normalization** is a normalization layer that implements the idea of lateral inhibition. Lateral inhibition is a concept in neurobiology that refers to the phenomenon of an excited neuron inhibiting its neighbours: this leads to a peak in the form of a local maximum, creating contrast in that area and increasing sensory perception. In practice, we can either normalize within the same channel or normalize across channels when we apply LRN to convolutional neural networks.\r\n\r\n$$ b_{c} = a_{c}\\left(k + \\frac{\\alpha}{n}\\sum_{c'=\\max(0, c-n/2)}^{\\min(N-1,c+n/2)}a_{c'}^2\\right)^{-\\beta} $$\r\n\r\nWhere the size is the number of neighbouring channels used for normalization, $\\alpha$ is multiplicative factor, $\\beta$ an exponent and $k$ an additive factor",
"full_name": "Local Response Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Local Response Normalization",
"source_title": "ImageNet Classification with Deep Convolutional Neural Networks",
"source_url": "http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks"
},
{
"code_snippet_url": "https://github.com/prlz77/ResNeXt.pytorch/blob/39fb8d03847f26ec02fb9b880ecaaa88db7a7d16/models/model.py#L42",
"description": "A **Grouped Convolution** uses a group of convolutions - multiple kernels per layer - resulting in multiple channel outputs per layer. This leads to wider networks helping a network learn a varied set of low level and high level features. The original motivation of using Grouped Convolutions in [AlexNet](https://paperswithcode.com/method/alexnet) was to distribute the model over multiple GPUs as an engineering compromise. But later, with models such as [ResNeXt](https://paperswithcode.com/method/resnext), it was shown this module could be used to improve classification accuracy. Specifically by exposing a new dimension through grouped convolutions, *cardinality* (the size of set of transformations), we can increase accuracy by increasing it.",
"full_name": "Grouped Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Grouped Convolution",
"source_title": "ImageNet Classification with Deep Convolutional Neural Networks",
"source_url": "http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks"
},
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/dansuh17/alexnet-pytorch/blob/d0c1b1c52296ffcbecfbf5b17e1d1685b4ca6744/model.py#L40",
"description": "To make a reservation or communicate with Expedia, the quickest option is typically to call their customer service at +1-805-330-4056 or +1-805-330-4056. You can also use the live chat feature on their website or app, or contact them via social media.ggfdf\r\n\r\n\r\nHow do I speak to a person at Expedia?How do I speak to a person at Expedia?To make a reservation or communicate with Expedia, the quickest option is typically to call their customer service at +1-805-330-4056 or +1-805-330-4056. You can also use the live chat feature on their website or app, or contact them via social media.To make a reservation or communicate with Expedia, the quickest option is typically to call their customer service at +1-805-330-4056 or +1-805-330-4056. You can also use the live chat feature on their website or app, or contact them via social media.\r\n\r\n\r\n\r\nTo make a reservation or communicate with Expedia, the quickest option is typically to call their customer service at +1-805-330-4056 or +1-805-330-4056. You can also use the live chat feature on their website or app, or contact them via social media.To make a reservation or communicate with Expedia, the quickest option is typically to call their customer service at +1-805-330-4056 or +1-805-330-4056. You can also use the live chat feature on their website or app, or contact them via social media.To make a reservation or communicate with Expedia, the quickest option is typically to call their customer service at +1-805-330-4056 or +1-805-330-4056. You can also use the live chat feature on their website or app, or contact them via social media.chgd",
"full_name": "How do I speak to a person at Expedia?-/+/",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "If you have questions or want to make special travel arrangements, you can make them online or call ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. For hearing or speech impaired assistance dial 711 to be connected through the National Relay Service.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "How do I speak to a person at Expedia?-/+/",
"source_title": "ImageNet Classification with Deep Convolutional Neural Networks",
"source_url": "http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks"
}
] |
https://paperswithcode.com/paper/unibuckernel-a-kernel-based-learning-method
|
1803.07602
| null | null |
UnibucKernel: A kernel-based learning method for complex word identification
|
In this paper, we present a kernel-based learning approach for the 2018
Complex Word Identification (CWI) Shared Task. Our approach is based on
combining multiple low-level features, such as character n-grams, with
high-level semantic features that are either automatically learned using word
embeddings or extracted from a lexical knowledge base, namely WordNet. After
feature extraction, we employ a kernel method for the learning phase. The
feature matrix is first transformed into a normalized kernel matrix. For the
binary classification task (simple versus complex), we employ Support Vector
Machines. For the regression task, in which we have to predict the complexity
level of a word (a word is more complex if it is labeled as complex by more
annotators), we employ v-Support Vector Regression. We applied our approach
only on the three English data sets containing documents from Wikipedia,
WikiNews and News domains. Our best result during the competition was the third
place on the English Wikipedia data set. However, in this paper, we also report
better post-competition results.
| null |
http://arxiv.org/abs/1803.07602v4
|
http://arxiv.org/pdf/1803.07602v4.pdf
|
WS 2018 6
|
[
"Andrei M. Butnaru",
"Radu Tudor Ionescu"
] |
[
"Binary Classification",
"Complex Word Identification",
"regression",
"Word Embeddings"
] | 2018-03-20T00:00:00 |
https://aclanthology.org/W18-0519
|
https://aclanthology.org/W18-0519.pdf
|
unibuckernel-a-kernel-based-learning-method-1
| null |
[] |
https://paperswithcode.com/paper/adversarially-robust-training-through
|
1805.08736
| null |
HyxBpoR5tm
|
Adversarially Robust Training through Structured Gradient Regularization
|
We propose a novel data-dependent structured gradient regularizer to increase
the robustness of neural networks vis-a-vis adversarial perturbations. Our
regularizer can be derived as a controlled approximation from first principles,
leveraging the fundamental link between training with noise and regularization.
It adds very little computational overhead during learning and is simple to
implement generically in standard deep learning frameworks. Our experiments
provide strong evidence that structured gradient regularization can act as an
effective first line of defense against attacks based on low-level signal
corruption.
| null |
http://arxiv.org/abs/1805.08736v1
|
http://arxiv.org/pdf/1805.08736v1.pdf
| null |
[
"Kevin Roth",
"Aurelien Lucchi",
"Sebastian Nowozin",
"Thomas Hofmann"
] |
[] | 2018-05-22T00:00:00 |
https://openreview.net/forum?id=HyxBpoR5tm
|
https://openreview.net/pdf?id=HyxBpoR5tm
| null | null |
[] |
https://paperswithcode.com/paper/efficient-stochastic-gradient-descent-for
|
1805.08728
| null | null |
Efficient Stochastic Gradient Descent for Learning with Distributionally Robust Optimization
|
Distributionally robust optimization (DRO) problems are increasingly seen as a viable method to train machine learning models for improved model generalization. These min-max formulations, however, are more difficult to solve. We therefore provide a new stochastic gradient descent algorithm to efficiently solve this DRO formulation. Our approach applies gradient descent to the outer minimization formulation and estimates the gradient of the inner maximization based on a sample average approximation. The latter uses a subset of the data in each iteration, progressively increasing the subset size to ensure convergence. Theoretical results include establishing the optimal manner for growing the support size to balance a fundamental tradeoff between stochastic error and computational effort. Empirical results demonstrate the significant benefits of our approach over previous work, and also illustrate how learning with DRO can improve generalization.
| null |
https://arxiv.org/abs/1805.08728v2
|
https://arxiv.org/pdf/1805.08728v2.pdf
| null |
[
"Soumyadip Ghosh",
"Mark Squillante",
"Ebisa Wollega"
] |
[] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/adversarial-training-of-word2vec-for-basket
|
1805.08720
| null | null |
Adversarial Training of Word2Vec for Basket Completion
|
In recent years, the Word2Vec model trained with the Negative Sampling loss
function has shown state-of-the-art results in a number of machine learning
tasks, including language modeling tasks, such as word analogy and word
similarity, and in recommendation tasks, through Prod2Vec, an extension that
applies to modeling user shopping activity and user preferences. Several
methods that aim to improve upon the standard Negative Sampling loss have been
proposed. In our paper we pursue more sophisticated Negative Sampling, by
leveraging ideas from the field of Generative Adversarial Networks (GANs), and
propose Adversarial Negative Sampling. We build upon the recent progress made
in stabilizing the training objective of GANs in the discrete data setting, and
introduce a new GAN-Word2Vec model. We evaluate our model on the task of basket
completion, and show significant improvements in performance over Word2Vec
trained using standard loss functions, including Noise Contrastive Estimation
and Negative Sampling.
| null |
http://arxiv.org/abs/1805.08720v1
|
http://arxiv.org/pdf/1805.08720v1.pdf
| null |
[
"Ugo Tanielian",
"Mike Gartrell",
"Flavian vasile"
] |
[
"Language Modeling",
"Language Modelling",
"Word Similarity"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/parsimonious-bayesian-deep-networks
|
1805.08719
| null | null |
Parsimonious Bayesian deep networks
|
Combining Bayesian nonparametrics and a forward model selection strategy, we
construct parsimonious Bayesian deep networks (PBDNs) that infer
capacity-regularized network architectures from the data and require neither
cross-validation nor fine-tuning when training the model. One of the two
essential components of a PBDN is the development of a special infinite-wide
single-hidden-layer neural network, whose number of active hidden units can be
inferred from the data. The other one is the construction of a greedy
layer-wise learning algorithm that uses a forward model selection criterion to
determine when to stop adding another hidden layer. We develop both Gibbs
sampling and stochastic gradient descent based maximum a posteriori inference
for PBDNs, providing state-of-the-art classification accuracy and interpretable
data subtypes near the decision boundaries, while maintaining low computational
complexity for out-of-sample prediction.
|
Combining Bayesian nonparametrics and a forward model selection strategy, we construct parsimonious Bayesian deep networks (PBDNs) that infer capacity-regularized network architectures from the data and require neither cross-validation nor fine-tuning when training the model.
|
http://arxiv.org/abs/1805.08719v3
|
http://arxiv.org/pdf/1805.08719v3.pdf
|
NeurIPS 2018 12
|
[
"Mingyuan Zhou"
] |
[
"Model Selection"
] | 2018-05-22T00:00:00 |
http://papers.nips.cc/paper/7581-parsimonious-bayesian-deep-networks
|
http://papers.nips.cc/paper/7581-parsimonious-bayesian-deep-networks.pdf
|
parsimonious-bayesian-deep-networks-1
| null |
[] |
https://paperswithcode.com/paper/automatic-adaptation-of-person-association
|
1805.08717
| null | null |
Self-supervised Multi-view Person Association and Its Applications
|
Reliable markerless motion tracking of people participating in a complex group activity from multiple moving cameras is challenging due to frequent occlusions, strong viewpoint and appearance variations, and asynchronous video streams. To solve this problem, reliable association of the same person across distant viewpoints and temporal instances is essential. We present a self-supervised framework to adapt a generic person appearance descriptor to the unlabeled videos by exploiting motion tracking, mutual exclusion constraints, and multi-view geometry. The adapted discriminative descriptor is used in a tracking-by-clustering formulation. We validate the effectiveness of our descriptor learning on WILDTRACK [14] and three new complex social scenes captured by multiple cameras with up to 60 people "in the wild". We report significant improvement in association accuracy (up to 18%) and stable and coherent 3D human skeleton tracking (5 to 10 times) over the baseline. Using the reconstructed 3D skeletons, we cut the input videos into a multi-angle video where the image of a specified person is shown from the best visible front-facing camera. Our algorithm detects inter-human occlusion to determine the camera switching moment while still maintaining the flow of the action well.
| null |
https://arxiv.org/abs/1805.08717v3
|
https://arxiv.org/pdf/1805.08717v3.pdf
| null |
[
"Minh Vo",
"Ersin Yumer",
"Kalyan Sunkavalli",
"Sunil Hadap",
"Yaser Sheikh",
"Srinivasa Narasimhan"
] |
[
"Clustering"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/normalization-of-transliterated-words-in-code
|
1805.08701
| null | null |
Normalization of Transliterated Words in Code-Mixed Data Using Seq2Seq Model & Levenshtein Distance
|
Building tools for code-mixed data is rapidly gaining popularity in the NLP
research community as such data is exponentially rising on social media.
Working with code-mixed data contains several challenges, especially due to
grammatical inconsistencies and spelling variations in addition to all the
previous known challenges for social media scenarios. In this article, we
present a novel architecture focusing on normalizing phonetic typing
variations, which is commonly seen in code-mixed data. One of the main features
of our architecture is that in addition to normalizing, it can also be utilized
for back-transliteration and word identification in some cases. Our model
achieved an accuracy of 90.27% on the test data.
| null |
http://arxiv.org/abs/1805.08701v1
|
http://arxiv.org/pdf/1805.08701v1.pdf
|
WS 2018 11
|
[
"Soumil Mandal",
"Karthick Nanmaran"
] |
[
"Transliteration"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/learning-to-repair-software-vulnerabilities
|
1805.07475
| null | null |
Learning to Repair Software Vulnerabilities with Generative Adversarial Networks
|
Motivated by the problem of automated repair of software vulnerabilities, we
propose an adversarial learning approach that maps from one discrete source
domain to another target domain without requiring paired labeled examples or
source and target domains to be bijections. We demonstrate that the proposed
adversarial learning approach is an effective technique for repairing software
vulnerabilities, performing close to seq2seq approaches that require labeled
pairs. The proposed Generative Adversarial Network approach is
application-agnostic in that it can be applied to other problems similar to
code repair, such as grammar correction or sentiment translation.
| null |
http://arxiv.org/abs/1805.07475v3
|
http://arxiv.org/pdf/1805.07475v3.pdf
|
NeurIPS 2018 12
|
[
"Jacob Harer",
"Onur Ozdemir",
"Tomo Lazovich",
"Christopher P. Reale",
"Rebecca L. Russell",
"Louis Y. Kim",
"Peter Chin"
] |
[
"Code Repair",
"Generative Adversarial Network",
"Translation"
] | 2018-05-18T00:00:00 |
http://papers.nips.cc/paper/8018-learning-to-repair-software-vulnerabilities-with-generative-adversarial-networks
|
http://papers.nips.cc/paper/8018-learning-to-repair-software-vulnerabilities-with-generative-adversarial-networks.pdf
|
learning-to-repair-software-vulnerabilities-1
| null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Seq2Seq**, or **Sequence To Sequence**, is a model used in sequence prediction tasks, such as language modelling and machine translation. The idea is to use one [LSTM](https://paperswithcode.com/method/lstm), the *encoder*, to read the input sequence one timestep at a time, to obtain a large fixed dimensional vector representation (a context vector), and then to use another LSTM, the *decoder*, to extract the output sequence\r\nfrom that vector. The second LSTM is essentially a recurrent neural network language model except that it is conditioned on the input sequence.\r\n\r\n(Note that this page refers to the original seq2seq not general sequence-to-sequence models)",
"full_name": "Sequence to Sequence",
"introduced_year": 2000,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Sequence To Sequence Models",
"parent": null
},
"name": "Seq2Seq",
"source_title": "Sequence to Sequence Learning with Neural Networks",
"source_url": "http://arxiv.org/abs/1409.3215v3"
}
] |
https://paperswithcode.com/paper/dealing-with-categorical-and-integer-valued
|
1805.03463
| null | null |
Dealing with Categorical and Integer-valued Variables in Bayesian Optimization with Gaussian Processes
|
Bayesian Optimization (BO) methods are useful for optimizing functions that
are expensive to evaluate, lack an analytical expression and whose
evaluations can be contaminated by noise. These methods rely on a probabilistic
model of the objective function, typically a Gaussian process (GP), upon which
an acquisition function is built. The acquisition function guides the
optimization process and measures the expected utility of performing an
evaluation of the objective at a new point. GPs assume continuous input
variables. When this is not the case, for example when some of the input
variables take categorical or integer values, one has to introduce extra
approximations. Consider a suggested input location taking values in the real
line. Before doing the evaluation of the objective, a common approach is to use
a one hot encoding approximation for categorical variables, or to round to the
closest integer, in the case of integer-valued variables. We show that this can
lead to problems in the optimization process and describe a more principled
approach to account for input variables that are categorical or integer-valued.
We illustrate in both synthetic and real experiments the utility of our
approach, which significantly improves the results of standard BO methods using
Gaussian processes on problems with categorical or integer-valued variables.
|
We show that this can lead to problems in the optimization process and describe a more principled approach to account for input variables that are categorical or integer-valued.
|
http://arxiv.org/abs/1805.03463v2
|
http://arxiv.org/pdf/1805.03463v2.pdf
| null |
[
"Eduardo C. Garrido-Merchán",
"Daniel Hernández-Lobato"
] |
[
"Bayesian Optimization",
"Gaussian Processes"
] | 2018-05-09T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Gaussian Processes** are non-parametric models for approximating functions. They rely upon a measure of similarity between points (the kernel function) to predict the value for an unseen point from training data. The models are fully probabilistic so uncertainty bounds are baked in with the model.\r\n\r\nImage Source: Gaussian Processes for Machine Learning, C. E. Rasmussen & C. K. I. Williams",
"full_name": "Gaussian Process",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "Gaussian Process",
"source_title": null,
"source_url": null
}
] |
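The approximation the abstract warns about (rounding to the closest integer only when evaluating the objective) can be contrasted with placing the rounding transformation inside the GP covariance itself, so the surrogate is constant over inputs that round to the same integer. A minimal sketch, assuming an RBF kernel and a hypothetical `int_dims` argument naming the integer-valued dimensions (illustrative, not the paper's code):

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0):
    """Squared-exponential kernel on (possibly transformed) inputs."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def kernel_with_rounding(X1, X2, int_dims, lengthscale=1.0):
    """Round integer-valued dimensions *inside* the kernel, so that two
    inputs rounding to the same integer are treated as identical by the GP."""
    X1, X2 = X1.copy(), X2.copy()
    X1[:, int_dims] = np.round(X1[:, int_dims])
    X2[:, int_dims] = np.round(X2[:, int_dims])
    return rbf(X1, X2, lengthscale)

# Two points that round to the same integer are perfectly correlated.
X = np.array([[2.2], [1.8]])
K = kernel_with_rounding(X, X, int_dims=[0])
print(K[0, 1])  # 1.0: both inputs round to 2
```

With the naive approach the GP would assign different predictions to 1.8 and 2.2 even though the objective can only ever be evaluated at 2, which is the mismatch the paper addresses.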
https://paperswithcode.com/paper/aesthetics-assessment-of-images-containing
|
1805.08685
| null | null |
Aesthetics Assessment of Images Containing Faces
|
Recent research has widely explored the problem of aesthetics assessment of
images with generic content. However, few approaches have been specifically
designed to predict the aesthetic quality of images containing human faces,
which make up a massive portion of photos in the web. This paper introduces a
method for aesthetic quality assessment of images with faces. We exploit three
different Convolutional Neural Networks to encode information regarding
perceptual quality, global image aesthetics, and facial attributes; then, a
model is trained to combine these features to explicitly predict the aesthetics
of images containing faces. Experimental results show that our approach
outperforms existing methods for both binary, i.e. low/high, and continuous
aesthetic score prediction on four different databases in the state-of-the-art.
| null |
http://arxiv.org/abs/1805.08685v1
|
http://arxiv.org/pdf/1805.08685v1.pdf
| null |
[
"Simone Bianco",
"Luigi Celona",
"Raimondo Schettini"
] |
[] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/convexity-shape-prior-for-level-set-based
|
1805.08676
| null | null |
Convexity Shape Prior for Level Set based Image Segmentation Method
|
We propose a geometric convexity shape prior preservation method for
variational level set based image segmentation methods. Our method is built
upon the fact that the level set of a convex signed distanced function must be
convex. This property enables us to transfer a complicated geometrical
convexity prior into a simple inequality constraint on the function. An active
set based Gauss-Seidel iteration is used to handle this constrained
minimization problem to get an efficient algorithm. We apply our method to
region and edge based level set segmentation models including Chan-Vese (CV)
model with guarantee that the segmented region will be convex. Experimental
results show the effectiveness and quality of the proposed model and algorithm.
| null |
http://arxiv.org/abs/1805.08676v1
|
http://arxiv.org/pdf/1805.08676v1.pdf
| null |
[
"Shi Yan",
"Xue-Cheng Tai",
"Jun Liu",
"Hai-yang Huang"
] |
[
"Image Segmentation",
"Segmentation",
"Semantic Segmentation"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/information-constraints-on-auto-encoding
|
1805.08672
| null | null |
Information Constraints on Auto-Encoding Variational Bayes
|
Parameterizing the approximate posterior of a generative model with neural
networks has become a common theme in recent machine learning research. While
providing appealing flexibility, this approach makes it difficult to impose or
assess structural constraints such as conditional independence. We propose a
framework for learning representations that relies on Auto-Encoding Variational
Bayes and whose search space is constrained via kernel-based measures of
independence. In particular, our method employs the $d$-variable
Hilbert-Schmidt Independence Criterion (dHSIC) to enforce independence between
the latent representations and arbitrary nuisance factors. We show how to apply
this method to a range of problems, including the problems of learning
invariant representations and the learning of interpretable representations. We
also present a full-fledged application to single-cell RNA sequencing
(scRNA-seq). In this setting the biological signal is mixed in complex ways
with sequencing errors and sampling effects. We show that our method
out-performs the state-of-the-art in this domain.
| null |
http://arxiv.org/abs/1805.08672v4
|
http://arxiv.org/pdf/1805.08672v4.pdf
|
NeurIPS 2018 12
|
[
"Romain Lopez",
"Jeffrey Regier",
"Michael. I. Jordan",
"Nir Yosef"
] |
[] | 2018-05-22T00:00:00 |
http://papers.nips.cc/paper/7850-information-constraints-on-auto-encoding-variational-bayes
|
http://papers.nips.cc/paper/7850-information-constraints-on-auto-encoding-variational-bayes.pdf
|
information-constraints-on-auto-encoding-1
| null |
[] |
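The dHSIC penalty used in this paper generalizes the two-variable Hilbert-Schmidt Independence Criterion. A minimal sketch of the biased empirical HSIC estimator for the two-variable case (the `hsic` helper and its RBF bandwidth are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def rbf_gram(X, sigma=1.0):
    """RBF Gram matrix of a sample."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def hsic(X, Y, sigma=1.0):
    """Biased empirical HSIC: trace(K H L H) / (n - 1)^2,
    where H is the centering matrix. Zero population value
    characterizes independence for characteristic kernels."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    K, L = rbf_gram(X, sigma), rbf_gram(Y, sigma)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
Y = rng.normal(size=(200, 1))
dep, ind = hsic(X, X), hsic(X, Y)
print(dep, ind)  # the dependent pair scores markedly higher
```

In the paper's setting, a penalty of this form is added to the variational objective to push the latent representation toward independence from nuisance factors.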
https://paperswithcode.com/paper/learning-rules-first-classifiers
|
1803.03155
| null | null |
Learning Rules-First Classifiers
|
Complex classifiers may exhibit "embarassing" failures in cases where humans can easily provide a justified classification. Avoiding such failures is obviously of key importance. In this work, we focus on one such setting, where a label is perfectly predictable if the input contains certain features, or rules, and otherwise it is predictable by a linear classifier. We define a hypothesis class that captures this notion and determine its sample complexity. We also give evidence that efficient algorithms cannot achieve this sample complexity. We then derive a simple and efficient algorithm and show that its sample complexity is close to optimal, among efficient algorithms. Experiments on synthetic and sentiment analysis data demonstrate the efficacy of the method, both in terms of accuracy and interpretability.
| null |
https://arxiv.org/abs/1803.03155v4
|
https://arxiv.org/pdf/1803.03155v4.pdf
| null |
[
"Deborah Cohen",
"Amit Daniely",
"Amir Globerson",
"Gal Elidan"
] |
[
"General Classification",
"Sentiment Analysis"
] | 2018-03-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/adding-one-neuron-can-eliminate-all-bad-local
|
1805.08671
| null | null |
Adding One Neuron Can Eliminate All Bad Local Minima
|
One of the main difficulties in analyzing neural networks is the
non-convexity of the loss function which may have many bad local minima.
In this paper, we study the landscape of neural networks for binary
classification tasks. Under mild assumptions, we prove that after adding one
special neuron with a skip connection to the output, or one special neuron per
layer, every local minimum is a global minimum.
| null |
http://arxiv.org/abs/1805.08671v1
|
http://arxiv.org/pdf/1805.08671v1.pdf
|
NeurIPS 2018 12
|
[
"Shiyu Liang",
"Ruoyu Sun",
"Jason D. Lee",
"R. Srikant"
] |
[
"All",
"Binary Classification",
"General Classification"
] | 2018-05-22T00:00:00 |
http://papers.nips.cc/paper/7688-adding-one-neuron-can-eliminate-all-bad-local-minima
|
http://papers.nips.cc/paper/7688-adding-one-neuron-can-eliminate-all-bad-local-minima.pdf
|
adding-one-neuron-can-eliminate-all-bad-local-1
| null |
[] |
https://paperswithcode.com/paper/structured-bayesian-gaussian-process-latent-1
|
1805.08665
| null | null |
Structured Bayesian Gaussian process latent variable model
|
We introduce a Bayesian Gaussian process latent variable model that
explicitly captures spatial correlations in data using a parameterized spatial
kernel and leveraging structure-exploiting algebra on the model covariance
matrices for computational tractability. Inference is made tractable through a
collapsed variational bound with similar computational complexity to that of
the traditional Bayesian GP-LVM. Inference over partially-observed test cases
is achieved by optimizing a "partially-collapsed" bound. Modeling
high-dimensional time series systems is enabled through use of a dynamical GP
latent variable prior. Examples imputing missing data on images and
super-resolution imputation of missing video frames demonstrate the model.
| null |
http://arxiv.org/abs/1805.08665v1
|
http://arxiv.org/pdf/1805.08665v1.pdf
| null |
[
"Steven Atkinson",
"Nicholas Zabaras"
] |
[
"Imputation",
"model",
"Super-Resolution",
"Time Series",
"Time Series Analysis"
] | 2018-05-22T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Gaussian Processes** are non-parametric models for approximating functions. They rely upon a measure of similarity between points (the kernel function) to predict the value for an unseen point from training data. The models are fully probabilistic so uncertainty bounds are baked in with the model.\r\n\r\nImage Source: Gaussian Processes for Machine Learning, C. E. Rasmussen & C. K. I. Williams",
"full_name": "Gaussian Process",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "Gaussian Process",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/coco-cn-for-cross-lingual-image-tagging
|
1805.08661
| null | null |
COCO-CN for Cross-Lingual Image Tagging, Captioning and Retrieval
|
This paper contributes to cross-lingual image annotation and retrieval in
terms of data and baseline methods. We propose COCO-CN, a novel dataset
enriching MS-COCO with manually written Chinese sentences and tags. For more
effective annotation acquisition, we develop a recommendation-assisted
collective annotation system, automatically providing an annotator with several
tags and sentences deemed to be relevant with respect to the pictorial content.
Having 20,342 images annotated with 27,218 Chinese sentences and 70,993 tags,
COCO-CN is currently the largest Chinese-English dataset that provides a
unified and challenging platform for cross-lingual image tagging, captioning
and retrieval. We develop conceptually simple yet effective methods per task
for learning from cross-lingual resources. Extensive experiments on the three
tasks justify the viability of the proposed dataset and methods. Data and code
are publicly available at https://github.com/li-xirong/coco-cn
|
This paper contributes to cross-lingual image annotation and retrieval in terms of data and baseline methods.
|
http://arxiv.org/abs/1805.08661v2
|
http://arxiv.org/pdf/1805.08661v2.pdf
| null |
[
"Xirong Li",
"Chaoxi Xu",
"Xiaoxu Wang",
"Weiyu Lan",
"Zhengxiong Jia",
"Gang Yang",
"Jieping Xu"
] |
[
"Retrieval"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/multimodal-affective-analysis-using
|
1805.08660
| null | null |
Multimodal Affective Analysis Using Hierarchical Attention Strategy with Word-Level Alignment
|
Multimodal affective computing, learning to recognize and interpret human
affects and subjective information from multiple data sources, is still
challenging because: (i) it is hard to extract informative features to
represent human affects from heterogeneous inputs; (ii) current fusion
strategies only fuse different modalities at abstract level, ignoring
time-dependent interactions between modalities. Addressing such issues, we
introduce a hierarchical multimodal architecture with attention and word-level
fusion to classify utterance-level sentiment and emotion from text and audio
data. Our introduced model outperforms the state-of-the-art approaches on
published datasets and we demonstrated that our model is able to visualize and
interpret the synchronized attention over modalities.
| null |
http://arxiv.org/abs/1805.08660v1
|
http://arxiv.org/pdf/1805.08660v1.pdf
|
ACL 2018 7
|
[
"Yue Gu",
"Kangning Yang",
"Shiyu Fu",
"Shuhong Chen",
"Xinyu Li",
"Ivan Marsic"
] |
[] | 2018-05-22T00:00:00 |
https://aclanthology.org/P18-1207
|
https://aclanthology.org/P18-1207.pdf
|
multimodal-affective-analysis-using-1
| null |
[] |
https://paperswithcode.com/paper/ffdnet-toward-a-fast-and-flexible-solution
|
1710.04026
| null | null |
FFDNet: Toward a Fast and Flexible Solution for CNN based Image Denoising
|
Due to the fast inference and good performance, discriminative learning
methods have been widely studied in image denoising. However, these methods
mostly learn a specific model for each noise level, and require multiple models
for denoising images with different noise levels. They also lack flexibility to
deal with spatially variant noise, limiting their applications in practical
denoising. To address these issues, we present a fast and flexible denoising
convolutional neural network, namely FFDNet, with a tunable noise level map as
the input. The proposed FFDNet works on downsampled sub-images, achieving a
good trade-off between inference speed and denoising performance. In contrast
to the existing discriminative denoisers, FFDNet enjoys several desirable
properties, including (i) the ability to handle a wide range of noise levels
(i.e., [0, 75]) effectively with a single network, (ii) the ability to remove
spatially variant noise by specifying a non-uniform noise level map, and (iii)
faster speed than benchmark BM3D even on CPU without sacrificing denoising
performance. Extensive experiments on synthetic and real noisy images are
conducted to evaluate FFDNet in comparison with state-of-the-art denoisers. The
results show that FFDNet is effective and efficient, making it highly
attractive for practical denoising applications.
|
Due to the fast inference and good performance, discriminative learning methods have been widely studied in image denoising.
|
http://arxiv.org/abs/1710.04026v2
|
http://arxiv.org/pdf/1710.04026v2.pdf
| null |
[
"Kai Zhang",
"WangMeng Zuo",
"Lei Zhang"
] |
[
"Color Image Denoising",
"CPU",
"Denoising",
"Image Denoising"
] | 2017-10-11T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
}
] |
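The tunable noise level map described in the FFDNet abstract can be sketched as an extra input channel stacked onto the image (an illustrative sketch of the input construction only; the paper's network also operates on downsampled sub-images, which is omitted here):

```python
import numpy as np

def ffdnet_input(image, noise_level):
    """Stack a uniform noise level map as an additional channel.
    A spatially varying map could be passed instead to handle
    non-uniform noise, as the abstract describes."""
    h, w, _ = image.shape
    level_map = np.full((h, w, 1), noise_level, dtype=image.dtype)
    return np.concatenate([image, level_map], axis=-1)

x = np.zeros((8, 8, 3), dtype=np.float32)
out = ffdnet_input(x, 25 / 255.0)
print(out.shape)  # (8, 8, 4)
```

Conditioning on the map is what lets a single network cover the whole [0, 75] noise range instead of training one model per noise level.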
https://paperswithcode.com/paper/robust-conditional-generative-adversarial
|
1805.08657
| null |
Byg0DsCqYQ
|
Robust Conditional Generative Adversarial Networks
|
Conditional generative adversarial networks (cGAN) have led to large
improvements in the task of conditional image generation, which lies at the
heart of computer vision. The major focus so far has been on performance
improvement, while there has been little effort in making cGAN more robust to
noise. The regression (of the generator) might lead to arbitrarily large errors
in the output, which makes cGAN unreliable for real-world applications. In this
work, we introduce a novel conditional GAN model, called RoCGAN, which
leverages structure in the target space of the model to address the issue. Our
model augments the generator with an unsupervised pathway, which promotes the
outputs of the generator to span the target manifold even in the presence of
intense noise. We prove that RoCGAN share similar theoretical properties as GAN
and experimentally verify that our model outperforms existing state-of-the-art
cGAN architectures by a large margin in a variety of domains including images
from natural scenes and faces.
|
Conditional generative adversarial networks (cGAN) have led to large improvements in the task of conditional image generation, which lies at the heart of computer vision.
|
http://arxiv.org/abs/1805.08657v2
|
http://arxiv.org/pdf/1805.08657v2.pdf
|
ICLR 2019 5
|
[
"Grigorios G. Chrysos",
"Jean Kossaifi",
"Stefanos Zafeiriou"
] |
[
"Conditional Image Generation",
"Image Generation"
] | 2018-05-22T00:00:00 |
https://openreview.net/forum?id=Byg0DsCqYQ
|
https://openreview.net/pdf?id=Byg0DsCqYQ
|
robust-conditional-generative-adversarial-1
| null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "In today’s digital age, Dogecoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're trying to recover a lost Dogecoin wallet, knowing where to get help is essential. That’s why the Dogecoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Dogecoin Customer Support Number +1-833-534-1729\r\nDogecoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Dogecoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Dogecoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Dogecoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Dogecoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Dogecoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. 
Knowing how to recover a lost Dogecoin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Dogecoin Deposit Not Received\r\nIf someone has sent you Dogecoin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Dogecoin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Dogecoin Transaction Stuck or Pending\r\nSometimes your Dogecoin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Dogecoin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Dogecoin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Dogecoin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Dogecoin tech.\r\n\r\n24/7 Availability: Dogecoin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Dogecoin Support and Wallet Issues\r\nQ1: Can Dogecoin support help me recover stolen BTC?\r\nA: While Dogecoin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Dogecoin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Dogecoin’s official number (Dogecoin is decentralized), it connects you to trained professionals experienced in resolving all major Dogecoin issues.\r\n\r\nFinal Thoughts\r\nDogecoin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. 
Whether it's a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Dogecoin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Dogecoin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Dogecoin Customer Service Number +1-833-534-1729",
"source_title": "Generative Adversarial Networks",
"source_url": "https://arxiv.org/abs/1406.2661v1"
}
] |
https://paperswithcode.com/paper/lmkl-net-a-fast-localized-multiple-kernel
|
1805.08656
| null | null |
LMKL-Net: A Fast Localized Multiple Kernel Learning Solver via Deep Neural Networks
|
In this paper we propose solving localized multiple kernel learning (LMKL)
using LMKL-Net, a feedforward deep neural network. In contrast to previous
works, as a learning principle we propose {\em parameterizing} both the gating
function for learning kernel combination weights and the multiclass classifier
in LMKL using an attentional network (AN) and a multilayer perceptron (MLP),
respectively. In this way we can learn the (nonlinear) decision function in
LMKL (approximately) by sequential applications of AN and MLP. Empirically on
benchmark datasets we demonstrate that overall LMKL-Net can not only outperform
the state-of-the-art MKL solvers in terms of accuracy, but also be trained
about {\em two orders of magnitude} faster with much smaller memory footprint
for large-scale learning.
| null |
http://arxiv.org/abs/1805.08656v1
|
http://arxiv.org/pdf/1805.08656v1.pdf
| null |
[
"Ziming Zhang"
] |
[] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/universal-discriminative-quantum-neural
|
1805.08654
| null |
r1lgm3C5t7
|
Universal discriminative quantum neural networks
|
Quantum mechanics fundamentally forbids deterministic discrimination of
quantum states and processes. However, the ability to optimally distinguish
various classes of quantum data is an important primitive in quantum
information science. In this work, we train near-term quantum circuits to
classify data represented by non-orthogonal quantum probability distributions
using the Adam stochastic optimization algorithm. This is achieved by iterative
interactions of a classical device with a quantum processor to discover the
parameters of an unknown non-unitary quantum circuit. This circuit learns to
simulate the unknown structure of a generalized quantum measurement, or
Positive-Operator-Value-Measure (POVM), that is required to optimally
distinguish possible distributions of quantum inputs. Notably we use universal
circuit topologies, with a theoretically motivated circuit design, which
guarantees that our circuits can in principle learn to perform arbitrary
input-output mappings. Our numerical simulations show that shallow quantum
circuits could be trained to discriminate among various pure and mixed quantum
states exhibiting a trade-off between minimizing erroneous and inconclusive
outcomes with comparable performance to theoretically optimal POVMs. We train
the circuit on different classes of quantum data and evaluate the
generalization error on unseen mixed quantum states. This generalization power
hence distinguishes our work from standard circuit optimization and provides an
example of quantum machine learning for a task that has inherently no classical
analogue.
| null |
http://arxiv.org/abs/1805.08654v1
|
http://arxiv.org/pdf/1805.08654v1.pdf
|
ICLR 2019 5
|
[
"Hongxiang Chen",
"Leonard Wossnig",
"Simone Severini",
"Hartmut Neven",
"Masoud Mohseni"
] |
[
"Quantum Machine Learning",
"Stochastic Optimization"
] | 2018-05-22T00:00:00 |
https://openreview.net/forum?id=r1lgm3C5t7
|
https://openreview.net/pdf?id=r1lgm3C5t7
|
universal-discriminative-quantum-neural-1
| null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/b7bda236d18815052378c88081f64935427d7716/torch/optim/adam.py#L6",
"description": "**Adam** is an adaptive learning rate optimization algorithm that utilises both momentum and scaling, combining the benefits of [RMSProp](https://paperswithcode.com/method/rmsprop) and [SGD w/th Momentum](https://paperswithcode.com/method/sgd-with-momentum). The optimizer is designed to be appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. \r\n\r\nThe weight updates are performed as:\r\n\r\n$$ w_{t} = w_{t-1} - \\eta\\frac{\\hat{m}\\_{t}}{\\sqrt{\\hat{v}\\_{t}} + \\epsilon} $$\r\n\r\nwith\r\n\r\n$$ \\hat{m}\\_{t} = \\frac{m_{t}}{1-\\beta^{t}_{1}} $$\r\n\r\n$$ \\hat{v}\\_{t} = \\frac{v_{t}}{1-\\beta^{t}_{2}} $$\r\n\r\n$$ m_{t} = \\beta_{1}m_{t-1} + (1-\\beta_{1})g_{t} $$\r\n\r\n$$ v_{t} = \\beta_{2}v_{t-1} + (1-\\beta_{2})g_{t}^{2} $$\r\n\r\n\r\n$ \\eta $ is the step size/learning rate, around 1e-3 in the original paper. $ \\epsilon $ is a small number, typically 1e-8 or 1e-10, to prevent dividing by zero. $ \\beta_{1} $ and $ \\beta_{2} $ are forgetting parameters, with typical values 0.9 and 0.999, respectively.",
"full_name": "Adam",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "Adam",
"source_title": "Adam: A Method for Stochastic Optimization",
"source_url": "http://arxiv.org/abs/1412.6980v9"
}
] |
https://paperswithcode.com/paper/visual-explanation-by-interpretation
|
1712.06302
| null |
H1ziPjC5Fm
|
Visual Explanation by Interpretation: Improving Visual Feedback Capabilities of Deep Neural Networks
|
Interpretation and explanation of deep models is critical towards wide
adoption of systems that rely on them. In this paper, we propose a novel scheme
for both interpretation as well as explanation in which, given a pretrained
model, we automatically identify internal features relevant for the set of
classes considered by the model, without relying on additional annotations. We
interpret the model through average visualizations of this reduced set of
features. Then, at test time, we explain the network prediction by accompanying
the predicted class label with supporting visualizations derived from the
identified features. In addition, we propose a method to address the artifacts
introduced by strided operations in deconvNet-based visualizations. Moreover,
we introduce an8Flower, a dataset specifically designed for objective
quantitative evaluation of methods for visual explanation. Experiments on the
MNIST, ILSVRC12, Fashion144k and an8Flower datasets show that our method produces
detailed explanations with good coverage of relevant features of the classes of
interest.
| null |
http://arxiv.org/abs/1712.06302v3
|
http://arxiv.org/pdf/1712.06302v3.pdf
|
ICLR 2019 5
|
[
"Jose Oramas",
"Kaili Wang",
"Tinne Tuytelaars"
] |
[] | 2017-12-18T00:00:00 |
https://openreview.net/forum?id=H1ziPjC5Fm
|
https://openreview.net/pdf?id=H1ziPjC5Fm
|
visual-explanation-by-interpretation-1
| null |
[] |
https://paperswithcode.com/paper/neural-networks-as-interacting-particle
|
1805.00915
| null | null |
Trainability and Accuracy of Neural Networks: An Interacting Particle System Approach
|
Neural networks, a central tool in machine learning, have demonstrated remarkable, high fidelity performance on image recognition and classification tasks. These successes evince an ability to accurately represent high dimensional functions, but rigorous results about the approximation error of neural networks after training are few. Here we establish conditions for global convergence of the standard optimization algorithm used in machine learning applications, stochastic gradient descent (SGD), and quantify the scaling of its error with the size of the network. This is done by reinterpreting SGD as the evolution of a particle system with interactions governed by a potential related to the objective or "loss" function used to train the network. We show that, when the number $n$ of units is large, the empirical distribution of the particles descends on a convex landscape towards the global minimum at a rate independent of $n$, with a resulting approximation error that universally scales as $O(n^{-1})$. These properties are established in the form of a Law of Large Numbers and a Central Limit Theorem for the empirical distribution. Our analysis also quantifies the scale and nature of the noise introduced by SGD and provides guidelines for the step size and batch size to use when training a neural network. We illustrate our findings on examples in which we train neural networks to learn the energy function of the continuous 3-spin model on the sphere. The approximation error scales as our analysis predicts in as high a dimension as $d=25$.
| null |
https://arxiv.org/abs/1805.00915v3
|
https://arxiv.org/pdf/1805.00915v3.pdf
| null |
[
"Grant M. Rotskoff",
"Eric Vanden-Eijnden"
] |
[
"BIG-bench Machine Learning"
] | 2018-05-02T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/4e0ac120e9a8b096069c2f892488d630a5c8f358/torch/optim/sgd.py#L97-L112",
"description": "**Stochastic Gradient Descent** is an iterative optimization technique that uses minibatches of data to form an expectation of the gradient, rather than the full gradient using all available data. That is for weights $w$ and a loss function $L$ we have:\r\n\r\n$$ w\\_{t+1} = w\\_{t} - \\eta\\hat{\\nabla}\\_{w}{L(w\\_{t})} $$\r\n\r\nWhere $\\eta$ is a learning rate. SGD reduces redundancy compared to batch gradient descent - which recomputes gradients for similar examples before each parameter update - so it is usually much faster.\r\n\r\n(Image Source: [here](http://rasbt.github.io/mlxtend/user_guide/general_concepts/gradient-optimization/))",
"full_name": "Stochastic Gradient Descent",
"introduced_year": 1951,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "SGD",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/nonlinear-ica-using-auxiliary-variables-and
|
1805.08651
| null | null |
Nonlinear ICA Using Auxiliary Variables and Generalized Contrastive Learning
|
Nonlinear ICA is a fundamental problem for unsupervised representation
learning, emphasizing the capacity to recover the underlying latent variables
generating the data (i.e., identifiability). Recently, the very first
identifiability proofs for nonlinear ICA have been proposed, leveraging the
temporal structure of the independent components. Here, we propose a general
framework for nonlinear ICA, which, as a special case, can make use of temporal
structure. It is based on augmenting the data by an auxiliary variable, such as
the time index, the history of the time series, or any other available
information. We propose to learn nonlinear ICA by discriminating between true
augmented data, or data in which the auxiliary variable has been randomized.
This enables the framework to be implemented algorithmically through logistic
regression, possibly in a neural network. We provide a comprehensive proof of
the identifiability of the model as well as the consistency of our estimation
method. The approach not only provides a general theoretical framework
combining and generalizing previously proposed nonlinear ICA models and
algorithms, but also brings practical advantages.
|
Here, we propose a general framework for nonlinear ICA, which, as a special case, can make use of temporal structure.
|
http://arxiv.org/abs/1805.08651v3
|
http://arxiv.org/pdf/1805.08651v3.pdf
| null |
[
"Aapo Hyvarinen",
"Hiroaki Sasaki",
"Richard E. Turner"
] |
[
"Contrastive Learning",
"Representation Learning",
"Time Series",
"Time Series Analysis"
] | 2018-05-22T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "_**Independent component analysis** (ICA) is a statistical and computational technique for revealing hidden factors that underlie sets of random variables, measurements, or signals._\r\n\r\n_ICA defines a generative model for the observed multivariate data, which is typically given as a large database of samples. In the model, the data variables are assumed to be linear mixtures of some unknown latent variables, and the mixing system is also unknown. The latent variables are assumed nongaussian and mutually independent, and they are called the independent components of the observed data. These independent components, also called sources or factors, can be found by ICA._\r\n\r\n_ICA is superficially related to principal component analysis and factor analysis. ICA is a much more powerful technique, however, capable of finding the underlying factors or sources when these classic methods fail completely._\r\n\r\n\r\nExtracted from (https://www.cs.helsinki.fi/u/ahyvarin/whatisica.shtml)\r\n\r\n**Source papers**:\r\n\r\n[Blind separation of sources, part I: An adaptive algorithm based on neuromimetic architecture](https://doi.org/10.1016/0165-1684(91)90079-X)\r\n\r\n[Independent component analysis, A new concept?](https://doi.org/10.1016/0165-1684(94)90029-9)\r\n\r\n[Independent component analysis: algorithms and applications](https://doi.org/10.1016/S0893-6080(00)00026-5)",
"full_name": "Independent Component Analysis",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Dimensionality Reduction** methods transform data from a high-dimensional space into a low-dimensional space so that the low-dimensional space retains the most important properties of the original data. Below you can find a continuously updating list of dimensionality reduction methods.",
"name": "Dimensionality Reduction",
"parent": null
},
"name": "ICA",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/constructing-compact-brain-connectomes-for
|
1805.08649
| null | null |
Constructing Compact Brain Connectomes for Individual Fingerprinting
|
Recent neuroimaging studies have shown that functional connectomes are unique to individuals, i.e., two distinct fMRIs taken over different sessions of the same subject are more similar in terms of their connectomes than those from two different subjects. In this study, we present significant new results that identify, for the first time, specific parts of resting-state and task-specific connectomes that code the unique signatures. We show that a very small part of the connectome codes the signatures. A network of these features is shown to achieve excellent training and test accuracy in matching imaging datasets. We show that these features are statistically significant, robust to perturbations, invariant across populations, and are localized to a small number of structural regions of the brain. Furthermore, we show that for task-specific connectomes, the regions identified by our method are consistent with their known functional characterization. We present a new matrix sampling technique to derive computationally efficient and accurate methods for identifying the discriminating sub-connectome and support all of our claims using state-of-the-art statistical tests and computational techniques.
| null |
https://arxiv.org/abs/1805.08649v2
|
https://arxiv.org/pdf/1805.08649v2.pdf
| null |
[
"Vikram Ravindra",
"Petros Drineas",
"Ananth Grama"
] |
[] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/multi-statistic-approximate-bayesian
|
1805.08647
| null | null |
Multi-Statistic Approximate Bayesian Computation with Multi-Armed Bandits
|
Approximate Bayesian computation is an established and popular method for
likelihood-free inference with applications in many disciplines. The
effectiveness of the method depends critically on the availability of well
performing summary statistics. Summary statistic selection relies heavily on
domain knowledge and carefully engineered features, and can be a laborious,
time-consuming process. Since the method is sensitive to data dimensionality, the
process of selecting summary statistics must balance the need to include
informative statistics and the dimensionality of the feature vector. This paper
proposes to treat the problem of dynamically selecting an appropriate summary
statistic from a given pool of candidate summary statistics as a multi-armed
bandit problem. This allows approximate Bayesian computation rejection sampling
to dynamically focus on a distribution over well performing summary statistics
as opposed to a fixed set of statistics. The proposed method is unique in that
it does not require any pre-processing and is scalable to a large number of
candidate statistics. This enables efficient use of a large library of possible
time series summary statistics without prior feature engineering. The proposed
approach is compared to state-of-the-art methods for summary statistics
selection using a challenging test problem from the systems biology literature.
| null |
http://arxiv.org/abs/1805.08647v1
|
http://arxiv.org/pdf/1805.08647v1.pdf
| null |
[
"Prashant Singh",
"Andreas Hellander"
] |
[
"Feature Engineering",
"Multi-Armed Bandits",
"Time Series",
"Time Series Analysis"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/towards-unsupervised-automatic-speech
|
1803.10952
| null | null |
Towards Unsupervised Automatic Speech Recognition Trained by Unaligned Speech and Text only
|
Automatic speech recognition (ASR) has been widely researched with supervised
approaches, while many low-resourced languages lack audio-text aligned data,
and supervised methods cannot be applied to them.
In this work, we propose a framework to achieve unsupervised ASR on a read
English speech dataset, where audio and text are unaligned. In the first stage,
each word-level audio segment in the utterances is represented by a vector
representation extracted by a sequence-to-sequence autoencoder, in which
phonetic information and speaker information are disentangled.
Secondly, semantic embeddings of audio segments are trained from the vector
representations using a skip-gram model. Last but not least, an
unsupervised method is utilized to transform semantic embeddings of audio
segments to text embedding space, and finally the transformed embeddings are
mapped to words.
With the above framework, we are towards unsupervised ASR trained by
unaligned text and speech only.
| null |
http://arxiv.org/abs/1803.10952v3
|
http://arxiv.org/pdf/1803.10952v3.pdf
| null |
[
"Yi-Chen Chen",
"Chia-Hao Shen",
"Sung-Feng Huang",
"Hung-Yi Lee"
] |
[
"Automatic Speech Recognition",
"Automatic Speech Recognition (ASR)",
"speech-recognition",
"Speech Recognition"
] | 2018-03-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/cost-aware-cascading-bandits
|
1805.08638
| null | null |
Cost-aware Cascading Bandits
|
In this paper, we propose a cost-aware cascading bandits model, a new variant
of multi-armed bandits with cascading feedback, by considering the random
cost of pulling arms. In each step, the learning agent chooses an ordered list
of items and examines them sequentially, until a certain stopping condition is
satisfied. Our objective is then to maximize the expected net reward in each
step, i.e., the reward obtained in each step minus the total cost incurred in
examining the items, by deciding the ordered list of items, as well as when
to stop examination. We study both the offline and online settings, depending
on whether the state and cost statistics of the items are known beforehand. For
the offline setting, we show that the Unit Cost Ranking with Threshold 1
(UCR-T1) policy is optimal. For the online setting, we propose a Cost-aware
Cascading Upper Confidence Bound (CC-UCB) algorithm, and show that the
cumulative regret scales in O(log T). We also provide a lower bound for all
{\alpha}-consistent policies, which scales in {\Omega}(log T) and matches our
upper bound. The performance of the CC-UCB algorithm is evaluated with both
synthetic and real-world data.
| null |
http://arxiv.org/abs/1805.08638v1
|
http://arxiv.org/pdf/1805.08638v1.pdf
| null |
[
"Ruida Zhou",
"Chao Gan",
"Jing Yan",
"Cong Shen"
] |
[] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/crossmodal-attentive-skill-learner
|
1711.10314
| null | null |
Crossmodal Attentive Skill Learner
|
This paper presents the Crossmodal Attentive Skill Learner (CASL), integrated
with the recently-introduced Asynchronous Advantage Option-Critic (A2OC)
architecture [Harb et al., 2017] to enable hierarchical reinforcement learning
across multiple sensory inputs. We provide concrete examples where the approach
not only improves performance in a single task, but accelerates transfer to new
tasks. We demonstrate the attention mechanism anticipates and identifies useful
latent features, while filtering irrelevant sensor modalities during execution.
We modify the Arcade Learning Environment [Bellemare et al., 2013] to support
audio queries, and conduct evaluations of crossmodal learning in the Atari 2600
game Amidar. Finally, building on the recent work of Babaeizadeh et al. [2017],
we open-source a fast hybrid CPU-GPU implementation of CASL.
|
This paper presents the Crossmodal Attentive Skill Learner (CASL), integrated with the recently-introduced Asynchronous Advantage Option-Critic (A2OC) architecture [Harb et al., 2017] to enable hierarchical reinforcement learning across multiple sensory inputs.
|
http://arxiv.org/abs/1711.10314v3
|
http://arxiv.org/pdf/1711.10314v3.pdf
| null |
[
"Shayegan Omidshafiei",
"Dong-Ki Kim",
"Jason Pazis",
"Jonathan P. How"
] |
[
"Atari Games",
"CPU",
"GPU",
"Hierarchical Reinforcement Learning",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2017-11-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/deep-learning-inference-on-embedded-devices
|
1805.08624
| null | null |
Deep Learning Inference on Embedded Devices: Fixed-Point vs Posit
|
Performing the inference step of deep learning in resource constrained
environments, such as embedded devices, is challenging. Success requires
optimization at both software and hardware levels. Low precision arithmetic and
specifically low precision fixed-point number systems have become the standard
for performing deep learning inference. However, representing non-uniform data
and distributed parameters (e.g. weights) by using uniformly distributed
fixed-point values is still a major drawback when using this number system.
Recently, the posit number system was proposed, which represents numbers in a
non-uniform manner. Therefore, in this paper we are motivated to explore using
the posit number system to represent the weights of Deep Convolutional Neural
Networks. However, we do not apply any quantization techniques and hence the
network weights do not require re-training. The results of this exploration
show that using the posit number system outperformed the fixed point number
system in terms of accuracy and memory utilization.
| null |
http://arxiv.org/abs/1805.08624v1
|
http://arxiv.org/pdf/1805.08624v1.pdf
| null |
[
"Seyed H. F. Langroudi",
"Tej Pandit",
"Dhireesha Kudithipudi"
] |
[
"Deep Learning",
"Quantization"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/optimization-fast-and-slow-optimally
|
1805.08610
| null | null |
Optimization, fast and slow: optimally switching between local and Bayesian optimization
|
We develop the first Bayesian Optimization algorithm, BLOSSOM, which selects
between multiple alternative acquisition functions and traditional local
optimization at each step. This is combined with a novel stopping condition
based on expected regret. This pairing allows us to obtain the best
characteristics of both local and Bayesian optimization, making efficient use
of function evaluations while yielding superior convergence to the global
minimum on a selection of optimization problems, and also halting optimization
once a principled and intuitive stopping condition has been fulfilled.
|
We develop the first Bayesian Optimization algorithm, BLOSSOM, which selects between multiple alternative acquisition functions and traditional local optimization at each step.
|
http://arxiv.org/abs/1805.08610v1
|
http://arxiv.org/pdf/1805.08610v1.pdf
|
ICML 2018 7
|
[
"Mark McLeod",
"Michael A. Osborne",
"Stephen J. Roberts"
] |
[
"Bayesian Optimization"
] | 2018-05-22T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2224
|
http://proceedings.mlr.press/v80/mcleod18a/mcleod18a.pdf
|
optimization-fast-and-slow-optimally-1
| null |
[] |
https://paperswithcode.com/paper/learning-to-teach-in-cooperative-multiagent
|
1805.07830
| null | null |
Learning to Teach in Cooperative Multiagent Reinforcement Learning
|
Collective human knowledge has clearly benefited from the fact that
innovations by individuals are taught to others through communication. Similar
to human social groups, agents in distributed learning systems would likely
benefit from communication to share knowledge and teach skills. The problem of
teaching to improve agent learning has been investigated by prior works, but
these approaches make assumptions that prevent application of teaching to
general multiagent problems, or require domain expertise for problems they can
apply to. This learning to teach problem has inherent complexities related to
measuring long-term impacts of teaching that compound the standard multiagent
coordination challenges. In contrast to existing works, this paper presents the
first general framework and algorithm for intelligent agents to learn to teach
in a multiagent environment. Our algorithm, Learning to Coordinate and Teach
Reinforcement (LeCTR), addresses peer-to-peer teaching in cooperative
multiagent reinforcement learning. Each agent in our approach learns both when
and what to advise, then uses the received advice to improve local learning.
Importantly, these roles are not fixed; these agents learn to assume the role
of student and/or teacher at the appropriate moments, requesting and providing
advice in order to improve teamwide performance and learning. Empirical
comparisons against state-of-the-art teaching methods show that our teaching
agents not only learn significantly faster, but also learn to coordinate in
tasks where existing methods fail.
| null |
http://arxiv.org/abs/1805.07830v4
|
http://arxiv.org/pdf/1805.07830v4.pdf
| null |
[
"Shayegan Omidshafiei",
"Dong-Ki Kim",
"Miao Liu",
"Gerald Tesauro",
"Matthew Riemer",
"Christopher Amato",
"Murray Campbell",
"Jonathan P. How"
] |
[
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-05-20T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/confounding-robust-policy-improvement
|
1805.08593
| null | null |
Confounding-Robust Policy Improvement
|
We study the problem of learning personalized decision policies from observational data while accounting for possible unobserved confounding. Previous approaches, which assume unconfoundedness, i.e., that no unobserved confounders affect both the treatment assignment as well as outcome, can lead to policies that introduce harm rather than benefit when some unobserved confounding is present, as is generally the case with observational data. Instead, since policy value and regret may not be point-identifiable, we study a method that minimizes the worst-case estimated regret of a candidate policy against a baseline policy over an uncertainty set for propensity weights that controls the extent of unobserved confounding. We prove generalization guarantees that ensure our policy will be safe when applied in practice and will in fact obtain the best-possible uniform control on the range of all possible population regrets that agree with the possible extent of confounding. We develop efficient algorithmic solutions to compute this confounding-robust policy. Finally, we assess and compare our methods on synthetic and semi-synthetic data. In particular, we consider a case study on personalizing hormone replacement therapy based on observational data, where we validate our results on a randomized experiment. We demonstrate that hidden confounding can hinder existing policy learning approaches and lead to unwarranted harm, while our robust approach guarantees safety and focuses on well-evidenced improvement, a necessity for making personalized treatment policies learned from observational data reliable in practice.
| null |
https://arxiv.org/abs/1805.08593v3
|
https://arxiv.org/pdf/1805.08593v3.pdf
|
NeurIPS 2018 12
|
[
"Nathan Kallus",
"Angela Zhou"
] |
[
"Causal Inference"
] | 2018-05-22T00:00:00 |
http://papers.nips.cc/paper/8139-confounding-robust-policy-improvement
|
http://papers.nips.cc/paper/8139-confounding-robust-policy-improvement.pdf
|
confounding-robust-policy-improvement-1
| null |
[] |
https://paperswithcode.com/paper/computable-variants-of-aixi-which-are-more
|
1805.08592
| null | null |
Computable Variants of AIXI which are More Powerful than AIXItl
|
This paper presents Unlimited Computable AI, or UCAI, a family of computable
variants of AIXI. UCAI is more powerful than AIXItl, the conventional family
of computable variants of AIXI, in the following ways: 1) UCAI supports models
of terminating computation, including typed lambda calculus, while AIXItl only
supports Turing machines with timeout t, which can be simulated by typed lambda
calculus for any t; 2) unlike UCAI, AIXItl limits the program length to l.
| null |
http://arxiv.org/abs/1805.08592v3
|
http://arxiv.org/pdf/1805.08592v3.pdf
| null |
[
"Susumu Katayama"
] |
[] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/multi-level-wavelet-cnn-for-image-restoration
|
1805.07071
| null | null |
Multi-level Wavelet-CNN for Image Restoration
|
The tradeoff between receptive field size and efficiency is a crucial issue
in low level vision. Plain convolutional networks (CNNs) generally enlarge the
receptive field at the expense of computational cost. Recently, dilated
filtering has been adopted to address this issue. But it suffers from the
gridding effect, and the resulting receptive field is only a sparse sampling of
the input image with checkerboard patterns. In this paper, we present a novel multi-level
wavelet CNN (MWCNN) model for better tradeoff between receptive field size and
computational efficiency. With the modified U-Net architecture, wavelet
transform is introduced to reduce the size of feature maps in the contracting
subnetwork. Furthermore, another convolutional layer is used to
decrease the channels of feature maps. In the expanding subnetwork, inverse
wavelet transform is then deployed to reconstruct the high resolution feature
maps. Our MWCNN can also be explained as the generalization of dilated
filtering and subsampling, and can be applied to many image restoration tasks.
The experimental results clearly show the effectiveness of MWCNN for image
denoising, single image super-resolution, and JPEG image artifacts removal.
|
With the modified U-Net architecture, wavelet transform is introduced to reduce the size of feature maps in the contracting subnetwork.
|
http://arxiv.org/abs/1805.07071v2
|
http://arxiv.org/pdf/1805.07071v2.pdf
| null |
[
"Pengju Liu",
"Hongzhi Zhang",
"Kai Zhang",
"Liang Lin",
"WangMeng Zuo"
] |
[
"Computational Efficiency",
"Denoising",
"Image Denoising",
"Image Restoration",
"Image Super-Resolution",
"JPEG Artifact Correction",
"Super-Resolution"
] | 2018-05-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/densenet.py#L113",
"description": "A **Concatenated Skip Connection** is a type of skip connection that seeks to reuse features by concatenating them to new layers, allowing more information to be retained from previous layers of the network. This contrasts with say, residual connections, where element-wise summation is used instead to incorporate information from previous layers. This type of skip connection is prominently used in DenseNets (and also Inception networks), which the Figure to the right illustrates.",
"full_name": "Concatenated Skip Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Concatenated Skip Connection",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
      "description": "**Rectified Linear Units**, or **ReLUs**, are a type of activation function that are linear in the positive dimension, but zero in the negative dimension. The kink in the function is the source of the non-linearity. Linearity in the positive dimension has the attractive property that it prevents non-saturation of gradients (contrast with sigmoid activations), although for half of the real line its gradient is zero.\r\n\r\n$$ f\\left(x\\right) = \\max\\left(0, x\\right) $$",
      "full_name": "Rectified Linear Units",
      "introduced_year": 2000,
      "main_collection": {
        "area": "General",
        "description": "**Activation functions** are functions that we apply in neural networks after (typically) applying an affine transformation combining weights and input features. They are typically non-linear functions. Below you can find a continuously updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/milesial/Pytorch-UNet/blob/67bf11b4db4c5f2891bd7e8e7f58bcde8ee2d2db/unet/unet_model.py#L8",
"description": "**U-Net** is an architecture for semantic segmentation. It consists of a contracting path and an expansive path. The contracting path follows the typical architecture of a convolutional network. It consists of the repeated application of two 3x3 convolutions (unpadded convolutions), each followed by a rectified linear unit ([ReLU](https://paperswithcode.com/method/relu)) and a 2x2 [max pooling](https://paperswithcode.com/method/max-pooling) operation with stride 2 for downsampling. At each downsampling step we double the number of feature channels. Every step in the expansive path consists of an upsampling of the feature map followed by a 2x2 [convolution](https://paperswithcode.com/method/convolution) (“up-convolution”) that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU. The cropping is necessary due to the loss of border pixels in every convolution. At the final layer a [1x1 convolution](https://paperswithcode.com/method/1x1-convolution) is used to map each 64-component feature vector to the desired number of classes. In total the network has 23 convolutional layers.\r\n\r\n[Original MATLAB Code](https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/u-net-release-2015-10-02.tar.gz)",
"full_name": "U-Net",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Semantic Segmentation Models** are a class of methods that address the task of semantically segmenting an image into different object classes. Below you can find a continuously updating list of semantic segmentation models. ",
"name": "Semantic Segmentation Models",
"parent": null
},
"name": "U-Net",
"source_title": "U-Net: Convolutional Networks for Biomedical Image Segmentation",
"source_url": "http://arxiv.org/abs/1505.04597v1"
}
] |
https://paperswithcode.com/paper/why-should-i-trust-interactive-learners
|
1805.08578
| null | null |
"Why Should I Trust Interactive Learners?" Explaining Interactive Queries of Classifiers to Users
|
Although interactive learning puts the user into the loop, the learner
remains mostly a black box for the user. Understanding the reasons behind
queries and predictions is important when assessing how the learner works and,
in turn, trust. Consequently, we propose the novel framework of explanatory
interactive learning: in each step, the learner explains its interactive query
to the user, and she may query any active classifier for visual
explanations of the corresponding predictions. We demonstrate that this can
boost the predictive and explanatory powers of, and the trust in, the learned
model, using text (e.g. SVMs) and image classification (e.g. neural networks)
experiments as well as a user study.
| null |
http://arxiv.org/abs/1805.08578v1
|
http://arxiv.org/pdf/1805.08578v1.pdf
| null |
[
"Stefano Teso",
"Kristian Kersting"
] |
[
"General Classification",
"image-classification",
"Image Classification"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/wsnet-compact-and-efficient-networks-through
|
1711.10067
| null | null |
WSNet: Compact and Efficient Networks Through Weight Sampling
|
We present a new approach and a novel architecture, termed WSNet, for
learning compact and efficient deep neural networks. Existing approaches
conventionally learn full model parameters independently and then compress them
via ad hoc processing such as model pruning or filter factorization.
Alternatively, WSNet proposes learning model parameters by sampling from a
compact set of learnable parameters, which naturally enforces {parameter
sharing} throughout the learning process. We demonstrate that such a novel
weight sampling approach (and induced WSNet) promotes both weights and
computation sharing favorably. By employing this method, we can more
efficiently learn much smaller networks with competitive performance compared
to baseline networks with equal numbers of convolution filters. Specifically,
we consider learning compact and efficient 1D convolutional neural networks for
audio classification. Extensive experiments on multiple audio classification
datasets verify the effectiveness of WSNet. Combined with weight quantization,
the resulting models are up to 180 times smaller and theoretically up to 16
times faster than the well-established baselines, without noticeable
performance drop.
| null |
http://arxiv.org/abs/1711.10067v3
|
http://arxiv.org/pdf/1711.10067v3.pdf
|
ICML 2018 7
|
[
"Xiaojie Jin",
"Yingzhen Yang",
"Ning Xu",
"Jianchao Yang",
"Nebojsa Jojic",
"Jiashi Feng",
"Shuicheng Yan"
] |
[
"Audio Classification",
"General Classification",
"Quantization"
] | 2017-11-28T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2402
|
http://proceedings.mlr.press/v80/jin18d/jin18d.pdf
|
wsnet-compact-and-efficient-networks-through-1
| null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Pruning",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Model Compression",
"parent": null
},
"name": "Pruning",
"source_title": "Pruning Filters for Efficient ConvNets",
"source_url": "http://arxiv.org/abs/1608.08710v3"
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/on-coresets-for-logistic-regression
|
1805.08571
| null | null |
On Coresets for Logistic Regression
|
Coresets are one of the central methods to facilitate the analysis of large data sets. We continue a recent line of research applying the theory of coresets to logistic regression. First, we show a negative result, namely, that no strongly sublinear sized coresets exist for logistic regression. To deal with intractable worst-case instances we introduce a complexity measure $\mu(X)$, which quantifies the hardness of compressing a data set for logistic regression. $\mu(X)$ has an intuitive statistical interpretation that may be of independent interest. For data sets with bounded $\mu(X)$-complexity, we show that a novel sensitivity sampling scheme produces the first provably sublinear $(1\pm\varepsilon)$-coreset. We illustrate the performance of our method by comparing to uniform sampling as well as to state of the art methods in the area. The experiments are conducted on real world benchmark data for logistic regression.
| null |
https://arxiv.org/abs/1805.08571v3
|
https://arxiv.org/pdf/1805.08571v3.pdf
|
NeurIPS 2018 12
|
[
"Alexander Munteanu",
"Chris Schwiegelshohn",
"Christian Sohler",
"David P. Woodruff"
] |
[
"regression"
] | 2018-05-22T00:00:00 |
http://papers.nips.cc/paper/7891-on-coresets-for-logistic-regression
|
http://papers.nips.cc/paper/7891-on-coresets-for-logistic-regression.pdf
|
on-coresets-for-logistic-regression-1
| null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Coresets",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Clustering** methods cluster a dataset so that similar datapoints are located in the same group. Below you can find a continuously updating list of clustering methods.",
"name": "Clustering",
"parent": null
},
"name": "Coresets",
"source_title": "Active Learning for Convolutional Neural Networks: A Core-Set Approach",
"source_url": "http://arxiv.org/abs/1708.00489v4"
}
] |
https://paperswithcode.com/paper/less-is-more-surgical-phase-recognition-with
|
1805.08569
| null | null |
Less is More: Surgical Phase Recognition with Less Annotations through Self-Supervised Pre-training of CNN-LSTM Networks
|
Real-time algorithms for automatically recognizing surgical phases are needed
to develop systems that can provide assistance to surgeons, enable better
management of operating room (OR) resources and consequently improve safety
within the OR. State-of-the-art surgical phase recognition algorithms using
laparoscopic videos are based on fully supervised training. This limits their
potential for widespread application, since creation of manual annotations is
an expensive process considering the numerous types of existing surgeries and
the vast amount of laparoscopic videos available. In this work, we propose a
new self-supervised pre-training approach based on the prediction of remaining
surgery duration (RSD) from laparoscopic videos. The RSD prediction task is
used to pre-train a convolutional neural network (CNN) and long short-term
memory (LSTM) network in an end-to-end manner. Our proposed approach utilizes
all available data and reduces the reliance on annotated data, thereby
facilitating the scaling up of surgical phase recognition algorithms to
different kinds of surgeries. Additionally, we present EndoN2N, an end-to-end
trained CNN-LSTM model for surgical phase recognition and evaluate the
performance of our approach on a dataset of 120 Cholecystectomy laparoscopic
videos (Cholec120). This work also presents the first systematic study of
self-supervised pre-training approaches to understand the amount of annotations
required for surgical phase recognition. Interestingly, the proposed RSD
pre-training approach leads to performance improvement even when all the
training data is manually annotated and outperforms the single pre-training
approach for surgical phase recognition presently published in the literature.
It is also observed that end-to-end training of CNN-LSTM networks boosts
surgical phase recognition performance.
| null |
http://arxiv.org/abs/1805.08569v1
|
http://arxiv.org/pdf/1805.08569v1.pdf
| null |
[
"Gaurav Yengera",
"Didier Mutter",
"Jacques Marescaux",
"Nicolas Padoy"
] |
[
"Management",
"Surgical phase recognition"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/global-navigation-using-predictable-and-slow
|
1805.08565
| null | null |
Global Navigation Using Predictable and Slow Feature Analysis in Multiroom Environments, Path Planning and Other Control Tasks
|
Extended Predictable Feature Analysis (PFAx) [Richthofer and Wiskott, 2017]
is an extension of PFA [Richthofer and Wiskott, 2015] that allows generating a
goal-directed control signal of an agent whose dynamics has previously been
learned during a training phase in an unsupervised manner. PFAx hardly requires
assumptions or prior knowledge of the agent's sensor or control mechanics, or
of the environment. It selects features from a high-dimensional input by
intrinsic predictability and organizes them into a reasonably low-dimensional
model.
While PFA obtains a well predictable model, PFAx yields a model ideally
suited for manipulations with predictable outcome. This allows for
goal-directed manipulation of an agent and thus for local navigation, i.e. for
reaching states where intermediate actions can be chosen by a permanent descent
of distance to the goal. The approach is limited when it comes to global
navigation, e.g. involving obstacles or multiple rooms.
In this article, we extend theoretical results from [Sprekeler and Wiskott,
2008], enabling PFAx to perform stable global navigation. So far, the most
widely exploited characteristic of Slow Feature Analysis (SFA) was that
slowness yields invariances. We focus on another fundamental characteristic of
slow signals: They tend to yield monotonicity and one significant property of
monotonicity is that local optimization is sufficient to find a global optimum.
We present an SFA-based algorithm that structures an environment such that
navigation tasks hierarchically decompose into subgoals. Each of these can be
efficiently achieved by PFAx, yielding an overall global solution of the task.
The algorithm needs to explore and process an environment only once and can
then perform all sorts of navigation tasks efficiently. We support this
algorithm by mathematical theory and apply it to different problems.
| null |
http://arxiv.org/abs/1805.08565v1
|
http://arxiv.org/pdf/1805.08565v1.pdf
| null |
[
"Stefan Richthofer",
"Laurenz Wiskott"
] |
[] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-2d-laser-rangefinder-scans-dataset-of
|
1805.08564
| null | null |
A 2D laser rangefinder scans dataset of standard EUR pallets
|
In the past few years, the technology of automated guided vehicles (AGVs) has
notably advanced. In particular, in the context of factory and warehouse
automation, different approaches have been presented for detecting and
localizing pallets inside warehouses and shop-floor environments. In a related
research paper [1], we show that an AGV can detect, localize, and track
pallets using machine learning techniques based only on the data of an on-board
2D laser rangefinder. Such sensor is very common in industrial scenarios due to
its simplicity and robustness, but it can only provide a limited amount of
data. Therefore, it has been neglected in the past in favor of more complex
solutions. In this paper, we release to the community the data we collected in
[1] for further research activities in the field of pallet localization and
tracking. The dataset comprises a collection of 565 2D scans from real-world
environments, which are divided into 340 samples where pallets are present, and
225 samples where they are not. The data have been manually labelled and are
provided in different formats.
|
The data have been manually labelled and are provided in different formats.
|
http://arxiv.org/abs/1805.08564v2
|
http://arxiv.org/pdf/1805.08564v2.pdf
| null |
[
"Ihab S. Mohamed",
"Alessio Capitanelli",
"Fulvio Mastrogiovanni",
"Stefano Rovetta",
"Renato Zaccaria"
] |
[] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/best-of-many-worlds-robust-model-selection
|
1805.08562
| null | null |
Best of many worlds: Robust model selection for online supervised learning
|
We introduce algorithms for online, full-information prediction that are
competitive with contextual tree experts of unknown complexity, in both
probabilistic and adversarial settings. We show that by incorporating a
probabilistic framework of structural risk minimization into existing adaptive
algorithms, we can robustly learn not only the presence of stochastic structure
when it exists (leading to constant as opposed to $\mathcal{O}(\sqrt{T})$
regret), but also the correct model order. We thus obtain regret bounds that
are competitive with the regret of an optimal algorithm that possesses strong
side information about both the complexity of the optimal contextual tree
expert and whether the process generating the data is stochastic or
adversarial. These are the first constructive guarantees on simultaneous
adaptivity to the model and the presence of stochasticity.
| null |
http://arxiv.org/abs/1805.08562v1
|
http://arxiv.org/pdf/1805.08562v1.pdf
| null |
[
"Vidya Muthukumar",
"Mitas Ray",
"Anant Sahai",
"Peter L. Bartlett"
] |
[
"Model Selection"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-discourse-aware-attention-model-for
|
1804.05685
| null | null |
A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents
|
Neural abstractive summarization models have led to promising results in
summarizing relatively short documents. We propose the first model for
abstractive summarization of single, longer-form documents (e.g., research
papers). Our approach consists of a new hierarchical encoder that models the
discourse structure of a document, and an attentive discourse-aware decoder to
generate the summary. Empirical results on two large-scale datasets of
scientific papers show that our model significantly outperforms
state-of-the-art models.
|
Neural abstractive summarization models have led to promising results in summarizing relatively short documents.
|
http://arxiv.org/abs/1804.05685v2
|
http://arxiv.org/pdf/1804.05685v2.pdf
|
NAACL 2018 6
|
[
"Arman Cohan",
"Franck Dernoncourt",
"Doo Soon Kim",
"Trung Bui",
"Seokhwan Kim",
"Walter Chang",
"Nazli Goharian"
] |
[
"Abstractive Text Summarization",
"Decoder",
"Text Summarization",
"Unsupervised Extractive Summarization"
] | 2018-04-16T00:00:00 |
https://aclanthology.org/N18-2097
|
https://aclanthology.org/N18-2097.pdf
|
a-discourse-aware-attention-model-for-1
| null |
[] |
https://paperswithcode.com/paper/a-recurrent-convolutional-neural-network
|
1805.08545
| null | null |
A Recurrent Convolutional Neural Network Approach for Sensorless Force Estimation in Robotic Surgery
|
Providing force feedback as relevant information in current Robot-Assisted
Minimally Invasive Surgery systems constitutes a technological challenge due to
the constraints imposed by the surgical environment. In this context,
Sensorless Force Estimation techniques represent a potential solution, enabling
to sense the interaction forces between the surgical instruments and
soft-tissues. Specifically, if visual feedback is available for observing
soft-tissues' deformation, this feedback can be used to estimate the forces
applied to these tissues. To this end, a force estimation model, based on
Convolutional Neural Networks and Long-Short Term Memory networks, is proposed
in this work. This model is designed to process both the spatiotemporal
information present in video sequences and the temporal structure of tool data
(the surgical tool-tip trajectory and its grasping status). A series of
analyses are carried out to reveal the advantages of the proposal and the
challenges that remain for real applications. This research work focuses on two
surgical task scenarios, referred to as pushing and pulling tissue. For these
two scenarios, different input data modalities and their effect on the force
estimation quality are investigated. These input data modalities are tool data,
video sequences and a combination of both. The results suggest that the force
estimation quality is better when both the tool data and video sequences are
processed by the neural network model. Moreover, this study reveals the need
for a loss function, designed to promote the modeling of smooth and sharp
details found in force signals. Finally, the results show that the modeling of
forces due to pulling tasks is more challenging than for the simplest pushing
actions.
| null |
http://arxiv.org/abs/1805.08545v1
|
http://arxiv.org/pdf/1805.08545v1.pdf
| null |
[
"Arturo Marban",
"Vignesh Srinivasan",
"Wojciech Samek",
"Josep Fernández",
"Alicia Casals"
] |
[] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/fast-motion-deblurring-for-feature-detection
|
1805.08542
| null | null |
Fast Motion Deblurring for Feature Detection and Matching Using Inertial Measurements
|
Many computer vision and image processing applications rely on local
features. It is well-known that motion blur decreases the performance of
traditional feature detectors and descriptors. We propose an inertial-based
deblurring method for improving the robustness of existing feature detectors
and descriptors against the motion blur. Unlike most deblurring algorithms, the
method can handle spatially-variant blur and rolling shutter distortion.
Furthermore, it is capable of running in real-time contrary to state-of-the-art
algorithms. The limitations of inertial-based blur estimation are taken into
account by validating the blur estimates using image data. The evaluation shows
that when the method is used with traditional feature detector and descriptor,
it increases the number of detected keypoints, provides higher repeatability
and improves the localization accuracy. We also demonstrate that such features
will lead to more accurate and complete reconstructions when used in the
application of 3D visual reconstruction.
| null |
http://arxiv.org/abs/1805.08542v1
|
http://arxiv.org/pdf/1805.08542v1.pdf
| null |
[
"Janne Mustaniemi",
"Juho Kannala",
"Simo Särkkä",
"Jiri Matas",
"Janne Heikkilä"
] |
[
"Deblurring"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/visual-object-tracking-the-initialisation
|
1805.01146
| null | null |
Visual Object Tracking: The Initialisation Problem
|
Model initialisation is an important component of object tracking. Tracking
algorithms are generally provided with the first frame of a sequence and a
bounding box (BB) indicating the location of the object. This BB may contain a
large number of background pixels in addition to the object and can lead to
parts-based tracking algorithms initialising their object models in background
regions of the BB. In this paper, we tackle this as a missing labels problem,
marking pixels sufficiently away from the BB as belonging to the background and
learning the labels of the unknown pixels. Three techniques, One-Class SVM
(OC-SVM), Sampled-Based Background Model (SBBM) (a novel background model based
on pixel samples), and Learning Based Digital Matting (LBDM), are adapted to
the problem. These are evaluated with leave-one-video-out cross-validation on
the VOT2016 tracking benchmark. Our evaluation shows both OC-SVMs and SBBM are
capable of providing a good level of segmentation accuracy but are too
parameter-dependent to be used in real-world scenarios. We show that LBDM
achieves significantly increased performance with parameters selected by cross
validation and we show that it is robust to parameter variation.
|
This BB may contain a large number of background pixels in addition to the object and can lead to parts-based tracking algorithms initialising their object models in background regions of the BB.
|
http://arxiv.org/abs/1805.01146v2
|
http://arxiv.org/pdf/1805.01146v2.pdf
| null |
[
"George De Ath",
"Richard Everson"
] |
[
"Image Matting",
"Missing Labels",
"Object",
"Object Tracking",
"Visual Object Tracking"
] | 2018-05-03T00:00:00 | null | null | null | null |
[] |