paper_url (string, 35–81) | arxiv_id (string, 6–35, ⌀) | nips_id (float64) | openreview_id (string, 9–93, ⌀) | title (string, 1–1.02k, ⌀) | abstract (string, 0–56.5k, ⌀) | short_abstract (string, 0–1.95k, ⌀) | url_abs (string, 16–996) | url_pdf (string, 16–996, ⌀) | proceeding (string, 7–1.03k, ⌀) | authors (list, 0–3.31k) | tasks (list, 0–147) | date (timestamp[ns], 1951-09-01 00:00:00 – 2222-12-22 00:00:00, ⌀) | conference_url_abs (string, 16–199, ⌀) | conference_url_pdf (string, 21–200, ⌀) | conference (string, 2–47, ⌀) | reproduces_paper (string, 22 classes) | methods (list, 0–7.5k)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://paperswithcode.com/paper/implicit-weight-uncertainty-in-neural
|
1711.01297
| null | null |
Implicit Weight Uncertainty in Neural Networks
|
Modern neural networks tend to be overconfident on unseen, noisy or
incorrectly labelled data and do not produce meaningful uncertainty measures.
Bayesian deep learning aims to address this shortcoming with variational
approximations (such as Bayes by Backprop or Multiplicative Normalising Flows).
However, current approaches have limitations regarding flexibility and
scalability. We introduce Bayes by Hypernet (BbH), a new method of variational
approximation that interprets hypernetworks as implicit distributions. It
naturally uses neural networks to model arbitrarily complex distributions and
scales to modern deep learning architectures. In our experiments, we
demonstrate that our method achieves competitive accuracies and predictive
uncertainties on MNIST and a CIFAR5 task, while being the most robust against
adversarial attacks.
|
Modern neural networks tend to be overconfident on unseen, noisy or incorrectly labelled data and do not produce meaningful uncertainty measures.
|
http://arxiv.org/abs/1711.01297v2
|
http://arxiv.org/pdf/1711.01297v2.pdf
| null |
[
"Nick Pawlowski",
"Andrew Brock",
"Matthew C. H. Lee",
"Martin Rajchl",
"Ben Glocker"
] |
[
"Deep Learning",
"Normalising Flows"
] | 2017-11-03T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/virtual-taobao-virtualizing-real-world-online
|
1805.10000
| null | null |
Virtual-Taobao: Virtualizing Real-world Online Retail Environment for Reinforcement Learning
|
Applying reinforcement learning in physical-world tasks is extremely
challenging. It is commonly infeasible to sample a large number of trials, as
required by current reinforcement learning methods, in a physical environment.
This paper reports our project on using reinforcement learning for better
commodity search in Taobao, one of the largest online retail platforms and
meanwhile a physical environment with a high sampling cost. Instead of training
reinforcement learning in Taobao directly, we present our approach: first we
build Virtual Taobao, a simulator learned from historical customer behavior
data through the proposed GAN-SD (GAN for Simulating Distributions) and MAIL
(multi-agent adversarial imitation learning), and then we train policies in
Virtual Taobao at no physical cost, where an ANC (Action Norm Constraint)
strategy is proposed to reduce over-fitting. In experiments, Virtual Taobao is
trained from hundreds of millions of customers' records, and its properties are
compared with the real environment. The results disclose that Virtual Taobao
faithfully recovers important properties of the real environment. We also show
that the policies trained in Virtual Taobao can have significantly superior
online performance to the traditional supervised approaches. We hope our work
could shed some light on reinforcement learning applications in complex
physical environments.
|
Applying reinforcement learning in physical-world tasks is extremely challenging.
|
http://arxiv.org/abs/1805.10000v1
|
http://arxiv.org/pdf/1805.10000v1.pdf
| null |
[
"Jing-Cheng Shi",
"Yang Yu",
"Qing Da",
"Shi-Yong Chen",
"An-Xiang Zeng"
] |
[
"Imitation Learning",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-05-25T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/distributed-optimization-strategy-for-multi
|
1806.06062
| null | null |
Distributed Optimization Strategy for Multi Area Economic Dispatch Based on Electro Search Optimization Algorithm
|
A newly adapted evolutionary algorithm is presented in this paper to solve the
non-smooth, non-convex and non-linear multi-area economic dispatch (MAED). MAED
comprises several areas, each with its own power generation and loads. By
transmitting the power from the area with lower cost to the area with higher
cost, the total cost function can be minimized greatly. The tie line capacity,
multi-fuel generator and the prohibited operating zones are satisfied in this
study. In addition, a new algorithm based on electro search optimization
algorithm (ESOA) is proposed to solve the MAED optimization problem with
considering all the constraints. In the ESOA algorithm, all probable moving
states for individuals to move away from the worst solution or towards the best
solution need to be considered. To evaluate the performance of the ESOA algorithm, the
algorithm is applied to both the original economic dispatch with 40 generator
systems and the multi-area economic dispatch with 3 different systems such as:
6 generators in 2 areas; and 40 generators in 4 areas. It can be concluded
that the ESOA algorithm is more accurate and robust in comparison with other
methods.
| null |
http://arxiv.org/abs/1806.06062v1
|
http://arxiv.org/pdf/1806.06062v1.pdf
| null |
[
"Mina Yazdandoost",
"Peyman Khazaei",
"Salar Saadatian",
"Rahim Kamali"
] |
[
"Distributed Optimization"
] | 2018-05-25T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/persistence-fisher-kernel-a-riemannian
|
1802.03569
| null | null |
Persistence Fisher Kernel: A Riemannian Manifold Kernel for Persistence Diagrams
|
Algebraic topology methods have recently played an important role for
statistical analysis with complicated geometric structured data such as shapes,
linked twist maps, and material data. Among them, \textit{persistent homology}
is a well-known tool to extract robust topological features, and outputs as
\textit{persistence diagrams} (PDs). However, PDs are point multi-sets which
cannot be used in machine learning algorithms for vector data. To deal with
this, an emerging approach is to use kernel methods, and an appropriate geometry
for PDs is an important factor in measuring the similarity of PDs. A popular
geometry for PDs is the \textit{Wasserstein metric}. However, Wasserstein
distance is not \textit{negative definite}. Thus, it is difficult to build
positive definite kernels upon the Wasserstein distance \textit{without
approximation}. In this work, we rely upon the alternative \textit{Fisher
information geometry} to propose a positive definite kernel for PDs
\textit{without approximation}, namely the Persistence Fisher (PF) kernel.
Then, we analyze the eigensystem of the integral operator induced by the proposed
kernel for kernel machines. Based on that, we derive generalization error
bounds via covering numbers and Rademacher averages for kernel machines with
the PF kernel. Additionally, we show some nice properties such as stability and
infinite divisibility for the proposed kernel. Furthermore, we also propose an
approximation of our kernel with bounded error that can be computed in time
linear in the number of points in the PDs. Through experiments with many
different tasks on various benchmark datasets, we illustrate that the PF kernel
compares favorably with other baseline kernels for PDs.
|
To deal with this, an emerging approach is to use kernel methods, and an appropriate geometry for PDs is an important factor in measuring the similarity of PDs.
|
http://arxiv.org/abs/1802.03569v5
|
http://arxiv.org/pdf/1802.03569v5.pdf
|
NeurIPS 2018 12
|
[
"Tam Le",
"Makoto Yamada"
] |
[] | 2018-02-10T00:00:00 |
http://papers.nips.cc/paper/8205-persistence-fisher-kernel-a-riemannian-manifold-kernel-for-persistence-diagrams
|
http://papers.nips.cc/paper/8205-persistence-fisher-kernel-a-riemannian-manifold-kernel-for-persistence-diagrams.pdf
|
persistence-fisher-kernel-a-riemannian-1
| null |
[
{
"code_snippet_url": "https://github.com/paultsw/nice_pytorch/blob/15cfc543fc3dc81ee70398b8dfc37b67269ede95/nice/layers.py#L109",
"description": "**Affine Coupling** is a method for implementing a normalizing flow (where we stack a sequence of invertible bijective transformation functions). Affine coupling is one of these bijective transformation functions. Specifically, it is an example of a reversible transformation where the forward function, the reverse function and the log-determinant are computationally efficient. For the forward function, we split the input dimension into two parts:\r\n\r\n$$ \\mathbf{x}\\_{a}, \\mathbf{x}\\_{b} = \\text{split}\\left(\\mathbf{x}\\right) $$\r\n\r\nThe second part stays the same $\\mathbf{x}\\_{b} = \\mathbf{y}\\_{b}$, while the first part $\\mathbf{x}\\_{a}$ undergoes an affine transformation, where the parameters for this transformation are learnt using the second part $\\mathbf{x}\\_{b}$ being put through a neural network. Together we have:\r\n\r\n$$ \\left(\\log{\\mathbf{s}, \\mathbf{t}}\\right) = \\text{NN}\\left(\\mathbf{x}\\_{b}\\right) $$\r\n\r\n$$ \\mathbf{s} = \\exp\\left(\\log{\\mathbf{s}}\\right) $$\r\n\r\n$$ \\mathbf{y}\\_{a} = \\mathbf{s} \\odot \\mathbf{x}\\_{a} + \\mathbf{t} $$\r\n\r\n$$ \\mathbf{y}\\_{b} = \\mathbf{x}\\_{b} $$\r\n\r\n$$ \\mathbf{y} = \\text{concat}\\left(\\mathbf{y}\\_{a}, \\mathbf{y}\\_{b}\\right) $$\r\n\r\nImage: [GLOW](https://paperswithcode.com/method/glow)",
"full_name": "Affine Coupling",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Bijective Transformations** are transformations that are bijective, i.e. they can be reversed. They are used within the context of normalizing flow models. Below you can find a continuously updating list of bijective transformation methods.",
"name": "Bijective Transformation",
"parent": null
},
"name": "Affine Coupling",
"source_title": "NICE: Non-linear Independent Components Estimation",
"source_url": "http://arxiv.org/abs/1410.8516v6"
},
{
"code_snippet_url": "https://github.com/ex4sperans/variational-inference-with-normalizing-flows/blob/922b569f851e02fa74700cd0754fe2ef5c1f3180/flow.py#L9",
"description": "**Normalizing Flows** are a method for constructing complex distributions by transforming a\r\nprobability density through a series of invertible mappings. By repeatedly applying the rule for change of variables, the initial density ‘flows’ through the sequence of invertible mappings. At the end of this sequence we obtain a valid probability distribution and hence this type of flow is referred to as a normalizing flow.\r\n\r\nIn the case of finite flows, the basic rule for the transformation of densities considers an invertible, smooth mapping $f : \\mathbb{R}^{d} \\rightarrow \\mathbb{R}^{d}$ with inverse $f^{-1} = g$, i.e. the composition $g \\cdot f\\left(z\\right) = z$. If we use this mapping to transform a random variable $z$ with distribution $q\\left(z\\right)$, the resulting random variable $z' = f\\left(z\\right)$ has a distribution:\r\n\r\n$$ q\\left(\\mathbf{z}'\\right) = q\\left(\\mathbf{z}\\right)\\bigl\\vert{\\text{det}}\\frac{\\delta{f}^{-1}}{\\delta{\\mathbf{z'}}}\\bigr\\vert = q\\left(\\mathbf{z}\\right)\\bigl\\vert{\\text{det}}\\frac{\\delta{f}}{\\delta{\\mathbf{z}}}\\bigr\\vert ^{-1} $$\r\n\f\r\nwhere the last equality can be seen by applying the chain rule (inverse function theorem) and is a property of Jacobians of invertible functions. We can construct arbitrarily complex densities by composing several simple maps and successively applying the above equation. 
The density $q\\_{K}\\left(\\mathbf{z}\\right)$ obtained by successively transforming a random variable $z\\_{0}$ with distribution $q\\_{0}$ through a chain of $K$ transformations $f\\_{k}$ is:\r\n\r\n$$ z\\_{K} = f\\_{K} \\cdot \\dots \\cdot f\\_{2} \\cdot f\\_{1}\\left(z\\_{0}\\right) $$\r\n\r\n$$ \\ln{q}\\_{K}\\left(z\\_{K}\\right) = \\ln{q}\\_{0}\\left(z\\_{0}\\right) − \\sum^{K}\\_{k=1}\\ln\\vert\\det\\frac{\\delta{f\\_{k}}}{\\delta{\\mathbf{z\\_{k-1}}}}\\vert $$\r\n\f\r\nThe path traversed by the random variables $z\\_{k} = f\\_{k}\\left(z\\_{k-1}\\right)$ with initial distribution $q\\_{0}\\left(z\\_{0}\\right)$ is called the flow and the path formed by the successive distributions $q\\_{k}$ is a normalizing flow.",
"full_name": "Normalizing Flows",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Distribution Approximation** methods aim to approximate a complex distribution. Below you can find a continuously updating list of distribution approximation methods.",
"name": "Distribution Approximation",
"parent": null
},
"name": "Normalizing Flows",
"source_title": "Variational Inference with Normalizing Flows",
"source_url": "http://arxiv.org/abs/1505.05770v6"
}
] |
https://paperswithcode.com/paper/graph2seq-graph-to-sequence-learning-with
|
1804.00823
| null |
SkeXehR9t7
|
Graph2Seq: Graph to Sequence Learning with Attention-based Neural Networks
|
The celebrated Sequence to Sequence learning (Seq2Seq) technique and its
numerous variants achieve excellent performance on many tasks. However, many
machine learning tasks have inputs naturally represented as graphs; existing
Seq2Seq models face a significant challenge in achieving accurate conversion
from graph form to the appropriate sequence. To address this challenge, we
introduce a novel general end-to-end graph-to-sequence neural encoder-decoder
model that maps an input graph to a sequence of vectors and uses an
attention-based LSTM method to decode the target sequence from these vectors.
Our method first generates the node and graph embeddings using an improved
graph-based neural network with a novel aggregation strategy to incorporate
edge direction information in the node embeddings. We further introduce an
attention mechanism that aligns node embeddings and the decoding sequence to
better cope with large graphs. Experimental results on bAbI, Shortest Path, and
Natural Language Generation tasks demonstrate that our model achieves
state-of-the-art performance and significantly outperforms existing graph
neural networks, Seq2Seq, and Tree2Seq models; using the proposed
bi-directional node embedding aggregation strategy, the model can converge
rapidly to the optimal performance.
|
Our method first generates the node and graph embeddings using an improved graph-based neural network with a novel aggregation strategy to incorporate edge direction information in the node embeddings.
|
http://arxiv.org/abs/1804.00823v4
|
http://arxiv.org/pdf/1804.00823v4.pdf
|
ICLR 2019 5
|
[
"Kun Xu",
"Lingfei Wu",
"Zhiguo Wang",
"Yansong Feng",
"Michael Witbrock",
"Vadim Sheinin"
] |
[
"Decoder",
"Graph-to-Sequence",
"SQL-to-Text",
"Text Generation"
] | 2018-04-03T00:00:00 |
https://openreview.net/forum?id=SkeXehR9t7
|
https://openreview.net/pdf?id=SkeXehR9t7
|
graph2seq-graph-to-sequence-learning-with-1
| null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Seq2Seq**, or **Sequence To Sequence**, is a model used in sequence prediction tasks, such as language modelling and machine translation. The idea is to use one [LSTM](https://paperswithcode.com/method/lstm), the *encoder*, to read the input sequence one timestep at a time, to obtain a large fixed dimensional vector representation (a context vector), and then to use another LSTM, the *decoder*, to extract the output sequence\r\nfrom that vector. The second LSTM is essentially a recurrent neural network language model except that it is conditioned on the input sequence.\r\n\r\n(Note that this page refers to the original seq2seq not general sequence-to-sequence models)",
"full_name": "Sequence to Sequence",
"introduced_year": 2000,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Sequence To Sequence Models",
"parent": null
},
"name": "Seq2Seq",
"source_title": "Sequence to Sequence Learning with Neural Networks",
"source_url": "http://arxiv.org/abs/1409.3215v3"
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/lifelong-domain-word-embedding-via-meta
|
1805.09991
| null | null |
Lifelong Domain Word Embedding via Meta-Learning
|
Learning high-quality domain word embeddings is important for achieving good
performance in many NLP tasks. General-purpose embeddings trained on
large-scale corpora are often sub-optimal for domain-specific applications.
However, domain-specific tasks often do not have large in-domain corpora for
training high-quality domain embeddings. In this paper, we propose a novel
lifelong learning setting for domain embedding. That is, when performing the
new domain embedding, the system has seen many past domains, and it tries to
expand the new in-domain corpus by exploiting the corpora from the past domains
via meta-learning. The proposed meta-learner characterizes the similarities of
the contexts of the same word in many domain corpora, which helps retrieve
relevant data from the past domains to expand the new domain corpus.
Experimental results show that domain embeddings produced from such a process
improve the performance of the downstream tasks.
|
Learning high-quality domain word embeddings is important for achieving good performance in many NLP tasks.
|
http://arxiv.org/abs/1805.09991v1
|
http://arxiv.org/pdf/1805.09991v1.pdf
| null |
[
"Hu Xu",
"Bing Liu",
"Lei Shu",
"Philip S. Yu"
] |
[
"Lifelong learning",
"Meta-Learning",
"Word Embeddings"
] | 2018-05-25T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/d2ke-from-distance-to-kernel-and-embedding
|
1802.04956
| null | null |
D2KE: From Distance to Kernel and Embedding
|
For many machine learning problem settings, particularly with structured
inputs such as sequences or sets of objects, a distance measure between inputs
can be specified more naturally than a feature representation. However, most
standard machine learning models are designed for inputs with a vector feature
representation. In this work, we consider the estimation of a function
$f:\mathcal{X} \rightarrow \mathbb{R}$ based solely on a dissimilarity measure
$d:\mathcal{X}\times\mathcal{X} \rightarrow \mathbb{R}$ between inputs. In particular,
we propose a general framework to derive a family of \emph{positive definite
kernels} from a given dissimilarity measure, which subsumes the widely-used
\emph{representative-set method} as a special case, and relates to the
well-known \emph{distance substitution kernel} in a limiting case. We show that
functions in the corresponding Reproducing Kernel Hilbert Space (RKHS) are
Lipschitz-continuous w.r.t. the given distance metric. We provide a tractable
algorithm to estimate a function from this RKHS, and show that it enjoys better
generalizability than Nearest-Neighbor estimates. Our approach draws from the
literature of Random Features, but instead of deriving feature maps from an
existing kernel, we construct novel kernels from a random feature map that we
specify given the distance measure. We conduct classification experiments with
such disparate domains as strings, time series, and sets of vectors, where our
proposed framework compares favorably to existing distance-based learning
methods such as $k$-nearest-neighbors, distance-substitution kernels,
pseudo-Euclidean embedding, and the representative-set method.
| null |
http://arxiv.org/abs/1802.04956v4
|
http://arxiv.org/pdf/1802.04956v4.pdf
| null |
[
"Lingfei Wu",
"Ian En-Hsu Yen",
"Fangli Xu",
"Pradeep Ravikumar",
"Michael Witbrock"
] |
[
"Time Series Analysis"
] | 2018-02-14T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/beyond-textures-learning-from-multi-domain
|
1805.09987
| null | null |
Learning from Multi-domain Artistic Images for Arbitrary Style Transfer
|
We propose a fast feed-forward network for arbitrary style transfer, which
can generate stylized images for previously unseen content and style image
pairs. Besides the traditional content and style representation based on deep
features and statistics for textures, we use adversarial networks to regularize
the generation of stylized images. Our adversarial network learns the intrinsic
property of image styles from large-scale multi-domain artistic images. The
adversarial training is challenging because both the input and output of our
generator are diverse multi-domain images. We use a conditional generator that
stylizes content by shifting the statistics of deep features, and a conditional
discriminator based on the coarse category of styles. Moreover, we propose a
mask module to spatially decide the stylization level and stabilize adversarial
training by avoiding mode collapse. As a side effect, our trained discriminator
can be applied to rank and select representative stylized images. We
qualitatively and quantitatively evaluate the proposed method, and compare with
recent style transfer methods. We release our code and model at
https://github.com/nightldj/behance_release.
|
We propose a fast feed-forward network for arbitrary style transfer, which can generate stylized images for previously unseen content and style image pairs.
|
http://arxiv.org/abs/1805.09987v2
|
http://arxiv.org/pdf/1805.09987v2.pdf
| null |
[
"Zheng Xu",
"Michael Wilber",
"Chen Fang",
"Aaron Hertzmann",
"Hailin Jin"
] |
[
"Style Transfer"
] | 2018-05-25T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/learning-anytime-predictions-in-neural
|
1708.06832
| null | null |
Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing
|
This work considers the trade-off between accuracy and test-time
computational cost of deep neural networks (DNNs) via \emph{anytime}
predictions from auxiliary predictions. Specifically, we optimize auxiliary
losses jointly in an \emph{adaptive} weighted sum, where the weights are
inversely proportional to the average of each loss. Intuitively, this balances the
losses to have the same scale. We demonstrate theoretical considerations that
motivate this approach from multiple viewpoints, including connecting it to
optimizing the geometric mean of the expectation of each loss, an objective
that ignores the scale of losses. Experimentally, the adaptive weights induce
more competitive anytime predictions on multiple recognition data-sets and
models than non-adaptive approaches including weighing all losses equally. In
particular, anytime neural networks (ANNs) can achieve the same accuracy faster
using adaptive weights on a small network than using static constant weights on
a large one. For problems with high performance saturation, we also show a
sequence of exponentially deepening ANNs can achieve near-optimal anytime
results at any budget, at the cost of a constant fraction of extra computation.
| null |
http://arxiv.org/abs/1708.06832v3
|
http://arxiv.org/pdf/1708.06832v3.pdf
| null |
[
"Hanzhang Hu",
"Debadeepta Dey",
"Martial Hebert",
"J. Andrew Bagnell"
] |
[] | 2017-08-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/coordinated-multi-agent-imitation-learning
|
1703.03121
| null | null |
Coordinated Multi-Agent Imitation Learning
|
We study the problem of imitation learning from demonstrations of multiple
coordinating agents. One key challenge in this setting is that learning a good
model of coordination can be difficult, since coordination is often implicit in
the demonstrations and must be inferred as a latent variable. We propose a
joint approach that simultaneously learns a latent coordination model along
with the individual policies. In particular, our method integrates unsupervised
structure learning with conventional imitation learning. We illustrate the
power of our approach on a difficult problem of learning multiple policies for
fine-grained behavior modeling in team sports, where different players occupy
different roles in the coordinated team strategy. We show that having a
coordination model to infer the roles of players yields substantially improved
imitation loss compared to conventional baselines.
| null |
http://arxiv.org/abs/1703.03121v2
|
http://arxiv.org/pdf/1703.03121v2.pdf
|
ICML 2017 8
|
[
"Hoang M. Le",
"Yisong Yue",
"Peter Carr",
"Patrick Lucey"
] |
[
"Imitation Learning"
] | 2017-03-09T00:00:00 |
https://icml.cc/Conferences/2017/Schedule?showEvent=621
|
http://proceedings.mlr.press/v70/le17a/le17a.pdf
|
coordinated-multi-agent-imitation-learning-1
| null |
[] |
https://paperswithcode.com/paper/deep-graph-translation
|
1805.09980
| null |
SJz6MnC5YQ
|
Deep Graph Translation
|
Inspired by the tremendous success of deep generative models at generating
continuous data such as images and audio, in recent years a few deep graph
generative models have been proposed to generate discrete data such as graphs.
These are typically unconditioned generative models, which have no control over
the modes of the graphs being generated. In contrast, in this paper we are
interested in a new problem named \emph{Deep Graph Translation}: given an input
graph, we want to infer a target graph based on their underlying (both global
and local) translation mapping. Graph translation could be highly desirable in
many applications such as disaster management and rare event forecasting, where
the rare and abnormal graph patterns (e.g., traffic congestions and terrorism
events) will be inferred prior to their occurrence even without historical data
on the abnormal patterns for this graph (e.g., a road network or human contact
network). To achieve this, we propose a novel Graph-Translation Generative
Adversarial Network (GT-GAN) which learns a graph translator from input
to target graphs. GT-GAN consists of a graph translator, in which we propose new
graph convolution and deconvolution layers to learn the global and local
translation mapping. A new conditional graph discriminator has also been
proposed to classify target graphs by conditioning on input graphs. Extensive
experiments on multiple synthetic and real-world datasets demonstrate the
effectiveness and scalability of the proposed GT-GAN.
|
To achieve this, we propose a novel Graph-Translation Generative Adversarial Network (GT-GAN) which learns a graph translator from input to target graphs.
|
http://arxiv.org/abs/1805.09980v2
|
http://arxiv.org/pdf/1805.09980v2.pdf
| null |
[
"Xiaojie Guo",
"Lingfei Wu",
"Liang Zhao"
] |
[
"Management",
"Translation"
] | 2018-05-25T00:00:00 |
https://openreview.net/forum?id=SJz6MnC5YQ
|
https://openreview.net/pdf?id=SJz6MnC5YQ
| null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] |
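The Convolution method entry above describes a kernel sliding over the input, performing element-wise multiplication and summing the results. As a purely illustrative sketch of that operation (not code from any of the papers in this file), a minimal 1-D version is:

```python
def conv1d(signal, kernel):
    """Valid 1-D convolution (cross-correlation form): slide the kernel
    over the signal, multiply element-wise, and sum each window."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

# A [1, 0, -1] kernel acts as a simple difference (edge) detector.
out = conv1d([1, 2, 3, 4], [1, 0, -1])  # → [-2, -2]
```

The same sliding-window idea, with learned kernel weights and 2-D (or graph-structured) inputs, underlies the convolution layers referenced throughout this file.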
https://paperswithcode.com/paper/scalable-spectral-clustering-using-random
|
1805.11048
| null | null |
Scalable Spectral Clustering Using Random Binning Features
|
Spectral clustering is one of the most effective clustering approaches that capture hidden cluster structures in the data. However, it does not scale well to large-scale problems due to its quadratic complexity in constructing similarity graphs and computing subsequent eigendecomposition. Although a number of methods have been proposed to accelerate spectral clustering, most of them compromise considerable information loss in the original data for reducing computational bottlenecks. In this paper, we present a novel scalable spectral clustering method using Random Binning features (RB) to simultaneously accelerate both similarity graph construction and the eigendecomposition. Specifically, we implicitly approximate the graph similarity (kernel) matrix by the inner product of a large sparse feature matrix generated by RB. Then we introduce a state-of-the-art SVD solver to effectively compute eigenvectors of this large matrix for spectral clustering. Using these two building blocks, we reduce the computational cost from quadratic to linear in the number of data points while achieving similar accuracy. Our theoretical analysis shows that spectral clustering via RB converges faster to the exact spectral clustering than the standard Random Feature approximation. Extensive experiments on 8 benchmarks show that the proposed method either outperforms or matches the state-of-the-art methods in both accuracy and runtime. Moreover, our method exhibits linear scalability in both the number of data samples and the number of RB features.
|
Moreover, our method exhibits linear scalability in both the number of data samples and the number of RB features.
|
https://arxiv.org/abs/1805.11048v3
|
https://arxiv.org/pdf/1805.11048v3.pdf
| null |
[
"Lingfei Wu",
"Pin-Yu Chen",
"Ian En-Hsu Yen",
"Fangli Xu",
"Yinglong Xia",
"Charu Aggarwal"
] |
[
"Clustering",
"graph construction",
"Graph Similarity",
"Image/Document Clustering"
] | 2018-05-25T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "Spectral clustering has attracted increasing attention due to its promising ability to deal with nonlinearly separable datasets [15], [16]. In spectral clustering, the spectrum of the graph Laplacian is used to reveal the cluster structure. The spectral clustering algorithm mainly consists of two steps: 1) construct the low-dimensional embedded representation of the data based on the eigenvectors of the graph Laplacian; 2) apply k-means on the constructed low-dimensional data to obtain the clustering result.",
"full_name": "Spectral Clustering",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Clustering** methods cluster a dataset so that similar datapoints are located in the same group. Below you can find a continuously updating list of clustering methods.",
"name": "Clustering",
"parent": null
},
"name": "Spectral Clustering",
"source_title": "A Tutorial on Spectral Clustering",
"source_url": "http://arxiv.org/abs/0711.0189v1"
}
] |
https://paperswithcode.com/paper/sosa-a-lightweight-ontology-for-sensors
|
1805.09979
| null | null |
SOSA: A Lightweight Ontology for Sensors, Observations, Samples, and Actuators
|
The Sensor, Observation, Sample, and Actuator (SOSA) ontology provides a
formal but lightweight general-purpose specification for modeling the
interaction between the entities involved in the acts of observation,
actuation, and sampling. SOSA is the result of rethinking the W3C-XG Semantic
Sensor Network (SSN) ontology based on changes in scope and target audience,
technical developments, and lessons learned over the past years. SOSA also acts
as a replacement of SSN's Stimulus Sensor Observation (SSO) core. It has been
developed by the first joint working group of the Open Geospatial Consortium
(OGC) and the World Wide Web Consortium (W3C) on Spatial Data on the
Web. In this work, we motivate the need for SOSA, provide an overview of the
main classes and properties, and briefly discuss its integration with the new
release of the SSN ontology as well as various other alignments to
specifications such as OGC's Observations and Measurements (O&M),
Dolce-Ultralite (DUL), and other prominent ontologies. We will also touch upon
common modeling problems and application areas related to publishing and
searching observation, sampling, and actuation data on the Web. The SOSA
ontology and standard can be accessed at
https://www.w3.org/TR/vocab-ssn/.
| null |
http://arxiv.org/abs/1805.09979v2
|
http://arxiv.org/pdf/1805.09979v2.pdf
| null |
[
"Krzysztof Janowicz",
"Armin Haller",
"Simon J D Cox",
"Danh Le Phuoc",
"Maxime Lefrancois"
] |
[] | 2018-05-25T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/distributed-cartesian-power-graph
|
1805.09978
| null | null |
Distributed Cartesian Power Graph Segmentation for Graphon Estimation
|
We study an extension of total variation denoising over images to
Cartesian power graphs and its applications to estimating non-parametric
network models. The power graph fused lasso (PGFL) segments a matrix by
exploiting a known graphical structure, $G$, over the rows and columns. Our
main result shows that for any connected graph, under sub-Gaussian noise, the
PGFL achieves the same mean-square error rate as 2D total variation denoising
for signals of bounded variation. We study the use of the PGFL for denoising an
observed network $H$, where we learn the graph $G$ as the $K$-nearest
neighborhood graph of an estimated metric over the vertices. We provide
theoretical and empirical results for estimating graphons, a non-parametric
exchangeable network model, and compare to the state of the art graphon
estimation methods.
| null |
http://arxiv.org/abs/1805.09978v1
|
http://arxiv.org/pdf/1805.09978v1.pdf
| null |
[
"Shitong Wei",
"Oscar Hernan Madrid-Padilla",
"James Sharpnack"
] |
[
"Denoising",
"Graphon Estimation"
] | 2018-05-25T00:00:00 | null | null | null | null |
[] |
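The PGFL record above learns the graph $G$ as the $K$-nearest neighborhood graph of an estimated metric over the vertices. As a minimal, purely illustrative sketch (1-D points under the absolute-value metric, hypothetical helper names, not the authors' code), k-NN graph construction looks like:

```python
def knn_graph(points, k):
    """Undirected edge set of the k-nearest-neighbour graph over 1-D points."""
    edges = set()
    for i, p in enumerate(points):
        # Sort the other points by distance to p and connect to the k closest.
        nearest = sorted(
            (abs(p - q), j) for j, q in enumerate(points) if j != i
        )[:k]
        for _, j in nearest:
            edges.add((min(i, j), max(i, j)))
    return edges

edges = knn_graph([0.0, 1.0, 10.0], k=1)  # → {(0, 1), (1, 2)}
```

In the paper's setting the metric is estimated from the observed network $H$ rather than given, but the graph-construction step has this shape.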
https://paperswithcode.com/paper/visceral-machines-reinforcement-learning-with
|
1805.09975
| null | null |
Visceral Machines: Risk-Aversion in Reinforcement Learning with Intrinsic Physiological Rewards
|
As people learn to navigate the world, autonomic nervous system (e.g., "fight
or flight") responses provide intrinsic feedback about the potential
consequence of action choices (e.g., becoming nervous when close to a cliff
edge or driving fast around a bend.) Physiological changes are correlated with
these biological preparations to protect one-self from danger. We present a
novel approach to reinforcement learning that leverages a task-independent
intrinsic reward function trained on peripheral pulse measurements that are
correlated with human autonomic nervous system responses. Our hypothesis is
that such reward functions can circumvent the challenges associated with sparse
and skewed rewards in reinforcement learning settings and can help improve
sample efficiency. We test this in a simulated driving environment and show
that it can increase the speed of learning and reduce the number of collisions
during the learning stage.
|
As people learn to navigate the world, autonomic nervous system (e.g., "fight or flight") responses provide intrinsic feedback about the potential consequence of action choices (e.g., becoming nervous when close to a cliff edge or driving fast around a bend.)
|
http://arxiv.org/abs/1805.09975v2
|
http://arxiv.org/pdf/1805.09975v2.pdf
| null |
[
"Daniel McDuff",
"Ashish Kapoor"
] |
[
"Navigate",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-05-25T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/part-based-visual-tracking-via-structural
|
1805.09971
| null | null |
Part-based Visual Tracking via Structural Support Correlation Filter
|
Recently, part-based and support vector machine (SVM) based trackers have
shown favorable performance. Nonetheless, their time-consuming online training
and updating processes limit their real-time applications. In order to better
deal with the partial occlusion issue and improve their efficiency, we propose
a novel part-based structural support correlation filter tracking method, which
combines the strong discriminative ability of SVMs with the insensitivity of
part-based tracking methods to partial occlusion.
Then, our proposed model can learn the support correlation filter of each part
jointly by a star structure model, which preserves the spatial layout structure
among parts and tolerates outlier parts. In addition, to further mitigate
drift away from the object, we introduce inter-frame consistencies of
local parts into our model. Finally, in our model, we accurately estimate the
scale changes of object by the relative distance change among reliable parts.
The extensive empirical evaluations on three benchmark datasets: OTB2015,
TempleColor128 and VOT2015 demonstrate that the proposed method performs
superiorly against several state-of-the-art trackers in terms of tracking
accuracy, speed and robustness.
|
In addition, to further mitigate drift away from the object, we introduce inter-frame consistencies of local parts into our model.
|
http://arxiv.org/abs/1805.09971v1
|
http://arxiv.org/pdf/1805.09971v1.pdf
| null |
[
"Zhangjian Ji",
"Kai Feng",
"Yuhua Qian"
] |
[
"Visual Tracking"
] | 2018-05-25T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **Support Vector Machine**, or **SVM**, is a non-parametric supervised learning model. For non-linear classification and regression, they utilise the kernel trick to map inputs to high-dimensional feature spaces. SVMs construct a hyper-plane or set of hyper-planes in a high or infinite dimensional space, which can be used for classification, regression or other tasks. Intuitively, a good separation is achieved by the hyper-plane that has the largest distance to the nearest training data points of any class (so-called functional margin), since in general the larger the margin the lower the generalization error of the classifier. The figure to the right shows the decision function for a linearly separable problem, with three samples on the margin boundaries, called “support vectors”. \r\n\r\nSource: [scikit-learn](https://scikit-learn.org/stable/modules/svm.html)",
"full_name": "Support Vector Machine",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "SVM",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/towards-more-efficient-stochastic
|
1805.09969
| null | null |
Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication
|
Recently, the decentralized optimization problem is attracting growing
attention. Most existing methods are deterministic with high per-iteration cost
and have a convergence rate quadratically depending on the problem condition
number. Besides, the dense communication is necessary to ensure the convergence
even if the dataset is sparse. In this paper, we generalize the decentralized
optimization problem to a monotone operator root finding problem, and propose a
stochastic algorithm named DSBA that (i) converges geometrically with a rate
linearly depending on the problem condition number, and (ii) can be implemented
using sparse communication only. Additionally, DSBA handles learning problems
like AUC-maximization which cannot be tackled efficiently in the decentralized
setting. Experiments on convex minimization and AUC-maximization validate the
efficiency of our method.
| null |
http://arxiv.org/abs/1805.09969v1
|
http://arxiv.org/pdf/1805.09969v1.pdf
|
ICML 2018 7
|
[
"Zebang Shen",
"Aryan Mokhtari",
"Tengfei Zhou",
"Peilin Zhao",
"Hui Qian"
] |
[] | 2018-05-25T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2229
|
http://proceedings.mlr.press/v80/shen18a/shen18a.pdf
|
towards-more-efficient-stochastic-1
| null |
[] |
https://paperswithcode.com/paper/cooking-state-recognition-from-images-using
|
1805.09967
| null | null |
Cooking State Recognition from Images Using Inception Architecture
|
A kitchen robot needs a proper understanding of the cooking environment to
carry out cooking activities, yet the detection of object states has not been
studied as thoroughly as object detection itself. In this paper, we propose a
deep learning approach to identify different cooking states from images for a
kitchen robot. In particular, we investigate the performance of the Inception
architecture and propose a modified Inception-based architecture to classify
different cooking states. The model is analyzed thoroughly with respect to
different layers and optimizers. Experimental results on a cooking dataset
demonstrate that the proposed model is a potential solution to the cooking
state recognition problem.
| null |
http://arxiv.org/abs/1805.09967v2
|
http://arxiv.org/pdf/1805.09967v2.pdf
| null |
[
"Md Sirajus Salekin",
"Ahmad Babaeian Jelodar",
"Rafsanjany Kushol"
] |
[
"object-detection",
"Object Detection"
] | 2018-05-25T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/myopic-bayesian-design-of-experiments-via
|
1805.09964
| null | null |
Myopic Bayesian Design of Experiments via Posterior Sampling and Probabilistic Programming
|
We design a new myopic strategy for a wide class of sequential design of
experiment (DOE) problems, where the goal is to collect data in order to
fulfil a certain problem-specific goal. Our approach, Myopic Posterior Sampling
(MPS), is inspired by the classical posterior (Thompson) sampling algorithm for
multi-armed bandits and leverages the flexibility of probabilistic programming
and approximate Bayesian inference to address a broad set of problems.
Empirically, this general-purpose strategy is competitive with more specialised
methods in a wide array of DOE tasks, and more importantly, enables addressing
complex DOE goals where no existing method seems applicable. On the theoretical
side, we leverage ideas from adaptive submodularity and reinforcement learning
to derive conditions under which MPS achieves sublinear regret against natural
benchmark policies.
|
We design a new myopic strategy for a wide class of sequential design of experiment (DOE) problems, where the goal is to collect data in order to fulfil a certain problem-specific goal.
|
http://arxiv.org/abs/1805.09964v1
|
http://arxiv.org/pdf/1805.09964v1.pdf
| null |
[
"Kirthevasan Kandasamy",
"Willie Neiswanger",
"Reed Zhang",
"Akshay Krishnamurthy",
"Jeff Schneider",
"Barnabas Poczos"
] |
[
"Bayesian Inference",
"Multi-Armed Bandits",
"Probabilistic Programming",
"Reinforcement Learning",
"Reinforcement Learning (RL)",
"Thompson Sampling"
] | 2018-05-25T00:00:00 | null | null | null | null |
[] |
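The MPS record above builds on classical posterior (Thompson) sampling for multi-armed bandits. As background, a minimal Beta-Bernoulli Thompson sampling loop for a two-armed bandit can be sketched as follows; this is an illustration of the classical algorithm only, not the MPS method or the authors' code:

```python
import random

def thompson_sampling(true_probs, n_rounds, seed=0):
    """Beta-Bernoulli Thompson sampling: keep a Beta(wins+1, losses+1)
    posterior per arm, sample a plausible reward rate from each posterior,
    and pull the arm with the largest draw."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    k = len(true_probs)
    wins, losses, pulls = [0] * k, [0] * k, [0] * k
    for _ in range(n_rounds):
        draws = [rng.betavariate(wins[i] + 1, losses[i] + 1) for i in range(k)]
        arm = max(range(k), key=draws.__getitem__)
        reward = rng.random() < true_probs[arm]  # Bernoulli reward
        wins[arm] += reward
        losses[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

pulls = thompson_sampling([0.3, 0.7], n_rounds=1000)
```

After enough rounds the posterior concentrates and the better arm (here arm 1) is pulled far more often, which is the exploration/exploitation behavior MPS generalizes to broader DOE goals.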
https://paperswithcode.com/paper/phrase-table-as-recommendation-memory-for
|
1805.09960
| null | null |
Phrase Table as Recommendation Memory for Neural Machine Translation
|
Neural Machine Translation (NMT) has recently drawn much attention due to its
promising translation performance. However, several studies indicate
that NMT often generates fluent but unfaithful translations. In this paper, we
propose a method to alleviate this problem by using a phrase table as
recommendation memory. The main idea is to add bonus to words worthy of
recommendation, so that NMT can make correct predictions. Specifically, we
first derive a prefix tree to accommodate all the candidate target phrases by
searching the phrase translation table according to the source sentence. Then,
we construct a recommendation word set by matching between candidate target
phrases and previously translated target words by NMT. After that, we determine
the specific bonus value for each recommendable word by using the attention
vector and phrase translation probability. Finally, we integrate this bonus
value into NMT to improve the translation results. The extensive experiments
demonstrate that the proposed methods obtain remarkable improvements over the
strong attention-based NMT.
| null |
http://arxiv.org/abs/1805.09960v1
|
http://arxiv.org/pdf/1805.09960v1.pdf
| null |
[
"Yang Zhao",
"Yining Wang",
"Jiajun Zhang",
"Cheng-qing Zong"
] |
[
"Machine Translation",
"NMT",
"Sentence",
"Translation"
] | 2018-05-25T00:00:00 | null | null | null | null |
[] |
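The recommendation-memory record above first derives a prefix tree to accommodate the candidate target phrases found in the phrase table, so that the next recommendable words can be looked up from the words translated so far. A minimal trie supporting that lookup might be sketched as follows (hypothetical helper names, not the authors' implementation):

```python
class PrefixTree:
    """Minimal trie over word sequences (candidate target phrases)."""

    def __init__(self):
        self.children = {}   # word -> PrefixTree
        self.is_end = False  # True if a stored phrase ends here

    def insert(self, phrase):
        node = self
        for word in phrase:
            node = node.children.setdefault(word, PrefixTree())
        node.is_end = True

    def next_words(self, prefix):
        """Words that can follow `prefix` in any stored phrase."""
        node = self
        for word in prefix:
            if word not in node.children:
                return set()
            node = node.children[word]
        return set(node.children)

trie = PrefixTree()
trie.insert(["new", "york", "city"])
trie.insert(["new", "jersey"])
```

Matching previously translated words against such a tree yields the recommendation word set to which the bonus values are then assigned.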
https://paperswithcode.com/paper/a-sentiment-analysis-of-breast-cancer
|
1805.09959
| null | null |
A Sentiment Analysis of Breast Cancer Treatment Experiences and Healthcare Perceptions Across Twitter
|
Background: Social media has the capacity to afford the healthcare industry
with valuable feedback from patients who reveal and express their medical
decision-making process, as well as self-reported quality of life indicators
both during and post treatment. In prior work [Crannell et al.], we have
studied an active cancer patient population on Twitter and compiled a set of
tweets describing their experience with this disease. We refer to these online
public testimonies as "Invisible Patient Reported Outcomes" (iPROs), because
they carry relevant indicators, yet are difficult to capture by conventional
means of self-report. Methods: Our present study aims to identify tweets
related to the patient experience as an additional informative tool for
monitoring public health. Using Twitter's public streaming API, we compiled
over 5.3 million "breast cancer" related tweets spanning September 2016 until
mid December 2017. We combined supervised machine learning methods with natural
language processing to sift tweets relevant to breast cancer patient
experiences. We analyzed a sample of 845 breast cancer patient and survivor
accounts, responsible for over 48,000 posts. We investigated tweet content with
a hedonometric sentiment analysis to quantitatively extract emotionally charged
topics. Results: We found that positive experiences were shared regarding
patient treatment, raising support, and spreading awareness. Further
discussions related to healthcare were prevalent and largely negative focusing
on fear of political legislation that could result in loss of coverage.
Conclusions: Social media can provide a positive outlet for patients to discuss
their needs and concerns regarding their healthcare coverage and treatment
needs. Capturing iPROs from online communication can help inform healthcare
professionals and lead to more connected and personalized treatment regimens.
| null |
http://arxiv.org/abs/1805.09959v2
|
http://arxiv.org/pdf/1805.09959v2.pdf
| null |
[
"Eric M. Clark",
"Ted James",
"Chris A. Jones",
"Amulya Alapati",
"Promise Ukandu",
"Christopher M. Danforth",
"Peter Sheridan Dodds"
] |
[
"Decision Making",
"Sentiment Analysis"
] | 2018-05-25T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/deep-functional-dictionaries-learning
|
1805.09957
| null | null |
Deep Functional Dictionaries: Learning Consistent Semantic Structures on 3D Models from Functions
|
Various 3D semantic attributes such as segmentation masks, geometric
features, keypoints, and materials can be encoded as per-point probe functions
on 3D geometries. Given a collection of related 3D shapes, we consider how to
jointly analyze such probe functions over different shapes, and how to discover
common latent structures using a neural network --- even in the absence of any
correspondence information. Our network is trained on point cloud
representations of shape geometry and associated semantic functions on that
point cloud. These functions express a shared semantic understanding of the
shapes but are not coordinated in any way. For example, in a segmentation task,
the functions can be indicator functions of arbitrary sets of shape parts, with
the particular combination involved not known to the network. Our network is
able to produce a small dictionary of basis functions for each shape, a
dictionary whose span includes the semantic functions provided for that shape.
Even though our shapes have independent discretizations and no functional
correspondences are provided, the network is able to generate latent bases, in
a consistent order, that reflect the shared semantic structure among the
shapes. We demonstrate the effectiveness of our technique in various
segmentation and keypoint selection applications.
|
Even though our shapes have independent discretizations and no functional correspondences are provided, the network is able to generate latent bases, in a consistent order, that reflect the shared semantic structure among the shapes.
|
http://arxiv.org/abs/1805.09957v3
|
http://arxiv.org/pdf/1805.09957v3.pdf
|
NeurIPS 2018 12
|
[
"Minhyuk Sung",
"Hao Su",
"Ronald Yu",
"Leonidas Guibas"
] |
[
"Segmentation"
] | 2018-05-25T00:00:00 |
http://papers.nips.cc/paper/7330-deep-functional-dictionaries-learning-consistent-semantic-structures-on-3d-models-from-functions
|
http://papers.nips.cc/paper/7330-deep-functional-dictionaries-learning-consistent-semantic-structures-on-3d-models-from-functions.pdf
|
deep-functional-dictionaries-learning-1
| null |
[] |
https://paperswithcode.com/paper/towards-understanding-limitations-of-pixel
|
1805.07816
| null | null |
Towards Understanding Limitations of Pixel Discretization Against Adversarial Attacks
|
Wide adoption of artificial neural networks in various domains has led to an increasing interest in defending them against adversarial attacks. Preprocessing defense methods such as pixel discretization are particularly attractive in practice due to their simplicity, low computational overhead, and applicability to various systems. It is observed that such methods work well on simple datasets like MNIST, but break on more complicated ones like ImageNet under recently proposed strong white-box attacks. To understand the conditions for success and the potential for improvement, we study the pixel discretization defense method, including more sophisticated variants that take into account the properties of the dataset being discretized. Our results again show poor resistance against the strong attacks. We analyze our results in a theoretical framework and offer strong evidence that pixel discretization is unlikely to work on all but the simplest of the datasets. Furthermore, our arguments present insights into why some other preprocessing defenses may be insecure.
|
We analyze our results in a theoretical framework and offer strong evidence that pixel discretization is unlikely to work on all but the simplest of the datasets.
|
https://arxiv.org/abs/1805.07816v5
|
https://arxiv.org/pdf/1805.07816v5.pdf
| null |
[
"Jiefeng Chen",
"Xi Wu",
"Vaibhav Rastogi",
"Yingyu Liang",
"Somesh Jha"
] |
[] | 2018-05-20T00:00:00 | null | null | null | null |
[] |
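The pixel discretization defense studied in the record above snaps each pixel to the nearest codeword from a small set of levels; with two levels it reduces to the simple binarization that works on MNIST-like data. A minimal sketch (assumed interface, not the paper's code):

```python
def discretize(pixels, levels):
    """Snap each pixel intensity to the nearest codeword in `levels`."""
    return [min(levels, key=lambda c: abs(c - p)) for p in pixels]

# Binarization: the simplest variant, with codewords {0, 1}.
out = discretize([0.1, 0.4, 0.6, 0.95], levels=[0.0, 1.0])  # → [0.0, 0.0, 1.0, 1.0]
```

The paper's more sophisticated variants choose data-dependent codewords, but the preprocessing step applied at inference time has this form.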
https://paperswithcode.com/paper/letting-emotions-flow-success-prediction-by
|
1805.09746
| null | null |
Letting Emotions Flow: Success Prediction by Modeling the Flow of Emotions in Books
|
Books have the power to make us feel happiness, sadness, pain, surprise, or
sorrow. An author's dexterity in the use of these emotions captivates readers
and makes it difficult for them to put the book down. In this paper, we model
the flow of emotions over a book using recurrent neural networks and quantify
its usefulness in predicting success in books. We obtained the best weighted
F1-score of 69% for predicting books' success in a multitask setting
(simultaneously predicting success and genre of books).
|
Books have the power to make us feel happiness, sadness, pain, surprise, or sorrow.
|
http://arxiv.org/abs/1805.09746v2
|
http://arxiv.org/pdf/1805.09746v2.pdf
|
NAACL 2018 6
|
[
"Suraj Maharjan",
"Sudipta Kar",
"Manuel Montes-y-Gomez",
"Fabio A. Gonzalez",
"Thamar Solorio"
] |
[] | 2018-05-24T00:00:00 |
https://aclanthology.org/N18-2042
|
https://aclanthology.org/N18-2042.pdf
|
letting-emotions-flow-success-prediction-by-1
| null |
[] |
https://paperswithcode.com/paper/deep-visual-domain-adaptation-a-survey
|
1802.03601
| null | null |
Deep Visual Domain Adaptation: A Survey
|
Deep domain adaptation has emerged as a new learning technique to address the
lack of massive amounts of labeled data. Compared to conventional methods,
which learn shared feature subspaces or reuse important source instances with
shallow representations, deep domain adaptation methods leverage deep networks
to learn more transferable representations by embedding domain adaptation in
the pipeline of deep learning. There have been comprehensive surveys of shallow
domain adaptation, but few timely reviews of the emerging deep learning based
methods. In this paper, we provide a comprehensive survey of deep domain
adaptation methods for computer vision applications with four major
contributions. First, we present a taxonomy of different deep domain adaptation
scenarios according to the properties of the data that define how two domains
diverge. Second, we summarize deep domain adaptation approaches into several
categories based on training loss, and analyze and compare briefly the
state-of-the-art methods under these categories. Third, we overview the
computer vision applications that go beyond image classification, such as face
recognition, semantic segmentation and object detection. Fourth, some potential
deficiencies of current methods and several future directions are highlighted.
| null |
http://arxiv.org/abs/1802.03601v4
|
http://arxiv.org/pdf/1802.03601v4.pdf
| null |
[
"Mei Wang",
"Weihong Deng"
] |
[
"Domain Adaptation",
"Face Recognition",
"image-classification",
"Image Classification",
"object-detection",
"Object Detection",
"Semantic Segmentation",
"Survey"
] | 2018-02-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-data-driven-approach-for-autonomous-motion
|
1805.09951
| null | null |
A Data-Driven Approach for Autonomous Motion Planning and Control in Off-Road Driving Scenarios
|
This paper presents a novel data-driven approach to vehicle motion planning
and control in off-road driving scenarios. For autonomous off-road driving,
environmental conditions impact terrain traversability as a function of
weather, surface composition, and slope. Geographical information system (GIS)
and National Centers for Environmental Information datasets are processed to
provide this information for interactive planning and control system elements.
A top-level global route planner (GRP) defines optimal waypoints using dynamic
programming (DP). A local path planner (LPP) computes a desired trajectory
between waypoints such that infeasible control states and collisions with
obstacles are avoided. The LPP also updates the GRP with real-time sensing and
control data. A low-level feedback controller applies feedback linearization to
asymptotically track the specified LPP trajectory. Autonomous driving
simulation results are presented for traversal of terrains in Oregon and
Indiana case studies.
| null |
http://arxiv.org/abs/1805.09951v1
|
http://arxiv.org/pdf/1805.09951v1.pdf
| null |
[
"Hossein Rastgoftar",
"Bingxin Zhang",
"Ella M. Atkins"
] |
[
"Autonomous Driving",
"Motion Planning"
] | 2018-05-25T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/topological-data-analysis-of-decision
|
1805.09949
| null | null |
Topological Data Analysis of Decision Boundaries with Application to Model Selection
|
We propose the labeled Čech complex, the plain labeled Vietoris-Rips
complex, and the locally scaled labeled Vietoris-Rips complex to perform
persistent homology inference of decision boundaries in classification tasks.
We provide theoretical conditions and analysis for recovering the homology of a
decision boundary from samples. Our main objective is quantification of deep
neural network complexity to enable matching of datasets to pre-trained models;
we report results for experiments using MNIST, FashionMNIST, and CIFAR10.
|
We propose the labeled \v{C}ech complex, the plain labeled Vietoris-Rips complex, and the locally scaled labeled Vietoris-Rips complex to perform persistent homology inference of decision boundaries in classification tasks.
|
http://arxiv.org/abs/1805.09949v1
|
http://arxiv.org/pdf/1805.09949v1.pdf
| null |
[
"Karthikeyan Natesan Ramamurthy",
"Kush R. Varshney",
"Krishnan Mody"
] |
[
"General Classification",
"Model Selection",
"Topological Data Analysis"
] | 2018-05-25T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/how-many-machines-can-we-use-in-parallel
|
1805.09948
| null | null |
How Many Machines Can We Use in Parallel Computing for Kernel Ridge Regression?
|
This paper aims to solve a basic problem in distributed statistical
inference: how many machines can we use in parallel computing? In kernel ridge
regression, we address this question in two important settings: nonparametric
estimation and hypothesis testing. Specifically, we find a range for the number
of machines under which optimal estimation/testing is achievable. The employed
empirical processes method provides a unified framework, that allows us to
handle various regression problems (such as thin-plate splines and
nonparametric additive regression) under different settings (such as
univariate, multivariate and diverging-dimensional designs). It is worth noting
that the upper bounds on the number of machines are proven to be unimprovable
(up to a logarithmic factor) in two important cases: smoothing spline regression
and Gaussian RKHS regression. Our theoretical findings are backed by thorough
numerical studies.
| null |
http://arxiv.org/abs/1805.09948v3
|
http://arxiv.org/pdf/1805.09948v3.pdf
| null |
[
"Meimei Liu",
"Zuofeng Shang",
"Guang Cheng"
] |
[
"regression",
"Two-sample testing"
] | 2018-05-25T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/meta-transfer-learning-for-facial-emotion
|
1805.09946
| null | null |
Meta Transfer Learning for Facial Emotion Recognition
|
The use of deep learning techniques for automatic facial expression
recognition has recently attracted great interest but developed models are
still unable to generalize well due to the lack of large emotion datasets for
deep learning. To overcome this problem, in this paper, we propose utilizing a
novel transfer learning approach relying on PathNet and investigate how
knowledge can be accumulated within a given dataset and how the knowledge
captured from one emotion dataset can be transferred into another in order to
improve the overall performance. To evaluate the robustness of our system, we
have conducted various sets of experiments on two emotion datasets: SAVEE and
eNTERFACE. The experimental results demonstrate that our proposed system leads
to improvement in performance of emotion recognition and performs significantly
better than the recent state-of-the-art schemes adopting fine-tuning/pre-trained approaches.
| null |
http://arxiv.org/abs/1805.09946v1
|
http://arxiv.org/pdf/1805.09946v1.pdf
| null |
[
"Dung Nguyen",
"Kien Nguyen",
"Sridha Sridharan",
"Iman Abbasnejad",
"David Dean",
"Clinton Fookes"
] |
[
"Deep Learning",
"Emotion Recognition",
"Facial Emotion Recognition",
"Facial Expression Recognition",
"Facial Expression Recognition (FER)",
"Transfer Learning"
] | 2018-05-25T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/deep-reinforcement-learning-for-sequence-to
|
1805.09461
| null | null |
Deep Reinforcement Learning For Sequence to Sequence Models
|
In recent times, sequence-to-sequence (seq2seq) models have gained a lot of
popularity and provide state-of-the-art performance in a wide variety of tasks
such as machine translation, headline generation, text summarization, speech to
text conversion, and image caption generation. The underlying framework for all
these models is usually a deep neural network comprising an encoder and a
decoder. Although simple encoder-decoder models produce competitive results,
many researchers have proposed additional improvements over these
sequence-to-sequence models, e.g., using an attention-based model over the
input, pointer-generation models, and self-attention models. However, such
seq2seq models suffer from two common problems: 1) exposure bias and 2)
inconsistency between train/test measurement. Recently, a completely novel
point of view has emerged in addressing these two problems in seq2seq models,
leveraging methods from reinforcement learning (RL). In this survey, we
consider seq2seq problems from the RL point of view and provide a formulation
combining the power of RL methods in decision-making with sequence-to-sequence
models that enable remembering long-term memories. We present some of the most
recent frameworks that combine concepts from RL and deep neural networks and
explain how these two areas could benefit from each other in solving complex
seq2seq tasks. Our work aims to provide insights into some of the problems that
inherently arise with current approaches and how we can address them with
better RL models. We also provide the source code for implementing most of the
RL models discussed in this paper to support the complex task of abstractive
text summarization.
|
In this survey, we consider seq2seq problems from the RL point of view and provide a formulation combining the power of RL methods in decision-making with sequence-to-sequence models that enable remembering long-term memories.
|
http://arxiv.org/abs/1805.09461v4
|
http://arxiv.org/pdf/1805.09461v4.pdf
| null |
[
"Yaser Keneshloo",
"Tian Shi",
"Naren Ramakrishnan",
"Chandan K. Reddy"
] |
[
"Abstractive Text Summarization",
"Caption Generation",
"Decision Making",
"Decoder",
"Deep Reinforcement Learning",
"Headline Generation",
"Machine Translation",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)",
"Speech-to-Text",
"Text Summarization"
] | 2018-05-24T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are applied in neural networks to introduce non-linearity, allowing the network to model complex relationships between inputs and outputs.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are applied in neural networks to introduce non-linearity, allowing the network to model complex relationships between inputs and outputs.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Seq2Seq**, or **Sequence To Sequence**, is a model used in sequence prediction tasks, such as language modelling and machine translation. The idea is to use one [LSTM](https://paperswithcode.com/method/lstm), the *encoder*, to read the input sequence one timestep at a time, to obtain a large fixed dimensional vector representation (a context vector), and then to use another LSTM, the *decoder*, to extract the output sequence\r\nfrom that vector. The second LSTM is essentially a recurrent neural network language model except that it is conditioned on the input sequence.\r\n\r\n(Note that this page refers to the original seq2seq not general sequence-to-sequence models)",
"full_name": "Sequence to Sequence",
"introduced_year": 2000,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Sequence To Sequence Models",
"parent": null
},
"name": "Seq2Seq",
"source_title": "Sequence to Sequence Learning with Neural Networks",
"source_url": "http://arxiv.org/abs/1409.3215v3"
}
] |
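The Sigmoid and Tanh entries above give closed-form definitions. As an illustrative sanity check (not part of any listed paper), both can be evaluated directly in Python:

```python
import math

def sigmoid(x: float) -> float:
    # f(x) = 1 / (1 + exp(-x)), as in the Sigmoid Activation entry above
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x: float) -> float:
    # f(x) = (e^x - e^-x) / (e^x + e^-x); equivalent to math.tanh
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

# Sanity checks: sigmoid(0) = 0.5, tanh(0) = 0, and tanh saturates toward 1.
assert abs(sigmoid(0.0) - 0.5) < 1e-12
assert abs(tanh(0.0)) < 1e-12
assert abs(tanh(10.0) - 1.0) < 1e-6
```

The saturation visible at large |x| is exactly the vanishing-gradient behavior both entries describe.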
https://paperswithcode.com/paper/global-and-local-attention-networks-for
|
1805.08819
| null | null |
Learning what and where to attend
|
Most recent gains in visual recognition have originated from the inclusion of attention mechanisms in deep convolutional networks (DCNs). Because these networks are optimized for object recognition, they learn where to attend using only a weak form of supervision derived from image class labels. Here, we demonstrate the benefit of using stronger supervisory signals by teaching DCNs to attend to image regions that humans deem important for object recognition. We first describe a large-scale online experiment (ClickMe) used to supplement ImageNet with nearly half a million human-derived "top-down" attention maps. Using human psychophysics, we confirm that the identified top-down features from ClickMe are more diagnostic than "bottom-up" saliency features for rapid image categorization. As a proof of concept, we extend a state-of-the-art attention network and demonstrate that adding ClickMe supervision significantly improves its accuracy and yields visual features that are more interpretable and more similar to those used by human observers.
|
Most recent gains in visual recognition have originated from the inclusion of attention mechanisms in deep convolutional networks (DCNs).
|
https://arxiv.org/abs/1805.08819v4
|
https://arxiv.org/pdf/1805.08819v4.pdf
| null |
[
"Drew Linsley",
"Dan Shiebler",
"Sven Eberhardt",
"Thomas Serre"
] |
[
"Diagnostic",
"Image Categorization",
"Object Recognition"
] | 2018-05-22T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "Most attention mechanisms learn where to focus using only weak supervisory signals from class labels, which inspired Linsley et al. to investigate how explicit human supervision can affect the performance and interpretability of attention models. As a proof of concept, Linsley et al. proposed the global-and-local attention (GALA) module, which extends an SE block with a spatial attention mechanism.\r\n\r\nGiven the input feature map $X$, GALA uses an attention mask that combines global and local attention to tell the network where and on what to focus. As in SE blocks, global attention aggregates global information by global average pooling and then produces a channel-wise attention weight vector using a multilayer perceptron. In local attention, two consecutive $1\\times 1$ convolutions are conducted on the input to produce a positional weight map. The outputs of the local and global pathways are combined by addition and multiplication. Formally, GALA can be represented as:\r\n\\begin{align}\r\n s_g &= W_{2} \\delta (W_{1}\\text{GAP}(x))\r\n\\end{align}\r\n\r\n\\begin{align}\r\n s_l &= Conv_2^{1\\times 1} (\\delta(Conv_1^{1\\times1}(X)))\r\n\\end{align}\r\n\r\n\\begin{align}\r\n s_g^* &= \\text{Expand}(s_g)\r\n\\end{align}\r\n\r\n\\begin{align}\r\n s_l^* &= \\text{Expand}(s_l) \r\n\\end{align}\r\n\r\n\\begin{align}\r\n s &= \\tanh(a(s_g^\\* + s_l^\\*) +m \\cdot (s_g^\\* s_l^\\*) )\r\n\\end{align}\r\n\r\n\\begin{align}\r\n Y &= sX\r\n\\end{align}\r\n\r\nwhere $a,m \\in \\mathbb{R}^{C}$ are learnable parameters representing channel-wise weight vectors. \r\n\r\nSupervised by human-provided feature importance maps, GALA has significantly improved representational power and can be combined with any CNN backbone.",
"full_name": "Global-and-Local attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention mechanisms** allow a model to dynamically weight different parts of its input when computing a representation, focusing capacity on the most relevant features.",
"name": "Attention Mechanisms",
"parent": "Attention"
},
"name": "GALA",
"source_title": "Learning what and where to attend",
"source_url": "https://arxiv.org/abs/1805.08819v4"
}
] |
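The GALA entry above spells out the module as a set of equations. The following is a minimal NumPy sketch of those equations only — weight shapes, the ReLU choice for δ, and all parameter names are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def gala_attention(X, W1, W2, w_local1, w_local2, a, m):
    """Sketch of the GALA equations: global SE-style path + local 1x1-conv path.

    X: feature map (C, H, W). W1 (hidden, C), W2 (C, hidden): global-path MLP.
    w_local1 (K, C), w_local2 (K,): the two 1x1 convolutions of the local path.
    a, m: channel-wise combination vectors of shape (C,). All shapes assumed.
    """
    relu = lambda z: np.maximum(z, 0.0)

    # Global path: global average pooling, then a two-layer MLP -> (C,) weights.
    gap = X.mean(axis=(1, 2))                  # GAP(x), shape (C,)
    s_g = W2 @ relu(W1 @ gap)                  # s_g, shape (C,)

    # Local path: two 1x1 convolutions == per-pixel channel mixes -> (H, W) map.
    h = relu(np.einsum('kc,chw->khw', w_local1, X))
    s_l = np.einsum('k,khw->hw', w_local2, h)  # s_l, shape (H, W)

    # "Expand" by broadcasting, then combine additively and multiplicatively.
    s_g_star = s_g[:, None, None]              # (C, 1, 1)
    s_l_star = s_l[None, :, :]                 # (1, H, W)
    s = np.tanh(a[:, None, None] * (s_g_star + s_l_star)
                + m[:, None, None] * (s_g_star * s_l_star))
    return s * X                               # Y = sX
```

Because the mask passes through tanh, every element of the output is the input scaled by a factor in [-1, 1].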
https://paperswithcode.com/paper/training-of-photonic-neural-networks-through
|
1805.09943
| null | null |
Training of photonic neural networks through in situ backpropagation
|
Recently, integrated optics has gained interest as a hardware platform for
implementing machine learning algorithms. Of particular interest are artificial
neural networks, since matrix-vector multiplications, which are used heavily
in artificial neural networks, can be done efficiently in photonic circuits.
The training of an artificial neural network is a crucial step in its
application. However, currently on the integrated photonics platform there is
no efficient protocol for the training of these networks. In this work, we
introduce a method that enables highly efficient, in situ training of a
photonic neural network. We use adjoint variable methods to derive the photonic
analogue of the backpropagation algorithm, which is the standard method for
computing gradients of conventional neural networks. We further show how these
gradients may be obtained exactly by performing intensity measurements within
the device. As an application, we demonstrate the training of a numerically
simulated photonic artificial neural network. Beyond the training of photonic
machine learning implementations, our method may also be of broad interest to
experimental sensitivity analysis of photonic systems and the optimization of
reconfigurable optics platforms.
| null |
http://arxiv.org/abs/1805.09943v1
|
http://arxiv.org/pdf/1805.09943v1.pdf
| null |
[
"Tyler W. Hughes",
"Momchil Minkov",
"Yu Shi",
"Shanhui Fan"
] |
[
"BIG-bench Machine Learning"
] | 2018-05-25T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/greedy-graph-searching-for-vascular-tracking
|
1805.09940
| null | null |
Greedy Graph Searching for Vascular Tracking in Angiographic Image Sequences
|
Vascular tracking of angiographic image sequences is one of the most
clinically important tasks in the diagnostic assessment and interventional
guidance of cardiac disease. However, this task can be challenging to
accomplish because of unsatisfactory angiography image quality and complex
vascular structures. Thus, this study proposed a new greedy graph search-based
method for vascular tracking. Each vascular branch is separated from the
vasculature and is tracked independently. Then, all branches are combined using
topology optimization, thereby resulting in complete vasculature tracking. A
gray-based image registration method was applied to determine the tracking
range, and the deformation field between two consecutive frames was calculated.
The vascular branch was described using a vascular centerline extraction method
with multi-probability fusion-based topology optimization. We introduce an
undirected acyclic graph establishment technique. A greedy search method was
proposed to acquire all possible paths in the graph that might match the
tracked vascular branch. The final tracking result was selected by branch
matching using dynamic time warping with a DAISY descriptor. The solution to
the problem reflected both the spatial and textural information between
successive frames. Experimental results demonstrated that the proposed method
was effective and robust for vascular tracking, attaining a F1 score of 0.89 on
a single branch dataset and 0.88 on a vessel tree dataset. This approach
provided a universal solution to address the problem of filamentary structure
tracking.
| null |
http://arxiv.org/abs/1805.09940v1
|
http://arxiv.org/pdf/1805.09940v1.pdf
| null |
[
"Huihui Fang",
"Jian Yang",
"Jianjun Zhu",
"Danni Ai",
"Yong Huang",
"Yurong Jiang",
"Hong Song",
"Yongtian Wang"
] |
[
"Diagnostic",
"Dynamic Time Warping",
"Image Registration"
] | 2018-05-25T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/automated-verification-of-neural-networks
|
1805.09938
| null | null |
Automated Verification of Neural Networks: Advances, Challenges and Perspectives
|
Neural networks are one of the most investigated and widely used techniques
in Machine Learning. In spite of their success, they still find limited
application in safety- and security-related contexts, wherein assurance about
networks' performances must be provided. In the recent past, automated
reasoning techniques have been proposed by several researchers to close the gap
between neural networks and applications requiring formal guarantees about
their behavior. In this work, we propose a primer of such techniques and a
comprehensive categorization of existing approaches for the automated
verification of neural networks. A discussion about current limitations and
directions for future investigation is provided to foster research on this
topic at the crossroads of Machine Learning and Automated Reasoning.
| null |
http://arxiv.org/abs/1805.09938v1
|
http://arxiv.org/pdf/1805.09938v1.pdf
| null |
[
"Francesco Leofante",
"Nina Narodytska",
"Luca Pulina",
"Armando Tacchella"
] |
[
"BIG-bench Machine Learning"
] | 2018-05-25T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/polynomially-coded-regression-optimal
|
1805.09934
| null | null |
Polynomially Coded Regression: Optimal Straggler Mitigation via Data Encoding
|
We consider the problem of training a least-squares regression model on a
large dataset using gradient descent. The computation is carried out on a
distributed system consisting of a master node and multiple worker nodes. Such
distributed systems are significantly slowed down due to the presence of
slow-running machines (stragglers) as well as various communication
bottlenecks. We propose "polynomially coded regression" (PCR) that
substantially reduces the effect of stragglers and lessens the communication
burden in such systems. The key idea of PCR is to encode the partial data
stored at each worker, such that the computations at the workers can be viewed
as evaluating a polynomial at distinct points. This allows the master to
compute the final gradient by interpolating this polynomial. PCR significantly
reduces the recovery threshold, defined as the number of workers the master has
to wait for prior to computing the gradient. In particular, PCR requires a
recovery threshold that scales inversely proportionally with the amount of
computation/storage available at each worker. In comparison, state-of-the-art
straggler-mitigation schemes require a much higher recovery threshold that only
decreases linearly in the per worker computation/storage load. We prove that
PCR's recovery threshold is near minimal and within a factor two of the best
possible scheme. Our experiments over Amazon EC2 demonstrate that compared with
state-of-the-art schemes, PCR improves the run-time by 1.50x ~ 2.36x with
naturally occurring stragglers, and by as much as 2.58x ~ 4.29x with artificial
stragglers.
| null |
http://arxiv.org/abs/1805.09934v1
|
http://arxiv.org/pdf/1805.09934v1.pdf
| null |
[
"Songze Li",
"Seyed Mohammadreza Mousavi Kalan",
"Qian Yu",
"Mahdi Soltanolkotabi",
"A. Salman Avestimehr"
] |
[
"regression"
] | 2018-05-24T00:00:00 | null | null | null | null |
[] |
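The PCR abstract above hinges on one idea: encode partial results as polynomial evaluations so the master can interpolate the full gradient from any subset of worker responses of size equal to the recovery threshold. The toy sketch below (scalar partial gradients, Lagrange-style recovery via a Vandermonde solve) illustrates only that recovery step, not the paper's exact encoding:

```python
import numpy as np

def recover_gradient(points, evals, num_blocks):
    """Recover sum of partial gradients from any `num_blocks` worker responses.

    Illustrative sketch: partial gradients g_0..g_{K-1} are embedded as the
    coefficients of p(z) = sum_j g_j z^j; worker i returns p(alpha_i). Given
    any K (point, value) pairs, interpolate p and sum its coefficients.
    """
    K = num_blocks
    # Vandermonde system V @ coeffs = evals, columns [1, z, z^2, ...].
    V = np.vander(np.asarray(points[:K], dtype=float), K, increasing=True)
    coeffs = np.linalg.solve(V, np.asarray(evals[:K], dtype=float))
    return coeffs.sum()

# Toy example: 3 partial (scalar) gradients, 5 workers, 2 stragglers ignored.
g = np.array([1.0, 2.0, 3.0])
p = np.polynomial.Polynomial(g)               # p(z) = 1 + 2z + 3z^2
alphas = [1.0, 2.0, 3.0, 4.0, 5.0]
responses = [p(alpha) for alpha in alphas]
# Master waits for any 3 responses (recovery threshold = number of blocks).
full_grad = recover_gradient(alphas[:3], responses[:3], num_blocks=3)
assert abs(full_grad - g.sum()) < 1e-8
```

The slowest two workers never block the computation — exactly the straggler mitigation the abstract describes.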
https://paperswithcode.com/paper/dsgan-generative-adversarial-training-for
|
1805.09929
| null | null |
DSGAN: Generative Adversarial Training for Distant Supervision Relation Extraction
|
Distant supervision can effectively label data for relation extraction, but
suffers from the noise labeling problem. Recent works mainly perform soft
bag-level noise reduction strategies to find the relatively better samples in a
sentence bag, which is suboptimal compared with making a hard decision of false
positive samples at the sentence level. In this paper, we introduce an adversarial
learning framework, which we named DSGAN, to learn a sentence-level
true-positive generator. Inspired by Generative Adversarial Networks, we regard
the positive samples generated by the generator as the negative samples to
train the discriminator. The optimal generator is obtained until the
discrimination ability of the discriminator has the greatest decline. We adopt
the generator to filter the distant supervision training dataset and redistribute
the false positive instances into the negative set, thereby providing a
cleaned dataset for relation classification. The experimental results show that
the proposed strategy significantly improves the performance of distant
supervision relation extraction compared to state-of-the-art systems.
| null |
http://arxiv.org/abs/1805.09929v1
|
http://arxiv.org/pdf/1805.09929v1.pdf
|
ACL 2018 7
|
[
"Pengda Qin",
"Weiran Xu",
"William Yang Wang"
] |
[
"Relation",
"Relation Classification",
"Relation Extraction",
"Sentence"
] | 2018-05-24T00:00:00 |
https://aclanthology.org/P18-1046
|
https://aclanthology.org/P18-1046.pdf
|
dsgan-generative-adversarial-training-for-1
| null |
[] |
https://paperswithcode.com/paper/robust-distant-supervision-relation
|
1805.09927
| null | null |
Robust Distant Supervision Relation Extraction via Deep Reinforcement Learning
|
Distant supervision has become the standard method for relation extraction.
However, even though it is an efficient method, it does not come without
cost---the resulting distantly-supervised training samples are often very noisy.
To combat the noise, most of the recent state-of-the-art approaches focus on
selecting one-best sentence or calculating soft attention weights over the set
of the sentences of one specific entity pair. However, these methods are
suboptimal, and the false positive problem is still a key stumbling bottleneck
for the performance. We argue that those incorrectly-labeled candidate
sentences must be treated with a hard decision, rather than being dealt with
soft attention weights. To do this, our paper describes a radical solution---We
explore a deep reinforcement learning strategy to generate the false-positive
indicator, where we automatically recognize false positives for each relation
type without any supervised information. Unlike the removal operation in the
previous studies, we redistribute them into the negative examples. The
experimental results show that the proposed strategy significantly improves the
performance of distant supervision compared to state-of-the-art systems.
|
The experimental results show that the proposed strategy significantly improves the performance of distant supervision compared to state-of-the-art systems.
|
http://arxiv.org/abs/1805.09927v1
|
http://arxiv.org/pdf/1805.09927v1.pdf
|
ACL 2018 7
|
[
"Pengda Qin",
"Weiran Xu",
"William Yang Wang"
] |
[
"Deep Reinforcement Learning",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)",
"Relation",
"Relation Extraction",
"Sentence"
] | 2018-05-24T00:00:00 |
https://aclanthology.org/P18-1199
|
https://aclanthology.org/P18-1199.pdf
|
robust-distant-supervision-relation-1
| null |
[] |
https://paperswithcode.com/paper/superconducting-optoelectronic-neurons-i-1
|
1805.01929
| null | null |
Superconducting Optoelectronic Neurons I: General Principles
|
The design of neural hardware is informed by the prominence of differentiated
processing and information integration in cognitive systems. The central role
of communication leads to the principal assumption of the hardware platform:
signals between neurons should be optical to enable fanout and communication
with minimal delay. The requirement of energy efficiency leads to the
utilization of superconducting detectors to receive single-photon signals. We
discuss the potential of superconducting optoelectronic hardware to achieve the
spatial and temporal information integration advantageous for cognitive
processing, and we consider physical scaling limits based on light-speed
communication. We introduce the superconducting optoelectronic neurons and
networks that are the subject of the subsequent papers in this series.
| null |
http://arxiv.org/abs/1805.01929v3
|
http://arxiv.org/pdf/1805.01929v3.pdf
| null |
[
"Jeffrey M. Shainline",
"Sonia M. Buckley",
"Adam N. McCaughan",
"Jeff Chiles",
"Richard P. Mirin",
"Sae Woo Nam"
] |
[] | 2018-05-04T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/multi-task-determinantal-point-processes-for
|
1805.09916
| null | null |
Multi-Task Determinantal Point Processes for Recommendation
|
Determinantal point processes (DPPs) have received significant attention in
the recent years as an elegant model for a variety of machine learning tasks,
due to their ability to elegantly model set diversity and item quality or
popularity. Recent work has shown that DPPs can be effective models for product
recommendation and basket completion tasks. We present an enhanced DPP model
that is specialized for the task of basket completion, the multi-task DPP. We
view the basket completion problem as a multi-class classification problem, and
leverage ideas from tensor factorization and multi-class classification to
design the multi-task DPP model. We evaluate our model on several real-world
datasets, and find that the multi-task DPP provides significantly better
predictive quality than a number of state-of-the-art models.
| null |
http://arxiv.org/abs/1805.09916v2
|
http://arxiv.org/pdf/1805.09916v2.pdf
| null |
[
"Romain Warlop",
"Jérémie Mary",
"Mike Gartrell"
] |
[
"Diversity",
"General Classification",
"Multi-class Classification",
"Point Processes",
"Product Recommendation"
] | 2018-05-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/fairness-gan
|
1805.09910
| null | null |
Fairness GAN
|
In this paper, we introduce the Fairness GAN, an approach for generating a
dataset that is plausibly similar to a given multimedia dataset, but is more
fair with respect to protected attributes in allocative decision making. We
propose a novel auxiliary classifier GAN that strives for demographic parity or
equality of opportunity and show empirical results on several datasets,
including the CelebFaces Attributes (CelebA) dataset, the Quick, Draw!
dataset, and a dataset of soccer player images and the offenses they were
called for. The proposed formulation is well-suited to absorbing unlabeled
data; we leverage this to augment the soccer dataset with the much larger
CelebA dataset. The methodology tends to improve demographic parity and
equality of opportunity while generating plausible images.
| null |
http://arxiv.org/abs/1805.09910v1
|
http://arxiv.org/pdf/1805.09910v1.pdf
| null |
[
"Prasanna Sattigeri",
"Samuel C. Hoffman",
"Vijil Chenthamarakshan",
"Kush R. Varshney"
] |
[
"Decision Making",
"Fairness"
] | 2018-05-24T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "**Auxiliary Classifiers** are type of architectural component that seek to improve the convergence of very deep networks. They are classifier heads we attach to layers before the end of the network. The motivation is to push useful gradients to the lower layers to make them immediately useful and improve the convergence during training by combatting the vanishing gradient problem. They are notably used in the Inception family of convolutional neural networks.",
"full_name": "Auxiliary Classifier",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "The following is a list of miscellaneous components used in neural networks.",
"name": "Miscellaneous Components",
"parent": null
},
"name": "Auxiliary Classifier",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
      "description": "A **GAN**, or **Generative Adversarial Network**, is a generative model that simultaneously trains two networks: a generative model $G$ that captures the data distribution, and a discriminative model $D$ that estimates the probability that a sample came from the training data rather than from $G$. The training procedure for $G$ is to maximize the probability of $D$ making a mistake, framing learning as a minimax two-player game.",
      "full_name": "Generative Adversarial Network (GAN)",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
      "name": "GAN",
"source_title": "Generative Adversarial Networks",
"source_url": "https://arxiv.org/abs/1406.2661v1"
}
] |
https://paperswithcode.com/paper/structure-learning-from-time-series-with
|
1805.09909
| null | null |
Structure Learning from Time Series with False Discovery Control
|
We consider the Granger causal structure learning problem from time series
data. Granger causal algorithms predict a 'Granger causal effect' between two
variables by testing if prediction error of one decreases significantly in the
absence of the other variable among the predictor covariates. Almost all
existing Granger causal algorithms condition on a large number of variables
(all but two variables) to test for effects between a pair of variables. We
propose a new structure learning algorithm called MMPC-p inspired by the well
known MMHC algorithm for non-time series data. We show that under some
assumptions, the algorithm provides false discovery rate control. The algorithm
is sound and complete when given access to perfect directed information testing
oracles. We also outline a novel tester for the linear Gaussian case. We show
through our extensive experiments that the MMPC-p algorithm scales to larger
problems and has improved statistical power compared to existing state of the
art for large sparse graphs. We also apply our algorithm on a global
development dataset and validate our findings with subject matter experts.
| null |
http://arxiv.org/abs/1805.09909v1
|
http://arxiv.org/pdf/1805.09909v1.pdf
| null |
[
"Bernat Guillen Pegueroles",
"Bhanukiran Vinzamuri",
"Karthikeyan Shanmugam",
"Steve Hedden",
"Jonathan D. Moyer",
"Kush R. Varshney"
] |
[
"Time Series",
"Time Series Analysis"
] | 2018-05-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/diffusion-maps-for-textual-network-embedding
|
1805.09906
| null | null |
Diffusion Maps for Textual Network Embedding
|
Textual network embedding leverages rich text information associated with the
network to learn low-dimensional vectorial representations of vertices. Rather
than using typical natural language processing (NLP) approaches, recent
research exploits the relationship of texts on the same edge to graphically
embed text. However, these models neglect to measure the complete level of
connectivity between any two texts in the graph. We present diffusion maps for
textual network embedding (DMTE), integrating global structural information of
the graph to capture the semantic relatedness between texts, with a
diffusion-convolution operation applied on the text inputs. In addition, a new
objective function is designed to efficiently preserve the high-order proximity
using the graph diffusion. Experimental results show that the proposed approach
outperforms state-of-the-art methods on the vertex-classification and
link-prediction tasks.
| null |
http://arxiv.org/abs/1805.09906v2
|
http://arxiv.org/pdf/1805.09906v2.pdf
|
NeurIPS 2018 12
|
[
"Xinyuan Zhang",
"Yitong Li",
"Dinghan Shen",
"Lawrence Carin"
] |
[
"General Classification",
"Link Prediction",
"Network Embedding"
] | 2018-05-24T00:00:00 |
http://papers.nips.cc/paper/7986-diffusion-maps-for-textual-network-embedding
|
http://papers.nips.cc/paper/7986-diffusion-maps-for-textual-network-embedding.pdf
|
diffusion-maps-for-textual-network-embedding-1
| null |
[] |
https://paperswithcode.com/paper/boolean-decision-rules-via-column-generation
|
1805.09901
| null | null |
Boolean Decision Rules via Column Generation
|
This paper considers the learning of Boolean rules in either disjunctive normal form (DNF, OR-of-ANDs, equivalent to decision rule sets) or conjunctive normal form (CNF, AND-of-ORs) as an interpretable model for classification. An integer program is formulated to optimally trade classification accuracy for rule simplicity. Column generation (CG) is used to efficiently search over an exponential number of candidate clauses (conjunctions or disjunctions) without the need for heuristic rule mining. This approach also bounds the gap between the selected rule set and the best possible rule set on the training data. To handle large datasets, we propose an approximate CG algorithm using randomization. Compared to three recently proposed alternatives, the CG algorithm dominates the accuracy-simplicity trade-off in 7 out of 15 datasets. When maximized for accuracy, CG is competitive with rule learners designed for this purpose, sometimes finding significantly simpler solutions that are no less accurate.
| null |
https://arxiv.org/abs/1805.09901v2
|
https://arxiv.org/pdf/1805.09901v2.pdf
|
NeurIPS 2018 12
|
[
"Sanjeeb Dash",
"Oktay Günlük",
"Dennis Wei"
] |
[
"General Classification"
] | 2018-05-24T00:00:00 |
http://papers.nips.cc/paper/7716-boolean-decision-rules-via-column-generation
|
http://papers.nips.cc/paper/7716-boolean-decision-rules-via-column-generation.pdf
|
boolean-decision-rules-via-column-generation-1
| null |
[] |
https://paperswithcode.com/paper/performing-co-membership-attacks-against-deep
|
1805.09898
| null | null |
Performing Co-Membership Attacks Against Deep Generative Models
|
In this paper we propose a new membership attack method called co-membership attacks against deep generative models including Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs). Specifically, membership attack aims to check whether a given instance x was used in the training data or not. A co-membership attack checks whether the given bundle of n instances were in the training, with the prior knowledge that the bundle was either entirely used in the training or none at all. Successful membership attacks can compromise the privacy of training data when the generative model is published. Our main idea is to cast membership inference of target data x as the optimization of another neural network (called the attacker network) to search for the latent encoding to reproduce x. The final reconstruction error is used directly to conclude whether x was in the training data or not. We conduct extensive experiments on a variety of datasets and generative models showing that: our attacker network outperforms prior membership attacks; co-membership attacks can be substantially more powerful than single attacks; and VAEs are more susceptible to membership attacks compared to GANs.
| null |
https://arxiv.org/abs/1805.09898v3
|
https://arxiv.org/pdf/1805.09898v3.pdf
| null |
[
"Kin Sum Liu",
"Chaowei Xiao",
"Bo Li",
"Jie Gao"
] |
[] | 2018-05-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/3d-sketching-using-multi-view-deep-volumetric
|
1707.08390
| null | null |
3D Sketching using Multi-View Deep Volumetric Prediction
|
Sketch-based modeling strives to bring the ease and immediacy of drawing to
the 3D world. However, while drawings are easy for humans to create, they are
very challenging for computers to interpret due to their sparsity and
ambiguity. We propose a data-driven approach that tackles this challenge by
learning to reconstruct 3D shapes from one or more drawings. At the core of our
approach is a deep convolutional neural network (CNN) that predicts occupancy
of a voxel grid from a line drawing. This CNN provides us with an initial 3D
reconstruction as soon as the user completes a single drawing of the desired
shape. We complement this single-view network with an updater CNN that refines
an existing prediction given a new drawing of the shape created from a novel
viewpoint. A key advantage of our approach is that we can apply the updater
iteratively to fuse information from an arbitrary number of viewpoints, without
requiring explicit stroke correspondences between the drawings. We train both
CNNs by rendering synthetic contour drawings from hand-modeled shape
collections as well as from procedurally-generated abstract shapes. Finally, we
integrate our CNNs in a minimal modeling interface that allows users to
seamlessly draw an object, rotate it to see its 3D reconstruction, and refine
it by re-drawing from another vantage point using the 3D reconstruction as
guidance. The main strengths of our approach are its robustness to freehand
bitmap drawings, its ability to adapt to different object categories, and the
continuum it offers between single-view and multi-view sketch-based modeling.
| null |
http://arxiv.org/abs/1707.08390v4
|
http://arxiv.org/pdf/1707.08390v4.pdf
| null |
[
"Johanna Delanoy",
"Mathieu Aubry",
"Phillip Isola",
"Alexei A. Efros",
"Adrien Bousseau"
] |
[
"3D Reconstruction",
"Prediction"
] | 2017-07-26T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/generalized-scene-reconstruction
|
1803.08496
| null | null |
Generalized Scene Reconstruction
|
A new passive approach called Generalized Scene Reconstruction (GSR) enables
"generalized scenes" to be effectively reconstructed. Generalized scenes are
defined to be "boundless" spaces that include non-Lambertian, partially
transmissive, textureless and finely-structured matter. A new data structure
called a plenoptic octree is introduced to enable efficient (database-like)
light and matter field reconstruction in devices such as mobile phones,
augmented reality (AR) glasses and drones. To satisfy threshold requirements
for GSR accuracy, scenes are represented as systems of partially polarized
light, radiometrically interacting with matter. To demonstrate GSR, a prototype
imaging polarimeter is used to reconstruct (in generalized light fields) highly
reflective, hail-damaged automobile body panels. Follow-on GSR experiments are
described.
| null |
http://arxiv.org/abs/1803.08496v3
|
http://arxiv.org/pdf/1803.08496v3.pdf
| null |
[
"John K. Leffingwell",
"Donald J. Meagher",
"Khan W. Mahmud",
"Scott Ackerson"
] |
[] | 2018-03-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/attenuation-correction-for-brain-pet-imaging
|
1712.06203
| null | null |
Attenuation correction for brain PET imaging using deep neural network based on dixon and ZTE MR images
|
Positron Emission Tomography (PET) is a functional imaging modality widely
used in neuroscience studies. To obtain meaningful quantitative results from
PET images, attenuation correction is necessary during image reconstruction.
For PET/MR hybrid systems, PET attenuation is challenging as Magnetic Resonance
(MR) images do not reflect attenuation coefficients directly. To address this
issue, we present deep neural network methods to derive the continuous
attenuation coefficients for brain PET imaging from MR images. With only Dixon
MR images as the network input, the existing U-net structure was adopted and
analysis using forty patient data sets shows it is superior to other
Dixon-based methods. When both Dixon and zero echo time (ZTE) images are available,
we have proposed a modified U-net structure, named GroupU-net, to efficiently
make use of both Dixon and ZTE information through group convolution modules
when the network goes deeper. Quantitative analysis based on fourteen real
patient data sets demonstrates that both network approaches can perform better
than the standard methods, and the proposed network structure can further
reduce the PET quantification error compared to the U-net structure.
| null |
http://arxiv.org/abs/1712.06203v2
|
http://arxiv.org/pdf/1712.06203v2.pdf
| null |
[
"Kuang Gong",
"Jaewon Yang",
"Kyungsang Kim",
"Georges El Fakhri",
"Youngho Seo",
"Quanzheng Li"
] |
[
"Image Reconstruction"
] | 2017-12-17T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/densenet.py#L113",
"description": "A **Concatenated Skip Connection** is a type of skip connection that seeks to reuse features by concatenating them to new layers, allowing more information to be retained from previous layers of the network. This contrasts with say, residual connections, where element-wise summation is used instead to incorporate information from previous layers. This type of skip connection is prominently used in DenseNets (and also Inception networks), which the Figure to the right illustrates.",
"full_name": "Concatenated Skip Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Concatenated Skip Connection",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
      "description": "**Rectified Linear Units**, or **ReLUs**, are a type of activation function that is linear in the positive dimension and zero in the negative dimension: $$f\\left(x\\right) = \\max\\left(0, x\\right)$$ The kink in the function at zero is the source of the non-linearity.",
      "full_name": "Rectified Linear Units",
"introduced_year": 2000,
"main_collection": {
"area": "General",
        "description": "**Activation functions** are functions applied to the outputs of neural network layers to introduce non-linearity into the model, allowing the network to learn complex mappings. Below you can find a continuously updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/milesial/Pytorch-UNet/blob/67bf11b4db4c5f2891bd7e8e7f58bcde8ee2d2db/unet/unet_model.py#L8",
"description": "**U-Net** is an architecture for semantic segmentation. It consists of a contracting path and an expansive path. The contracting path follows the typical architecture of a convolutional network. It consists of the repeated application of two 3x3 convolutions (unpadded convolutions), each followed by a rectified linear unit ([ReLU](https://paperswithcode.com/method/relu)) and a 2x2 [max pooling](https://paperswithcode.com/method/max-pooling) operation with stride 2 for downsampling. At each downsampling step we double the number of feature channels. Every step in the expansive path consists of an upsampling of the feature map followed by a 2x2 [convolution](https://paperswithcode.com/method/convolution) (“up-convolution”) that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU. The cropping is necessary due to the loss of border pixels in every convolution. At the final layer a [1x1 convolution](https://paperswithcode.com/method/1x1-convolution) is used to map each 64-component feature vector to the desired number of classes. In total the network has 23 convolutional layers.\r\n\r\n[Original MATLAB Code](https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/u-net-release-2015-10-02.tar.gz)",
"full_name": "U-Net",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Semantic Segmentation Models** are a class of methods that address the task of semantically segmenting an image into different object classes. Below you can find a continuously updating list of semantic segmentation models. ",
"name": "Semantic Segmentation Models",
"parent": null
},
"name": "U-Net",
"source_title": "U-Net: Convolutional Networks for Biomedical Image Segmentation",
"source_url": "http://arxiv.org/abs/1505.04597v1"
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/on-the-computational-complexity-of-model
|
1805.09880
| null | null |
On the Computational Complexity of Model Checking for Dynamic Epistemic Logic with S5 Models
|
Dynamic epistemic logic (DEL) is a logical framework for representing and reasoning about knowledge change for multiple agents. An important computational task in this framework is the model checking problem, which has been shown to be PSPACE-hard even for S5 models and two agents---in the presence of other features, such as multi-pointed models. We answer open questions in the literature about the complexity of this problem in more restricted settings. We provide a detailed complexity analysis of the model checking problem for DEL, where we consider various combinations of restrictions, such as the number of agents, whether the models are single-pointed or multi-pointed, and whether postconditions are allowed in the updates. In particular, we show that the problem is already PSPACE-hard in (1) the case of one agent, multi-pointed S5 models, and no postconditions, and (2) the case of two agents, only single-pointed S5 models, and no postconditions. In addition, we study the setting where only semi-private announcements are allowed as updates. We show that for this case the problem is already PSPACE-hard when restricted to two agents and three propositional variables. The results that we obtain in this paper help outline the exact boundaries of the restricted settings for which the model checking problem for DEL is computationally tractable.
| null |
https://arxiv.org/abs/1805.09880v2
|
https://arxiv.org/pdf/1805.09880v2.pdf
| null |
[
"Ronald de Haan",
"Iris van de Pol"
] |
[] | 2018-05-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/learning-nonlinear-brain-dynamics-van-der-pol
|
1805.09874
| null | null |
Learning Nonlinear Brain Dynamics: van der Pol Meets LSTM
|
Many real-world data sets, especially in biology, are produced by complex nonlinear dynamical systems. In this paper, we focus on brain calcium imaging (CaI) of different organisms (zebrafish and rat), aiming to build a model of joint activation dynamics in large neuronal populations, including the whole brain of zebrafish. We propose a new approach for capturing dynamics of temporal SVD components that uses the coupled (multivariate) van der Pol (VDP) oscillator, a nonlinear ordinary differential equation (ODE) model describing neural activity, with a new parameter estimation technique that combines variable projection optimization and stochastic search. We show that the approach successfully handles nonlinearities and hidden state variables in the coupled VDP. The approach is accurate, achieving 0.82 to 0.94 correlation between the actual and model-generated components, and interpretable, as VDP's coupling matrix reveals anatomically meaningful positive (excitatory) and negative (inhibitory) interactions across different brain subsystems corresponding to spatial SVD components. Moreover, VDP is comparable to (or sometimes better than) recurrent neural networks (LSTM) for (short-term) prediction of future brain activity; VDP needs less parameters to train, which was a plus on our small training data. Finally, the overall best predictive method, greatly outperforming both VDP and LSTM in short- and long-term predictive settings on both datasets, was the new hybrid VDP-LSTM approach that used VDP to simulate large domain-specific dataset for LSTM pretraining; note that simple LSTM data-augmentation via noisy versions of training data was much less effective.
| null |
https://arxiv.org/abs/1805.09874v2
|
https://arxiv.org/pdf/1805.09874v2.pdf
| null |
[
"German Abrevaya",
"Irina Rish",
"Aleksandr Y. Aravkin",
"Guillermo Cecchi",
"James Kozloski",
"Pablo Polosecki",
"Peng Zheng",
"Silvina Ponce Dawson",
"Juliana Rhee",
"David Cox"
] |
[
"Data Augmentation",
"parameter estimation",
"Time Series Analysis"
] | 2018-05-24T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
        "description": "**Activation functions** are functions applied to the outputs of neural network layers to introduce non-linearity into the model, allowing the network to learn complex mappings. Below you can find a continuously updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/confidence-region-of-singular-subspaces-for
|
1805.09871
| null | null |
Confidence Region of Singular Subspaces for Low-rank Matrix Regression
|
Low-rank matrix regression refers to the instances of recovering a low-rank
matrix based on specially designed measurements and the corresponding noisy
outcomes. In the last decade, numerous statistical methodologies have been
developed for efficiently recovering the unknown low-rank matrices. However, in
some applications, the unknown singular subspace is scientifically more
important than the low-rank matrix itself. In this article, we revisit the
low-rank matrix regression model and introduce a two-step procedure to
construct confidence regions of the singular subspace. The procedure involves
the de-biasing for the typical low-rank estimators after which we calculate the
empirical singular vectors. We investigate the distribution of the joint
projection distance between the empirical singular subspace and the unknown
true singular subspace. We specifically prove the asymptotical normality of the
joint projection distance with data-dependent centering and normalization when
$r^{3/2}(m_1+m_2)^{3/2}=o(n/\log n)$ where $m_1, m_2$ denote the matrix row and
column sizes, $r$ is the rank and $n$ is the number of independent random
measurements. Consequently, we propose data-dependent confidence regions of the
true singular subspace which attains any pre-determined confidence level
asymptotically. In addition, non-asymptotical convergence rates are also
established. Numerical results are presented to demonstrate the merits of our
methods.
| null |
http://arxiv.org/abs/1805.09871v3
|
http://arxiv.org/pdf/1805.09871v3.pdf
| null |
[
"Dong Xia"
] |
[
"regression"
] | 2018-05-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/pooling-of-causal-models-under-counterfactual
|
1805.09866
| null | null |
Pooling of Causal Models under Counterfactual Fairness via Causal Judgement Aggregation
|
In this paper we consider the problem of combining multiple probabilistic
causal models, provided by different experts, under the requirement that the
aggregated model satisfy the criterion of counterfactual fairness. We build
upon the work on causal models and fairness in machine learning, and we express
the problem of combining multiple models within the framework of opinion
pooling. We propose two simple algorithms, grounded in the theory of
counterfactual fairness and causal judgment aggregation, that are guaranteed to
generate aggregated probabilistic causal models respecting the criterion of
fairness, and we compare their behaviors on a toy case study.
| null |
http://arxiv.org/abs/1805.09866v2
|
http://arxiv.org/pdf/1805.09866v2.pdf
| null |
[
"Fabio Massimo Zennaro",
"Magdalena Ivanovska"
] |
[
"BIG-bench Machine Learning",
"Causal Judgment",
"counterfactual",
"Fairness"
] | 2018-05-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/inverse-pomdp-inferring-what-you-think-from
|
1805.09864
| null | null |
Inverse Rational Control: Inferring What You Think from How You Forage
|
Complex behaviors are often driven by an internal model, which integrates sensory information over time and facilitates long-term planning. Inferring an agent's internal model is a crucial ingredient in social interactions (theory of mind), for imitation learning, and for interpreting neural activities of behaving agents. Here we describe a generic method to model an agent's behavior under an environment with uncertainty, and infer the agent's internal model, reward function, and dynamic beliefs. We apply our method to a simulated agent performing a naturalistic foraging task. We assume the agent behaves rationally --- that is, they take actions that optimize their subjective utility according to their understanding of the task and its relevant causal variables. We model this rational solution as a Partially Observable Markov Decision Process (POMDP) where the agent may make wrong assumptions about the task parameters. Given the agent's sensory observations and actions, we learn its internal model and reward function by maximum likelihood estimation over a set of task-relevant parameters. The Markov property of the POMDP enables us to characterize the transition probabilities between internal belief states and iteratively estimate the agent's policy using a constrained Expectation-Maximization (EM) algorithm. We validate our method on simulated agents performing suboptimally on a foraging task currently used in many neuroscience experiments, and successfully recover their internal model and reward function. Our work lays a critical foundation to discover how the brain represents and computes with dynamic beliefs.
| null |
https://arxiv.org/abs/1805.09864v4
|
https://arxiv.org/pdf/1805.09864v4.pdf
| null |
[
"Zhengwei Wu",
"Paul Schrater",
"Xaq Pitkow"
] |
[
"Imitation Learning"
] | 2018-05-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/neural-network-quine
|
1803.05859
| null | null |
Neural Network Quine
|
Self-replication is a key aspect of biological life that has been largely
overlooked in Artificial Intelligence systems. Here we describe how to build
and train self-replicating neural networks. The network replicates itself by
learning to output its own weights. The network is designed using a loss
function that can be optimized with either gradient-based or non-gradient-based
methods. We also describe a method we call regeneration to train the network
without explicit optimization, by injecting the network with predictions of its
own parameters. The best solution for a self-replicating network was found by
alternating between regeneration and optimization steps. Finally, we describe a
design for a self-replicating neural network that can solve an auxiliary task
such as MNIST image classification. We observe that there is a trade-off
between the network's ability to classify images and its ability to replicate,
but training is biased towards increasing its specialization at image
classification at the expense of replication. This is analogous to the
trade-off between reproduction and other tasks observed in nature. We suggest
that a self-replication mechanism for artificial intelligence is useful because
it introduces the possibility of continual improvement through natural
selection.
|
We also describe a method we call regeneration to train the network without explicit optimization, by injecting the network with predictions of its own parameters.
|
http://arxiv.org/abs/1803.05859v4
|
http://arxiv.org/pdf/1803.05859v4.pdf
| null |
[
"Oscar Chang",
"Hod Lipson"
] |
[
"General Classification",
"image-classification",
"Image Classification"
] | 2018-03-15T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/scale-robust-localization-using-general
|
1710.10466
| null | null |
Scale-Robust Localization Using General Object Landmarks
|
Visual localization under large changes in scale is an important capability
in many robotic mapping applications, such as localizing at low altitudes in
maps built at high altitudes, or performing loop closure over long distances.
Existing approaches, however, are robust only up to about a 3x difference in
scale between map and query images.
We propose a novel combination of deep-learning-based object features and
state-of-the-art SIFT point-features that yields improved robustness to scale
change. This technique is training-free and class-agnostic, and in principle
can be deployed in any environment out-of-the-box. We evaluate the proposed
technique on the KITTI Odometry benchmark and on a novel dataset of outdoor
images exhibiting changes in visual scale of $7\times$ and greater, which we
have released to the public. Our technique consistently outperforms
localization using either SIFT features or the proposed object features alone,
achieving both greater accuracy and much lower failure rates under large
changes in scale.
| null |
http://arxiv.org/abs/1710.10466v2
|
http://arxiv.org/pdf/1710.10466v2.pdf
| null |
[
"Andrew Holliday",
"Gregory Dudek"
] |
[
"Object",
"Visual Localization"
] | 2017-10-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/gradient-regularization-improves-accuracy-of
|
1712.09936
| null | null |
Gradient Regularization Improves Accuracy of Discriminative Models
|
Regularizing the gradient norm of the output of a neural network with respect
to its inputs is a powerful technique, rediscovered several times. This paper
presents evidence that gradient regularization can consistently improve
classification accuracy on vision tasks, using modern deep neural networks,
especially when the amount of training data is small. We introduce our
regularizers as members of a broader class of Jacobian-based regularizers. We
demonstrate empirically on real and synthetic data that the learning process
leads to gradients controlled beyond the training points, and results in
solutions that generalize well.
| null |
http://arxiv.org/abs/1712.09936v2
|
http://arxiv.org/pdf/1712.09936v2.pdf
| null |
[
"Dániel Varga",
"Adrián Csiszárik",
"Zsolt Zombori"
] |
[
"General Classification"
] | 2017-12-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/baseline-needs-more-love-on-simple-word
|
1805.09843
| null | null |
Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms
|
Many deep learning architectures have been proposed to model the
compositionality in text sequences, requiring a substantial number of
parameters and expensive computations. However, there has not been a rigorous
evaluation regarding the added value of sophisticated compositional functions.
In this paper, we conduct a point-by-point comparative study between Simple
Word-Embedding-based Models (SWEMs), consisting of parameter-free pooling
operations, relative to word-embedding-based RNN/CNN models. Surprisingly,
SWEMs exhibit comparable or even superior performance in the majority of cases
considered. Based upon this understanding, we propose two additional pooling
strategies over learned word embeddings: (i) a max-pooling operation for
improved interpretability; and (ii) a hierarchical pooling operation, which
preserves spatial (n-gram) information within text sequences. We present
experiments on 17 datasets encompassing three tasks: (i) (long) document
classification; (ii) text sequence matching; and (iii) short text tasks,
including classification and tagging. The source code and datasets can be
obtained from https://github.com/dinghanshen/SWEM.
|
Many deep learning architectures have been proposed to model the compositionality in text sequences, requiring a substantial number of parameters and expensive computations.
|
http://arxiv.org/abs/1805.09843v1
|
http://arxiv.org/pdf/1805.09843v1.pdf
|
ACL 2018 7
|
[
"Dinghan Shen",
"Guoyin Wang",
"Wenlin Wang",
"Martin Renqiang Min",
"Qinliang Su",
"Yizhe Zhang",
"Chunyuan Li",
"Ricardo Henao",
"Lawrence Carin"
] |
[
"Document Classification",
"General Classification",
"Named Entity Recognition (NER)",
"Sentiment Analysis",
"Subjectivity Analysis",
"Text Classification",
"Word Embeddings"
] | 2018-05-24T00:00:00 |
https://aclanthology.org/P18-1041
|
https://aclanthology.org/P18-1041.pdf
|
baseline-needs-more-love-on-simple-word-1
| null |
[] |
https://paperswithcode.com/paper/stereo-magnification-learning-view-synthesis
|
1805.09817
| null | null |
Stereo Magnification: Learning View Synthesis using Multiplane Images
|
The view synthesis problem--generating novel views of a scene from known
imagery--has garnered recent attention due in part to compelling applications
in virtual and augmented reality. In this paper, we explore an intriguing
scenario for view synthesis: extrapolating views from imagery captured by
narrow-baseline stereo cameras, including VR cameras and now-widespread
dual-lens camera phones. We call this problem stereo magnification, and propose
a learning framework that leverages a new layered representation that we call
multiplane images (MPIs). Our method also uses a massive new data source for
learning view extrapolation: online videos on YouTube. Using data mined from
such videos, we train a deep network that predicts an MPI from an input stereo
image pair. This inferred MPI can then be used to synthesize a range of novel
views of the scene, including views that extrapolate significantly beyond the
input baseline. We show that our method compares favorably with several recent
view synthesis methods, and demonstrate applications in magnifying
narrow-baseline stereo images.
|
The view synthesis problem--generating novel views of a scene from known imagery--has garnered recent attention due in part to compelling applications in virtual and augmented reality.
|
http://arxiv.org/abs/1805.09817v1
|
http://arxiv.org/pdf/1805.09817v1.pdf
| null |
[
"Tinghui Zhou",
"Richard Tucker",
"John Flynn",
"Graham Fyffe",
"Noah Snavely"
] |
[
"Novel View Synthesis"
] | 2018-05-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/model-independent-online-learning-for
|
1703.00557
| null | null |
Model-Independent Online Learning for Influence Maximization
|
We consider influence maximization (IM) in social networks, which is the
problem of maximizing the number of users that become aware of a product by
selecting a set of "seed" users to expose the product to. While prior work
assumes a known model of information diffusion, we propose a novel
parametrization that not only makes our framework agnostic to the underlying
diffusion model, but also statistically efficient to learn from data. We give a
corresponding monotone, submodular surrogate function, and show that it is a
good approximation to the original IM objective. We also consider the case of a
new marketer looking to exploit an existing social network, while
simultaneously learning the factors governing information propagation. For
this, we propose a pairwise-influence semi-bandit feedback model and develop a
LinUCB-based bandit algorithm. Our model-independent analysis shows that our
regret bound has a better (as compared to previous work) dependence on the size
of the network. Experimental evaluation suggests that our framework is robust
to the underlying diffusion model and can efficiently learn a near-optimal
solution.
| null |
http://arxiv.org/abs/1703.00557v2
|
http://arxiv.org/pdf/1703.00557v2.pdf
|
ICML 2017 8
|
[
"Sharan Vaswani",
"Branislav Kveton",
"Zheng Wen",
"Mohammad Ghavamzadeh",
"Laks Lakshmanan",
"Mark Schmidt"
] |
[
"model"
] | 2017-03-01T00:00:00 |
https://icml.cc/Conferences/2017/Schedule?showEvent=639
|
http://proceedings.mlr.press/v70/vaswani17a/vaswani17a.pdf
|
model-independent-online-learning-for-1
| null |
[] |
https://paperswithcode.com/paper/competitive-collaboration-joint-unsupervised
|
1805.09806
| null | null |
Competitive Collaboration: Joint Unsupervised Learning of Depth, Camera Motion, Optical Flow and Motion Segmentation
|
We address the unsupervised learning of several interconnected problems in
low-level vision: single view depth prediction, camera motion estimation,
optical flow, and segmentation of a video into the static scene and moving
regions. Our key insight is that these four fundamental vision problems are
coupled through geometric constraints. Consequently, learning to solve them
together simplifies the problem because the solutions can reinforce each other.
We go beyond previous work by exploiting geometry more explicitly and
segmenting the scene into static and moving regions. To that end, we introduce
Competitive Collaboration, a framework that facilitates the coordinated
training of multiple specialized neural networks to solve complex problems.
Competitive Collaboration works much like expectation-maximization, but with
neural networks that act as both competitors to explain pixels that correspond
to static or moving regions, and as collaborators through a moderator that
assigns pixels to be either static or independently moving. Our novel method
integrates all these problems in a common framework and simultaneously reasons
about the segmentation of the scene into moving objects and the static
background, the camera motion, depth of the static scene structure, and the
optical flow of moving objects. Our model is trained without any supervision
and achieves state-of-the-art performance among joint unsupervised methods on
all sub-problems.
|
We address the unsupervised learning of several interconnected problems in low-level vision: single view depth prediction, camera motion estimation, optical flow, and segmentation of a video into the static scene and moving regions.
|
http://arxiv.org/abs/1805.09806v3
|
http://arxiv.org/pdf/1805.09806v3.pdf
|
CVPR 2019 6
|
[
"Anurag Ranjan",
"Varun Jampani",
"Lukas Balles",
"Kihwan Kim",
"Deqing Sun",
"Jonas Wulff",
"Michael J. Black"
] |
[
"Depth Estimation",
"Depth Prediction",
"Monocular Depth Estimation",
"Motion Estimation",
"Motion Segmentation",
"Optical Flow Estimation"
] | 2018-05-24T00:00:00 |
http://openaccess.thecvf.com/content_CVPR_2019/html/Ranjan_Competitive_Collaboration_Joint_Unsupervised_Learning_of_Depth_Camera_Motion_Optical_CVPR_2019_paper.html
|
http://openaccess.thecvf.com/content_CVPR_2019/papers/Ranjan_Competitive_Collaboration_Joint_Unsupervised_Learning_of_Depth_Camera_Motion_Optical_CVPR_2019_paper.pdf
|
competitive-collaboration-joint-unsupervised-1
| null |
[] |
https://paperswithcode.com/paper/implicit-autoencoders
|
1805.09804
| null |
HyMRaoAqKX
|
Implicit Autoencoders
|
In this paper, we describe the "implicit autoencoder" (IAE), a generative
autoencoder in which both the generative path and the recognition path are
parametrized by implicit distributions. We use two generative adversarial
networks to define the reconstruction and the regularization cost functions of
the implicit autoencoder, and derive the learning rules based on
maximum-likelihood learning. Using implicit distributions allows us to learn
more expressive posterior and conditional likelihood distributions for the
autoencoder. Learning an expressive conditional likelihood distribution enables
the latent code to only capture the abstract and high-level information of the
data, while the remaining low-level information is captured by the implicit
conditional likelihood distribution. We show the applications of implicit
autoencoders in disentangling content and style information, clustering,
semi-supervised classification, learning expressive variational distributions,
and multimodal image-to-image translation from unpaired data.
| null |
http://arxiv.org/abs/1805.09804v2
|
http://arxiv.org/pdf/1805.09804v2.pdf
|
ICLR 2019 5
|
[
"Alireza Makhzani"
] |
[
"Clustering",
"Image-to-Image Translation",
"Translation"
] | 2018-05-24T00:00:00 |
https://openreview.net/forum?id=HyMRaoAqKX
|
https://openreview.net/pdf?id=HyMRaoAqKX
|
implicit-autoencoders-1
| null |
[] |
https://paperswithcode.com/paper/meta-gradient-reinforcement-learning
|
1805.09801
| null | null |
Meta-Gradient Reinforcement Learning
|
The goal of reinforcement learning algorithms is to estimate and/or optimise
the value function. However, unlike supervised learning, no teacher or oracle
is available to provide the true value function. Instead, the majority of
reinforcement learning algorithms estimate and/or optimise a proxy for the
value function. This proxy is typically based on a sampled and bootstrapped
approximation to the true value function, known as a return. The particular
choice of return is one of the chief components determining the nature of the
algorithm: the rate at which future rewards are discounted; when and how values
should be bootstrapped; or even the nature of the rewards themselves. It is
well-known that these decisions are crucial to the overall success of RL
algorithms. We discuss a gradient-based meta-learning algorithm that is able to
adapt the nature of the return, online, whilst interacting and learning from
the environment. When applied to 57 games on the Atari 2600 environment over
200 million frames, our algorithm achieved a new state-of-the-art performance.
|
Instead, the majority of reinforcement learning algorithms estimate and/or optimise a proxy for the value function.
|
http://arxiv.org/abs/1805.09801v1
|
http://arxiv.org/pdf/1805.09801v1.pdf
|
NeurIPS 2018 12
|
[
"Zhongwen Xu",
"Hado van Hasselt",
"David Silver"
] |
[
"Meta-Learning",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-05-24T00:00:00 |
http://papers.nips.cc/paper/7507-meta-gradient-reinforcement-learning
|
http://papers.nips.cc/paper/7507-meta-gradient-reinforcement-learning.pdf
|
meta-gradient-reinforcement-learning-1
| null |
[] |
https://paperswithcode.com/paper/prediction-of-autism-treatment-response-from
|
1805.09799
| null | null |
Prediction of Autism Treatment Response from Baseline fMRI using Random Forests and Tree Bagging
|
Treating children with autism spectrum disorders (ASD) with behavioral
interventions, such as Pivotal Response Treatment (PRT), has shown promise in
recent studies. However, deciding which therapy is best for a given patient is
largely by trial and error, and choosing an ineffective intervention results in
loss of valuable treatment time. We propose predicting patient response to PRT
from baseline task-based fMRI by the novel application of a random forest and
tree bagging strategy. Our proposed learning pipeline uses random forest
regression to determine candidate brain voxels that may be informative in
predicting treatment response. The candidate voxels are then tested stepwise
for inclusion in a bagged tree ensemble. After the predictive model is
constructed, bias correction is performed to further increase prediction
accuracy. Using data from 19 ASD children who underwent a 16 week trial of PRT
and a leave-one-out cross-validation framework, the presented learning pipeline
was tested against several standard methods and variations of the pipeline and
resulted in the highest prediction accuracy.
| null |
http://arxiv.org/abs/1805.09799v1
|
http://arxiv.org/pdf/1805.09799v1.pdf
| null |
[
"Nicha C. Dvornek",
"Daniel Yang",
"Archana Venkataraman",
"Pamela Ventola",
"Lawrence H. Staib",
"Kevin A. Pelphrey",
"James S. Duncan"
] |
[] | 2018-05-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/new-insights-into-bootstrapping-for-bandits
|
1805.09793
| null | null |
New Insights into Bootstrapping for Bandits
|
We investigate the use of bootstrapping in the bandit setting. We first show
that the commonly used non-parametric bootstrapping (NPB) procedure can be
provably inefficient and establish a near-linear lower bound on the regret
incurred by it under the bandit model with Bernoulli rewards. We show that NPB
with an appropriate amount of forced exploration can result in sub-linear
albeit sub-optimal regret. As an alternative to NPB, we propose a weighted
bootstrapping (WB) procedure. For Bernoulli rewards, WB with multiplicative
exponential weights is mathematically equivalent to Thompson sampling (TS) and
results in near-optimal regret bounds. Similarly, in the bandit setting with
Gaussian rewards, we show that WB with additive Gaussian weights achieves
near-optimal regret. Beyond these special cases, we show that WB leads to
better empirical performance than TS for several reward distributions bounded
on $[0,1]$. For the contextual bandit setting, we give practical guidelines
that make bootstrapping simple and efficient to implement and result in good
empirical performance on real-world datasets.
| null |
http://arxiv.org/abs/1805.09793v1
|
http://arxiv.org/pdf/1805.09793v1.pdf
| null |
[
"Sharan Vaswani",
"Branislav Kveton",
"Zheng Wen",
"Anup Rao",
"Mark Schmidt",
"Yasin Abbasi-Yadkori"
] |
[
"Thompson Sampling"
] | 2018-05-24T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/mchelali/TemporalStability",
"description": "Spatio-temporal features extraction that measure the stabilty. The proposed method is based on a compression algorithm named Run Length Encoding. The workflow of the method is presented bellow.",
"full_name": "Spatio-temporal stability analysis",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Feature Extractors** for object detection are modules used to construct features that can be used for detecting objects. They address issues such as the need to detect multiple-sized objects in an image (and the need to have representations that are suitable for the different scales).",
"name": "Feature Extractors",
"parent": null
},
"name": "TS",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/multi-task-zipping-via-layer-wise-neuron
|
1805.09791
| null | null |
Multi-Task Zipping via Layer-wise Neuron Sharing
|
Future mobile devices are anticipated to perceive, understand and react to
the world on their own by running multiple correlated deep neural networks
on-device. Yet the complexity of these neural networks needs to be trimmed down
both within-model and cross-model to fit in mobile storage and memory. Previous
studies focus on squeezing the redundancy within a single neural network. In
this work, we aim to reduce the redundancy across multiple models. We propose
Multi-Task Zipping (MTZ), a framework to automatically merge correlated,
pre-trained deep neural networks for cross-model compression. Central in MTZ is
a layer-wise neuron sharing and incoming weight updating scheme that induces a
minimal change in the error function. MTZ inherits information from each model
and demands light retraining to re-boost the accuracy of individual tasks.
Evaluations show that MTZ is able to fully merge the hidden layers of two
VGG-16 networks with a 3.18% increase in the test error averaged on ImageNet
and CelebA, or share 39.61% parameters between the two networks with <0.5%
increase in the test errors for both tasks. The number of iterations to retrain
the combined network is at least 17.8 times lower than that of training a
single VGG-16 network. Moreover, experiments show that MTZ is also able to
effectively merge multiple residual networks.
| null |
http://arxiv.org/abs/1805.09791v2
|
http://arxiv.org/pdf/1805.09791v2.pdf
|
NeurIPS 2018 12
|
[
"Xiaoxi He",
"Zimu Zhou",
"Lothar Thiele"
] |
[
"Model Compression"
] | 2018-05-24T00:00:00 |
http://papers.nips.cc/paper/7841-multi-task-zipping-via-layer-wise-neuron-sharing
|
http://papers.nips.cc/paper/7841-multi-task-zipping-via-layer-wise-neuron-sharing.pdf
|
multi-task-zipping-via-layer-wise-neuron-1
| null |
[] |
https://paperswithcode.com/paper/improving-landmark-localization-with-semi
|
1709.01591
| null | null |
Improving Landmark Localization with Semi-Supervised Learning
|
We present two techniques to improve landmark localization in images from
partially annotated datasets. Our primary goal is to leverage the common
situation where precise landmark locations are only provided for a small data
subset, but where class labels for classification or regression tasks related
to the landmarks are more abundantly available. First, we propose the framework
of sequential multitasking and explore it here through an architecture for
landmark localization where training with class labels acts as an auxiliary
signal to guide the landmark localization on unlabeled data. A key aspect of
our approach is that errors can be backpropagated through a complete landmark
localization model. Second, we propose and explore an unsupervised learning
technique for landmark localization based on having a model predict equivariant
landmarks with respect to transformations applied to the image. We show that
these techniques improve landmark prediction considerably and can learn
effective detectors even when only a small fraction of the dataset has landmark
labels. We present results on two toy datasets and four real datasets, with
hands and faces, and report new state-of-the-art on two datasets in the wild,
e.g. with only 5% of labeled images we outperform previous state-of-the-art
trained on the AFLW dataset.
| null |
http://arxiv.org/abs/1709.01591v7
|
http://arxiv.org/pdf/1709.01591v7.pdf
|
CVPR 2018 6
|
[
"Sina Honari",
"Pavlo Molchanov",
"Stephen Tyree",
"Pascal Vincent",
"Christopher Pal",
"Jan Kautz"
] |
[
"Face Alignment",
"Small Data Image Classification"
] | 2017-09-05T00:00:00 |
http://openaccess.thecvf.com/content_cvpr_2018/html/Honari_Improving_Landmark_Localization_CVPR_2018_paper.html
|
http://openaccess.thecvf.com/content_cvpr_2018/papers/Honari_Improving_Landmark_Localization_CVPR_2018_paper.pdf
|
improving-landmark-localization-with-semi-1
| null |
[] |
https://paperswithcode.com/paper/style-transfer-through-back-translation
|
1804.09000
| null | null |
Style Transfer Through Back-Translation
|
Style transfer is the task of rephrasing the text to contain specific
stylistic properties without changing the intent or affect within the context.
This paper introduces a new method for automatic style transfer. We first learn
a latent representation of the input sentence which is grounded in a language
translation model in order to better preserve the meaning of the sentence while
reducing stylistic properties. Then adversarial generation techniques are used
to make the output match the desired style. We evaluate this technique on three
different style transformations: sentiment, gender and political slant.
Compared to two state-of-the-art style transfer modeling techniques we show
improvements both in automatic evaluation of style transfer and in manual
evaluation of meaning preservation and fluency.
|
We first learn a latent representation of the input sentence which is grounded in a language translation model in order to better preserve the meaning of the sentence while reducing stylistic properties.
|
http://arxiv.org/abs/1804.09000v3
|
http://arxiv.org/pdf/1804.09000v3.pdf
|
ACL 2018 7
|
[
"Shrimai Prabhumoye",
"Yulia Tsvetkov",
"Ruslan Salakhutdinov",
"Alan W. black"
] |
[
"Sentence",
"Style Transfer",
"Text Style Transfer",
"Translation"
] | 2018-04-24T00:00:00 |
https://aclanthology.org/P18-1080
|
https://aclanthology.org/P18-1080.pdf
|
style-transfer-through-back-translation-1
| null |
[] |
https://paperswithcode.com/paper/hyperbolic-attention-networks
|
1805.09786
| null |
rJxHsjRqFQ
|
Hyperbolic Attention Networks
|
We introduce hyperbolic attention networks to endow neural networks with
enough capacity to match the complexity of data with hierarchical and power-law
structure. A few recent approaches have successfully demonstrated the benefits
of imposing hyperbolic geometry on the parameters of shallow networks. We
extend this line of work by imposing hyperbolic geometry on the activations of
neural networks. This allows us to exploit hyperbolic geometry to reason about
embeddings produced by deep networks. We achieve this by re-expressing the
ubiquitous mechanism of soft attention in terms of operations defined for
hyperboloid and Klein models. Our method shows improvements in terms of
generalization on neural machine translation, learning on graphs and visual
question answering tasks while keeping the neural representations compact.
| null |
http://arxiv.org/abs/1805.09786v1
|
http://arxiv.org/pdf/1805.09786v1.pdf
|
ICLR 2019 5
|
[
"Caglar Gulcehre",
"Misha Denil",
"Mateusz Malinowski",
"Ali Razavi",
"Razvan Pascanu",
"Karl Moritz Hermann",
"Peter Battaglia",
"Victor Bapst",
"David Raposo",
"Adam Santoro",
"Nando de Freitas"
] |
[
"Machine Translation",
"Question Answering",
"Translation",
"Visual Question Answering",
"Visual Question Answering (VQA)"
] | 2018-05-24T00:00:00 |
https://openreview.net/forum?id=rJxHsjRqFQ
|
https://openreview.net/pdf?id=rJxHsjRqFQ
|
hyperbolic-attention-networks-1
| null |
[] |
https://paperswithcode.com/paper/entropy-and-mutual-information-in-models-of
|
1805.09785
| null | null |
Entropy and mutual information in models of deep neural networks
|
We examine a class of deep learning models with a tractable method to compute
information-theoretic quantities. Our contributions are three-fold: (i) We show
how entropies and mutual informations can be derived from heuristic statistical
physics methods, under the assumption that weight matrices are independent and
orthogonally-invariant. (ii) We extend particular cases in which this result is
known to be rigorously exact by providing a proof for two-layers networks with
Gaussian random weights, using the recently introduced adaptive interpolation
method. (iii) We propose an experiment framework with generative models of
synthetic datasets, on which we train deep neural networks with a weight
constraint designed so that the assumption in (i) is verified during learning.
We study the behavior of entropies and mutual informations throughout learning
and conclude that, in the proposed setting, the relationship between
compression and generalization remains elusive.
|
We examine a class of deep learning models with a tractable method to compute information-theoretic quantities.
|
http://arxiv.org/abs/1805.09785v2
|
http://arxiv.org/pdf/1805.09785v2.pdf
|
NeurIPS 2018 12
|
[
"Marylou Gabrié",
"Andre Manoel",
"Clément Luneau",
"Jean Barbier",
"Nicolas Macris",
"Florent Krzakala",
"Lenka Zdeborová"
] |
[] | 2018-05-24T00:00:00 |
http://papers.nips.cc/paper/7453-entropy-and-mutual-information-in-models-of-deep-neural-networks
|
http://papers.nips.cc/paper/7453-entropy-and-mutual-information-in-models-of-deep-neural-networks.pdf
|
entropy-and-mutual-information-in-models-of-1
| null |
[] |
https://paperswithcode.com/paper/efficient-inference-in-multi-task-cox-process
|
1805.09781
| null | null |
Efficient Inference in Multi-task Cox Process Models
|
We generalize the log Gaussian Cox process (LGCP) framework to model multiple
correlated point data jointly. The observations are treated as realizations of
multiple LGCPs, whose log intensities are given by linear combinations of
latent functions drawn from Gaussian process priors. The combination
coefficients are also drawn from Gaussian processes and can incorporate
additional dependencies. We derive closed-form expressions for the moments of
the intensity functions and develop an efficient variational inference
algorithm that is orders of magnitude faster than competing deterministic and
stochastic approximations of multivariate LGCP, coregionalization models, and
multi-task permanental processes. Our approach outperforms these benchmarks in
multiple problems, offering the current state of the art in modeling
multivariate point processes.
|
We generalize the log Gaussian Cox process (LGCP) framework to model multiple correlated point data jointly.
|
http://arxiv.org/abs/1805.09781v3
|
http://arxiv.org/pdf/1805.09781v3.pdf
| null |
[
"Virginia Aglietti",
"Theodoros Damoulas",
"Edwin Bonilla"
] |
[
"Gaussian Processes",
"Point Processes",
"Variational Inference"
] | 2018-05-24T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Gaussian Processes** are non-parametric models for approximating functions. They rely upon a measure of similarity between points (the kernel function) to predict the value for an unseen point from training data. The models are fully probabilistic so uncertainty bounds are baked in with the model.\r\n\r\nImage Source: Gaussian Processes for Machine Learning, C. E. Rasmussen & C. K. I. Williams",
"full_name": "Gaussian Process",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "Gaussian Process",
"source_title": null,
"source_url": null
}
] |
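The LGCP construction described in this abstract (a Gaussian process draw used as a log intensity, with Poisson counts given that intensity) can be sketched in a few lines. This is a toy single-task forward simulation only, not the paper's multi-task variational inference; the grid size, lengthscale, and jitter are illustrative choices.

```python
import numpy as np

# Toy simulation of a log Gaussian Cox process on a 1-D grid:
# 1) draw a log-intensity function from a GP prior,
# 2) exponentiate to get a positive intensity,
# 3) sample Poisson counts per grid cell.
np.random.seed(0)
grid = np.linspace(0, 1, 100)
d = grid[:, None] - grid[None, :]
K = np.exp(-0.5 * (d / 0.1) ** 2) + 1e-6 * np.eye(len(grid))  # SE kernel + jitter
f = np.linalg.cholesky(K) @ np.random.randn(len(grid))        # GP draw (log intensity)
lam = np.exp(f)                                               # intensity is positive by construction
counts = np.random.poisson(lam * (grid[1] - grid[0]))         # counts per cell
print(counts.sum())
```

The paper's multi-task model would draw several such latent functions and mix them with GP-distributed coefficients before exponentiating.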
https://paperswithcode.com/paper/mining-procedures-from-technical-support
|
1805.09780
| null | null |
Mining Procedures from Technical Support Documents
|
Guided troubleshooting is an inherent task in the domain of technical support
services. When a customer experiences an issue with the functioning of a
technical service or a product, an expert user helps guide the customer through
a set of steps comprising a troubleshooting procedure. The objective is to
identify the source of the problem through a set of diagnostic steps and
observations, and arrive at a resolution. Procedures containing these set of
diagnostic steps and observations in response to different problems are common
artifacts in the body of technical support documentation. The ability to use
machine learning and linguistics to understand and leverage these procedures
for applications like intelligent chatbots or robotic process automation, is
crucial. Existing research on question answering or intelligent chatbots does
not look within procedures or deep-understand them. In this paper, we outline a
system for mining procedures from technical support documents. We create models
for solving important subproblems like extraction of procedures, identifying
decision points within procedures, identifying blocks of instructions
corresponding to these decision points and mapping instructions within a
decision block. We also release a dataset containing our manual annotations on
publicly available support documents, to promote further research on the
problem.
| null |
http://arxiv.org/abs/1805.09780v1
|
http://arxiv.org/pdf/1805.09780v1.pdf
| null |
[
"Abhirut Gupta",
"Abhay Khosla",
"Gautam Singh",
"Gargi Dasgupta"
] |
[
"Diagnostic",
"Question Answering"
] | 2018-05-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/hotflip-white-box-adversarial-examples-for
|
1712.06751
| null | null |
HotFlip: White-Box Adversarial Examples for Text Classification
|
We propose an efficient method to generate white-box adversarial examples to
trick a character-level neural classifier. We find that only a few
manipulations are needed to greatly decrease the accuracy. Our method relies on
an atomic flip operation, which swaps one token for another, based on the
gradients of the one-hot input vectors. Due to efficiency of our method, we can
perform adversarial training which makes the model more robust to attacks at
test time. With the use of a few semantics-preserving constraints, we
demonstrate that HotFlip can be adapted to attack a word-level classifier as
well.
|
We propose an efficient method to generate white-box adversarial examples to trick a character-level neural classifier.
|
http://arxiv.org/abs/1712.06751v2
|
http://arxiv.org/pdf/1712.06751v2.pdf
|
ACL 2018 7
|
[
"Javid Ebrahimi",
"Anyi Rao",
"Daniel Lowd",
"Dejing Dou"
] |
[
"Classification",
"General Classification",
"text-classification",
"Text Classification"
] | 2017-12-19T00:00:00 |
https://aclanthology.org/P18-2006
|
https://aclanthology.org/P18-2006.pdf
|
hotflip-white-box-adversarial-examples-for-1
| null |
[] |
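The atomic flip operation is a first-order estimate: for a differentiable loss L over one-hot inputs, swapping position p from token a to token b changes L by roughly ∂L/∂x[p,b] − ∂L/∂x[p,a]. A toy sketch with a hypothetical linear scorer standing in for the character-level classifier (illustrative only, not the paper's code):

```python
import numpy as np

V, T = 5, 4                      # vocabulary size, sequence length
np.random.seed(0)
W = np.random.randn(V)           # toy scorer: loss = sum over positions of W[token]
tokens = np.array([0, 1, 2, 3])
onehot = np.eye(V)[tokens]       # (T, V) one-hot input
grad = np.tile(W, (T, 1))        # dL/d(onehot) for this linear toy loss

# score[p, b] = first-order estimate of the loss increase from
# flipping position p to token b (flipping to the same token scores 0)
score = grad - grad[np.arange(T), tokens][:, None]
p, b = np.unravel_index(np.argmax(score), score.shape)
print(int(p), int(b))            # position and replacement token of the best flip
```

In the real method the gradient comes from backpropagation through the character-level network, and beam search over sequences of flips replaces the single argmax.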
https://paperswithcode.com/paper/local-sgd-converges-fast-and-communicates
|
1805.09767
| null |
S1g2JnRcFX
|
Local SGD Converges Fast and Communicates Little
|
Mini-batch stochastic gradient descent (SGD) is state of the art in large scale distributed training. The scheme can reach a linear speedup with respect to the number of workers, but this is rarely seen in practice as the scheme often suffers from large network delays and bandwidth limits. To overcome this communication bottleneck, recent works propose to reduce the communication frequency. An algorithm of this type is local SGD that runs SGD independently in parallel on different workers and averages the sequences only once in a while. This scheme shows promising results in practice, but has eluded thorough theoretical analysis. We prove concise convergence rates for local SGD on convex problems and show that it converges at the same rate as mini-batch SGD in terms of number of evaluated gradients, that is, the scheme achieves linear speedup in the number of workers and mini-batch size. The number of communication rounds can be reduced up to a factor of T^{1/2}---where T denotes the number of total steps---compared to mini-batch SGD. This also holds for asynchronous implementations. Local SGD can also be used for large scale training of deep learning models. The results shown here aim to serve as a guideline to further explore the theoretical and practical aspects of local SGD in these applications.
|
Local SGD can also be used for large scale training of deep learning models.
|
https://arxiv.org/abs/1805.09767v3
|
https://arxiv.org/pdf/1805.09767v3.pdf
|
ICLR 2019 5
|
[
"Sebastian U. Stich"
] |
[] | 2018-05-24T00:00:00 |
https://openreview.net/forum?id=S1g2JnRcFX
|
https://openreview.net/pdf?id=S1g2JnRcFX
|
local-sgd-converges-fast-and-communicates-1
| null |
[
{
"code_snippet_url": "",
"description": "**Local SGD** is a distributed training technique that runs [SGD](https://paperswithcode.com/method/sgd) independently in parallel on different workers and averages the sequences only once in a while.",
"full_name": "Local SGD",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "Local SGD",
"source_title": "Local SGD Converges Fast and Communicates Little",
"source_url": "https://arxiv.org/abs/1805.09767v3"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/4e0ac120e9a8b096069c2f892488d630a5c8f358/torch/optim/sgd.py#L97-L112",
"description": "**Stochastic Gradient Descent** is an iterative optimization technique that uses minibatches of data to form an expectation of the gradient, rather than the full gradient using all available data. That is for weights $w$ and a loss function $L$ we have:\r\n\r\n$$ w\\_{t+1} = w\\_{t} - \\eta\\hat{\\nabla}\\_{w}{L(w\\_{t})} $$\r\n\r\nWhere $\\eta$ is a learning rate. SGD reduces redundancy compared to batch gradient descent - which recomputes gradients for similar examples before each parameter update - so it is usually much faster.\r\n\r\n(Image Source: [here](http://rasbt.github.io/mlxtend/user_guide/general_concepts/gradient-optimization/))",
"full_name": "Stochastic Gradient Descent",
"introduced_year": 1951,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "SGD",
"source_title": null,
"source_url": null
}
] |
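The local SGD scheme described above, in which each worker takes several independent SGD steps on its own data shard and all iterates are then averaged in a single communication step, can be simulated in one process. A minimal sketch on a noiseless least-squares problem; all names and hyperparameters are illustrative:

```python
import numpy as np

def local_sgd(shards, w0, lr=0.1, rounds=60, local_steps=5, seed=0):
    rng = np.random.default_rng(seed)
    workers = [w0.copy() for _ in shards]
    for _ in range(rounds):
        for w, (X, y) in zip(workers, shards):
            for _ in range(local_steps):
                i = rng.integers(len(y))            # one-sample stochastic gradient
                w -= lr * (X[i] @ w - y[i]) * X[i]  # in-place local update
        avg = np.mean(workers, axis=0)              # the only communication step
        workers = [avg.copy() for _ in workers]
    return workers[0]

rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0])
X = rng.normal(size=(400, 2))
y = X @ w_true                                      # noiseless linear data
shards = [(X[k::4], y[k::4]) for k in range(4)]     # 4 workers, disjoint shards
w_hat = local_sgd(shards, np.zeros(2))
print(np.round(w_hat, 2))
```

Communication here happens once every `local_steps` gradient evaluations per worker, which is the frequency reduction the convergence analysis quantifies.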
https://paperswithcode.com/paper/quantifying-uncertainty-in-discrete
|
1802.04742
| null | null |
Quantifying Uncertainty in Discrete-Continuous and Skewed Data with Bayesian Deep Learning
|
Deep Learning (DL) methods have been transforming computer vision with
innovative adaptations to other domains including climate change. For DL to
pervade Science and Engineering (S&E) applications where risk management is a
core component, well-characterized uncertainty estimates must accompany
predictions. However, S&E observations and model-simulations often follow
heavily skewed distributions and are not well modeled with DL approaches, since
they usually optimize a Gaussian, or Euclidean, likelihood loss. Recent
developments in Bayesian Deep Learning (BDL), which attempts to capture
uncertainties from noisy observations, aleatoric, and from unknown model
parameters, epistemic, provide us a foundation. Here we present a
discrete-continuous BDL model with Gaussian and lognormal likelihoods for
uncertainty quantification (UQ). We demonstrate the approach by developing UQ
estimates on `DeepSD', a super-resolution based DL model for Statistical
Downscaling (SD) in climate applied to precipitation, which follows an
extremely skewed distribution. We find that the discrete-continuous models
outperform a basic Gaussian distribution in terms of predictive accuracy and
uncertainty calibration. Furthermore, we find that the lognormal distribution,
which can handle skewed distributions, produces quality uncertainty estimates
at the extremes. Such results may be important across S&E, as well as other
domains such as finance and economics, where extremes are often of significant
interest. Furthermore, to our knowledge, this is the first UQ model in SD where
both aleatoric and epistemic uncertainties are characterized.
|
Furthermore, we find that the lognormal distribution, which can handle skewed distributions, produces quality uncertainty estimates at the extremes.
|
http://arxiv.org/abs/1802.04742v2
|
http://arxiv.org/pdf/1802.04742v2.pdf
| null |
[
"Thomas Vandal",
"Evan Kodra",
"Jennifer Dy",
"Sangram Ganguly",
"Ramakrishna Nemani",
"Auroop R. Ganguly"
] |
[
"Management",
"Super-Resolution",
"Uncertainty Quantification"
] | 2018-02-13T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/geographical-hidden-markov-tree-for-flood
|
1805.09757
| null | null |
Geographical Hidden Markov Tree for Flood Extent Mapping (With Proof Appendix)
|
Flood extent mapping plays a crucial role in disaster management and national
water forecasting. Unfortunately, traditional classification methods are often
hampered by the existence of noise, obstacles and heterogeneity in spectral
features as well as implicit anisotropic spatial dependency across class
labels. In this paper, we propose geographical hidden Markov tree, a
probabilistic graphical model that generalizes the common hidden Markov model
from a one dimensional sequence to a two dimensional map. Anisotropic spatial
dependency is incorporated in the hidden class layer with a reverse tree
structure. We also investigate computational algorithms for reverse tree
construction, model parameter learning and class inference. Extensive
evaluations on both synthetic and real-world datasets show that the proposed
model outperforms multiple baselines in flood mapping, and that our algorithms
are scalable to large data sizes.
| null |
http://arxiv.org/abs/1805.09757v1
|
http://arxiv.org/pdf/1805.09757v1.pdf
| null |
[
"Miao Xie",
"Zhe Jiang",
"Arpan Man Sainju"
] |
[
"General Classification",
"Management"
] | 2018-05-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/keep-it-unreal-bridging-the-realism-gap-for
|
1804.09113
| null | null |
Keep it Unreal: Bridging the Realism Gap for 2.5D Recognition with Geometry Priors Only
|
With the increasing availability of large databases of 3D CAD models,
depth-based recognition methods can be trained on an uncountable number of
synthetically rendered images. However, discrepancies with the real data
acquired from various depth sensors still noticeably impede progress. Previous
works adopted unsupervised approaches to generate more realistic depth data,
but they all require real scans for training, even if unlabeled. This still
represents a strong requirement, especially when considering
real-life/industrial settings where real training images are hard or impossible
to acquire, but texture-less 3D models are available. We thus propose a novel
approach leveraging only CAD models to bridge the realism gap. Purely trained
on synthetic data, playing against an extensive augmentation pipeline in an
unsupervised manner, our generative adversarial network learns to effectively
segment depth images and recover the clean synthetic-looking depth information
even from partial occlusions. As our solution is not only fully decoupled from
the real domains but also from the task-specific analytics, the pre-processed
scans can be handed to any kind and number of recognition methods also trained
on synthetic data. Through various experiments, we demonstrate how this
simplifies their training and consistently enhances their performance, with
results on par with the same methods trained on real data, and better than
usual approaches doing the reverse mapping.
| null |
http://arxiv.org/abs/1804.09113v2
|
http://arxiv.org/pdf/1804.09113v2.pdf
| null |
[
"Sergey Zakharov",
"Benjamin Planche",
"Ziyan Wu",
"Andreas Hutter",
"Harald Kosch",
"Slobodan Ilic"
] |
[
"Generative Adversarial Network"
] | 2018-04-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/mobiface-a-novel-dataset-for-mobile-face
|
1805.09749
| null | null |
MobiFace: A Novel Dataset for Mobile Face Tracking in the Wild
|
Face tracking serves as the crucial initial step in mobile applications
trying to analyse target faces over time in mobile settings. However, this
problem has received little attention, mainly due to the scarcity of dedicated
face tracking benchmarks. In this work, we introduce MobiFace, the first
dataset for single face tracking in mobile situations. It consists of 80
unedited live-streaming mobile videos captured by 70 different smartphone users
in fully unconstrained environments. Over $95K$ bounding boxes are manually
labelled. The videos are carefully selected to cover typical smartphone usage.
The videos are also annotated with 14 attributes, including 6 newly proposed
attributes and 8 commonly seen in object tracking. 36 state-of-the-art
trackers, including facial landmark trackers, generic object trackers and
trackers that we have fine-tuned or improved, are evaluated. The results
suggest that mobile face tracking cannot be solved through existing approaches.
In addition, we show that fine-tuning on the MobiFace training data
significantly boosts the performance of deep learning-based trackers,
suggesting that MobiFace captures the unique characteristics of mobile face
tracking. Our goal is to offer the community a diverse dataset to enable the
design and evaluation of mobile face trackers. The dataset, annotations and the
evaluation server will be available at https://mobiface.github.io/.
|
36 state-of-the-art trackers, including facial landmark trackers, generic object trackers and trackers that we have fine-tuned or improved, are evaluated.
|
http://arxiv.org/abs/1805.09749v2
|
http://arxiv.org/pdf/1805.09749v2.pdf
| null |
[
"Yiming Lin",
"Shiyang Cheng",
"Jie Shen",
"Maja Pantic"
] |
[
"Face Detection",
"Object Tracking",
"Visual Tracking"
] | 2018-05-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/standing-wave-decomposition-gaussian-process
|
1803.03666
| null | null |
Standing Wave Decomposition Gaussian Process
|
We propose a Standing Wave Decomposition (SWD) approximation to Gaussian
Process regression (GP). GP involves a costly matrix inversion operation, which
limits applicability to large data analysis. For an input space that can be
approximated by a grid and when correlations among data are short-ranged, the
kernel matrix inversion can be replaced by analytic diagonalization using the
SWD. We show that this approach applies to uni- and multi-dimensional input
data, extends to include longer-range correlations, and the grid can be in a
latent space and used as inducing points. Through simulations, we show that our
approximate method applied to the squared exponential kernel outperforms
existing methods in predictive accuracy per unit time in the regime where data
are plentiful. Our SWD-GP is recommended for regression analyses where there is
a relatively large amount of data and/or there are constraints on computation
time.
|
We propose a Standing Wave Decomposition (SWD) approximation to Gaussian Process regression (GP).
|
http://arxiv.org/abs/1803.03666v4
|
http://arxiv.org/pdf/1803.03666v4.pdf
| null |
[
"Chi-Ken Lu",
"Scott Cheng-Hsin Yang",
"Patrick Shafto"
] |
[
"regression"
] | 2018-03-09T00:00:00 | null | null | null | null |
[] |
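For context, plain exact GP regression with the squared-exponential kernel requires the dense kernel-matrix solve whose cost motivates approximations like the SWD. A minimal sketch of the exact version (illustrative only, not the paper's method):

```python
import numpy as np

def sq_exp(a, b, ls=1.0):
    """Squared-exponential kernel between 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_predict(X, y, Xs, noise=1e-3):
    K = sq_exp(X, X) + noise * np.eye(len(X))  # n x n kernel matrix
    Ks = sq_exp(Xs, X)
    alpha = np.linalg.solve(K, y)              # the O(n^3) step SWD-GP avoids
    mean = Ks @ alpha
    cov = sq_exp(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

X = np.linspace(0, 2 * np.pi, 30)
y = np.sin(X)
mean, var = gp_predict(X, y, np.array([np.pi / 2]))
print(round(float(mean[0]), 3))  # close to the true value sin(pi/2) = 1
```

On a regular grid with short-ranged correlations, the SWD replaces `np.linalg.solve` with an analytic diagonalization of the kernel matrix.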
https://paperswithcode.com/paper/towards-robust-evaluations-of-continual
|
1805.09733
| null | null |
Towards Robust Evaluations of Continual Learning
|
Experiments used in current continual learning research do not faithfully assess fundamental challenges of learning continually. Instead of assessing performance on challenging and representative experiment designs, recent research has focused on increased dataset difficulty, while still using flawed experiment set-ups. We examine standard evaluations and show why these evaluations make some continual learning approaches look better than they are. We introduce desiderata for continual learning evaluations and explain why their absence creates misleading comparisons. Based on our desiderata we then propose new experiment designs which we demonstrate with various continual learning approaches and datasets. Our analysis calls for a reprioritization of research effort by the community.
| null |
https://arxiv.org/abs/1805.09733v3
|
https://arxiv.org/pdf/1805.09733v3.pdf
| null |
[
"Sebastian Farquhar",
"Yarin Gal"
] |
[
"Continual Learning"
] | 2018-05-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/learning-convex-polytopes-with-margin
|
1805.09719
| null | null |
Learning convex polyhedra with margin
|
We present an improved algorithm for {\em quasi-properly} learning convex polyhedra in the realizable PAC setting from data with a margin. Our learning algorithm constructs a consistent polyhedron as an intersection of about $t \log t$ halfspaces with constant-size margins in time polynomial in $t$ (where $t$ is the number of halfspaces forming an optimal polyhedron). We also identify distinct generalizations of the notion of margin from hyperplanes to polyhedra and investigate how they relate geometrically; this result may have ramifications beyond the learning setting.
| null |
https://arxiv.org/abs/1805.09719v3
|
https://arxiv.org/pdf/1805.09719v3.pdf
|
NeurIPS 2018 12
|
[
"Lee-Ad Gottlieb",
"Eran Kaufman",
"Aryeh Kontorovich",
"Gabriel Nivasch"
] |
[] | 2018-05-24T00:00:00 |
http://papers.nips.cc/paper/7813-learning-convex-polytopes-with-margin
|
http://papers.nips.cc/paper/7813-learning-convex-polytopes-with-margin.pdf
|
learning-convex-polytopes-with-margin-1
| null |
[] |
https://paperswithcode.com/paper/autonomously-and-simultaneously-refining-deep-1
|
1805.09712
| null | null |
Autonomously and Simultaneously Refining Deep Neural Network Parameters by Generative Adversarial Networks
|
The choice of parameters, and the design of the network architecture are
important factors affecting the performance of deep neural networks. However,
there has not been much work on developing an established and systematic way of
building the structure and choosing the parameters of a neural network, and
this task heavily depends on trial and error and empirical results. Considering
that there are many design and parameter choices, such as the number of neurons
in each layer, the type of activation function, the choice of using drop out or
not, it is very hard to cover every configuration, and find the optimal
structure. In this paper, we propose a novel and systematic method that
autonomously and simultaneously optimizes multiple parameters of any given deep
neural network by using a generative adversarial network (GAN). In our proposed
approach, two different models compete and improve each other progressively
with a GAN-based strategy. Our proposed approach can be used to autonomously
refine the parameters, and improve the accuracy of different deep neural
network architectures. Without loss of generality, the proposed method has been
tested with three different neural network architectures, and three very
different datasets and applications. The results show that the presented
approach can simultaneously and successfully optimize multiple neural network
parameters, and achieve increased accuracy in all three scenarios.
| null |
http://arxiv.org/abs/1805.09712v1
|
http://arxiv.org/pdf/1805.09712v1.pdf
| null |
[
"Burak Kakillioglu",
"Yantao Lu",
"Senem Velipasalar"
] |
[
"Generative Adversarial Network"
] | 2018-05-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/jointly-optimize-data-augmentation-and
|
1805.09707
| null | null |
Jointly Optimize Data Augmentation and Network Training: Adversarial Data Augmentation in Human Pose Estimation
|
Random data augmentation is a critical technique to avoid overfitting in
training deep neural network models. However, data augmentation and network
training are usually treated as two isolated processes, limiting the
effectiveness of network training. Why not jointly optimize the two? We propose
adversarial data augmentation to address this limitation. The main idea is to
design an augmentation network (generator) that competes against a target
network (discriminator) by generating `hard' augmentation operations online.
The augmentation network explores the weaknesses of the target network, while
the latter learns from `hard' augmentations to achieve better performance. We
also design a reward/penalty strategy for effective joint training. We
demonstrate our approach on the problem of human pose estimation and carry out
a comprehensive experimental analysis, showing that our method can
significantly improve state-of-the-art models without additional data efforts.
| null |
http://arxiv.org/abs/1805.09707v1
|
http://arxiv.org/pdf/1805.09707v1.pdf
|
CVPR 2018 6
|
[
"Xi Peng",
"Zhiqiang Tang",
"Fei Yang",
"Rogerio Feris",
"Dimitris Metaxas"
] |
[
"Data Augmentation",
"Pose Estimation"
] | 2018-05-24T00:00:00 |
http://openaccess.thecvf.com/content_cvpr_2018/html/Peng_Jointly_Optimize_Data_CVPR_2018_paper.html
|
http://openaccess.thecvf.com/content_cvpr_2018/papers/Peng_Jointly_Optimize_Data_CVPR_2018_paper.pdf
|
jointly-optimize-data-augmentation-and-1
| null |
[] |
https://paperswithcode.com/paper/r-vqa-learning-visual-relation-facts-with
|
1805.09701
| null | null |
R-VQA: Learning Visual Relation Facts with Semantic Attention for Visual Question Answering
|
Recently, Visual Question Answering (VQA) has emerged as one of the most
significant tasks in multimodal learning as it requires understanding both
visual and textual modalities. Existing methods mainly rely on extracting image
and question features to learn their joint feature embedding via multimodal
fusion or attention mechanism. Some recent studies utilize external
VQA-independent models to detect candidate entities or attributes in images,
which serve as semantic knowledge complementary to the VQA task. However, these
candidate entities or attributes might be unrelated to the VQA task and have
limited semantic capacities. To better utilize semantic knowledge in images, we
propose a novel framework to learn visual relation facts for VQA. Specifically,
we build up a Relation-VQA (R-VQA) dataset based on the Visual Genome dataset
via a semantic similarity module, in which each instance consists of an image, a
corresponding question, a correct answer and a supporting relation fact. A
well-defined relation detector is then adopted to predict visual
question-related relation facts. We further propose a multi-step attention
model composed of visual attention and semantic attention sequentially to
extract related visual knowledge and semantic knowledge. We conduct
comprehensive experiments on the two benchmark datasets, demonstrating that our
model achieves state-of-the-art performance and verifying the benefit of
considering visual relation facts.
|
To better utilize semantic knowledge in images, we propose a novel framework to learn visual relation facts for VQA.
|
http://arxiv.org/abs/1805.09701v2
|
http://arxiv.org/pdf/1805.09701v2.pdf
| null |
[
"Pan Lu",
"Lei Ji",
"Wei zhang",
"Nan Duan",
"Ming Zhou",
"Jianyong Wang"
] |
[
"Question Answering",
"Relation",
"Semantic Similarity",
"Semantic Textual Similarity",
"Visual Question Answering",
"Visual Question Answering (VQA)"
] | 2018-05-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/dictionary-learning-for-adaptive-gpr-target
|
1806.04599
| null | null |
Dictionary Learning for Adaptive GPR Landmine Classification
|
Ground penetrating radar (GPR) target detection and classification is a challenging task. Here, we consider online dictionary learning (DL) methods to obtain sparse representations (SR) of the GPR data to enhance feature extraction for target classification via support vector machines. Online methods are preferred because traditional batch DL like K-SVD is not scalable to high-dimensional training sets and infeasible for real-time operation. We also develop Drop-Off MINi-batch Online Dictionary Learning (DOMINODL) which exploits the fact that a lot of the training data may be correlated. The DOMINODL algorithm iteratively considers elements of the training set in small batches and drops off samples which become less relevant. For the case of abandoned anti-personnel landmines classification, we compare the performance of K-SVD with three online algorithms: classical Online Dictionary Learning, its correlation-based variant, and DOMINODL. Our experiments with real data from L-band GPR show that online DL methods reduce learning time by 36-93% and increase mine detection by 4-28% over K-SVD. Our DOMINODL is the fastest and retains similar classification performance as the other two online DL approaches. We use a Kolmogorov-Smirnoff test distance and the Dvoretzky-Kiefer-Wolfowitz inequality for the selection of DL input parameters leading to enhanced classification results. To further compare with state-of-the-art classification approaches, we evaluate a convolutional neural network (CNN) classifier which performs worse than the proposed approach. Moreover, when the acquired samples are randomly reduced by 25%, 50% and 75%, sparse decomposition based classification with DL remains robust while the CNN accuracy is drastically compromised.
| null |
https://arxiv.org/abs/1806.04599v2
|
https://arxiv.org/pdf/1806.04599v2.pdf
| null |
[
"Fabio Giovanneschi",
"Kumar Vijay Mishra",
"Maria Antonia Gonzalez-Huici",
"Yonina C. Eldar",
"Joachim H. G. Ender"
] |
[
"Classification",
"Dictionary Learning",
"General Classification",
"GPR",
"Landmine"
] | 2018-05-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/frequentist-consistency-of-variational-bayes
|
1705.03439
| null | null |
Frequentist Consistency of Variational Bayes
|
A key challenge for modern Bayesian statistics is how to perform scalable inference of posterior distributions. To address this challenge, variational Bayes (VB) methods have emerged as a popular alternative to the classical Markov chain Monte Carlo (MCMC) methods. VB methods tend to be faster while achieving comparable predictive performance. However, there are few theoretical results around VB. In this paper, we establish frequentist consistency and asymptotic normality of VB methods. Specifically, we connect VB methods to point estimates based on variational approximations, called frequentist variational approximations, and we use the connection to prove a variational Bernstein-von Mises theorem. The theorem leverages the theoretical characterizations of frequentist variational approximations to understand asymptotic properties of VB. In summary, we prove that (1) the VB posterior converges to the Kullback-Leibler (KL) minimizer of a normal distribution, centered at the truth and (2) the corresponding variational expectation of the parameter is consistent and asymptotically normal. As applications of the theorem, we derive asymptotic properties of VB posteriors in Bayesian mixture models, Bayesian generalized linear mixed models, and Bayesian stochastic block models. We conduct a simulation study to illustrate these theoretical results.
| null |
https://arxiv.org/abs/1705.03439v3
|
https://arxiv.org/pdf/1705.03439v3.pdf
| null |
[
"Yixin Wang",
"David M. Blei"
] |
[] | 2017-05-09T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/learning-and-testing-causal-models-with
|
1805.09697
| null | null |
Learning and Testing Causal Models with Interventions
|
We consider testing and learning problems on causal Bayesian networks as
defined by Pearl (Pearl, 2009). Given a causal Bayesian network $\mathcal{M}$
on a graph with $n$ discrete variables and bounded in-degree and bounded
`confounded components', we show that $O(\log n)$ interventions on an unknown
causal Bayesian network $\mathcal{X}$ on the same graph, and
$\tilde{O}(n/\epsilon^2)$ samples per intervention, suffice to efficiently
distinguish whether $\mathcal{X}=\mathcal{M}$ or whether there exists some
intervention under which $\mathcal{X}$ and $\mathcal{M}$ are farther than
$\epsilon$ in total variation distance. We also obtain sample/time/intervention
efficient algorithms for: (i) testing the identity of two unknown causal
Bayesian networks on the same graph; and (ii) learning a causal Bayesian
network on a given graph. Although our algorithms are non-adaptive, we show
that adaptivity does not help in general: $\Omega(\log n)$ interventions are
necessary for testing the identity of two unknown causal Bayesian networks on
the same graph, even adaptively. Our algorithms are enabled by a new
subadditivity inequality for the squared Hellinger distance between two causal
Bayesian networks.
| null |
http://arxiv.org/abs/1805.09697v1
|
http://arxiv.org/pdf/1805.09697v1.pdf
|
NeurIPS 2018 12
|
[
"Jayadev Acharya",
"Arnab Bhattacharyya",
"Constantinos Daskalakis",
"Saravanan Kandasamy"
] |
[] | 2018-05-24T00:00:00 |
http://papers.nips.cc/paper/8155-learning-and-testing-causal-models-with-interventions
|
http://papers.nips.cc/paper/8155-learning-and-testing-causal-models-with-interventions.pdf
|
learning-and-testing-causal-models-with-1
| null |
[] |
https://paperswithcode.com/paper/weakly-supervised-semantic-parsing-with-1
|
1711.05240
| null | null |
Weakly-supervised Semantic Parsing with Abstract Examples
|
Training semantic parsers from weak supervision (denotations) rather than
strong supervision (programs) complicates training in two ways. First, a large
search space of potential programs needs to be explored at training time to
find a correct program. Second, spurious programs that accidentally lead to a
correct denotation add noise to training. In this work we propose that in
closed worlds with clear semantic types, one can substantially alleviate these
problems by utilizing an abstract representation, where tokens in both the
language utterance and program are lifted to an abstract form. We show that
these abstractions can be defined with a handful of lexical rules and that they
result in sharing between different examples that alleviates the difficulties
in training. To test our approach, we develop the first semantic parser for
CNLVR, a challenging visual reasoning dataset, where the search space is large
and overcoming spuriousness is critical, because denotations are either TRUE or
FALSE, and thus random programs are likely to lead to a correct denotation. Our
method substantially improves performance, and reaches 82.5% accuracy, a 14.7%
absolute accuracy improvement compared to the best reported accuracy so far.
|
Training semantic parsers from weak supervision (denotations) rather than strong supervision (programs) complicates training in two ways.
|
http://arxiv.org/abs/1711.05240v5
|
http://arxiv.org/pdf/1711.05240v5.pdf
| null |
[
"Omer Goldman",
"Veronica Latcinnik",
"Udi Naveh",
"Amir Globerson",
"Jonathan Berant"
] |
[
"Semantic Parsing",
"Visual Reasoning"
] | 2017-11-14T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/been-there-done-that-meta-learning-with
|
1805.09692
| null | null |
Been There, Done That: Meta-Learning with Episodic Recall
|
Meta-learning agents excel at rapidly learning new tasks from open-ended task
distributions; yet, they forget what they learn about each task as soon as the
next begins. When tasks reoccur - as they do in natural environments -
metalearning agents must explore again instead of immediately exploiting
previously discovered solutions. We propose a formalism for generating
open-ended yet repetitious environments, then develop a meta-learning
architecture for solving these environments. This architecture melds the
standard LSTM working memory with a differentiable neural episodic memory. We
explore the capabilities of agents with this episodic LSTM in five
meta-learning environments with reoccurring tasks, ranging from bandits to
navigation and stochastic sequential decision problems.
|
Meta-learning agents excel at rapidly learning new tasks from open-ended task distributions; yet, they forget what they learn about each task as soon as the next begins.
|
http://arxiv.org/abs/1805.09692v2
|
http://arxiv.org/pdf/1805.09692v2.pdf
|
ICML 2018 7
|
[
"Samuel Ritter",
"Jane. X. Wang",
"Zeb Kurth-Nelson",
"Siddhant M. Jayakumar",
"Charles Blundell",
"Razvan Pascanu",
"Matthew Botvinick"
] |
[
"Meta-Learning"
] | 2018-05-24T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2252
|
http://proceedings.mlr.press/v80/ritter18a/ritter18a.pdf
|
been-there-done-that-meta-learning-with-1
| null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/on-improving-deep-reinforcement-learning-for
|
1704.07978
| null | null |
On Improving Deep Reinforcement Learning for POMDPs
|
Deep Reinforcement Learning (RL) recently emerged as one of the most
competitive approaches for learning in sequential decision making problems with
fully observable environments, e.g., computer Go. However, very little work has
been done in deep RL to handle partially observable environments. We propose a
new architecture called Action-specific Deep Recurrent Q-Network (ADRQN) to
enhance learning performance in partially observable domains. Actions are
encoded by a fully connected layer and coupled with a convolutional observation
to form an action-observation pair. The time series of action-observation pairs
are then integrated by an LSTM layer that learns latent states based on which a
fully connected layer computes Q-values as in conventional Deep Q-Networks
(DQNs). We demonstrate the effectiveness of our new architecture in several
partially observable domains, including flickering Atari games.
|
Deep Reinforcement Learning (RL) recently emerged as one of the most competitive approaches for learning in sequential decision making problems with fully observable environments, e.g., computer Go.
|
http://arxiv.org/abs/1704.07978v6
|
http://arxiv.org/pdf/1704.07978v6.pdf
| null |
[
"Pengfei Zhu",
"Xin Li",
"Pascal Poupart",
"Guanghui Miao"
] |
[
"Atari Games",
"Decision Making",
"Deep Reinforcement Learning",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)",
"Sequential Decision Making",
"Time Series",
"Time Series Analysis"
] | 2017-04-26T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/deep-residual-networks-with-a-fully-connected
|
1805.10143
| null | null |
Deep Residual Networks with a Fully Connected Reconstruction Layer for Single Image Super-Resolution
|
Recently, deep neural networks have achieved impressive performance in terms of both reconstruction accuracy and efficiency for single image super-resolution (SISR). However, the network model of these methods is a fully convolutional neural network, which is limit to exploit the differentiated contextual information over the global region of the input image because of the weight sharing in convolution height and width extent. In this paper, we discuss a new SISR architecture where features are extracted in the low-resolution (LR) space, and then we use a fully connected layer which learns an array of differentiated upsampling weights to reconstruct the desired high-resolution (HR) image from the final obtained LR features. By doing so, we effectively exploit the differentiated contextual information over the whole input image region, whilst maintaining the low computational complexity for the overall SR operations. In addition, we introduce an edge difference constraint into our loss function to preserve edges and texture structures. Extensive experiments validate that our SISR method outperforms the existing state-of-the-art methods.
| null |
https://arxiv.org/abs/1805.10143v2
|
https://arxiv.org/pdf/1805.10143v2.pdf
| null |
[
"Yongliang Tang",
"Jiashui Huang",
"Faen Zhang",
"Weiguo Gong"
] |
[
"Image Super-Resolution",
"Super-Resolution"
] | 2018-05-24T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/a-unified-knowledge-representation-and
|
1805.04007
| null | null |
A Unified Knowledge Representation and Context-aware Recommender System in Internet of Things
|
Within the rapidly developing Internet of Things (IoT), numerous and diverse
physical devices, Edge devices, Cloud infrastructure, and their quality of
service requirements (QoS), need to be represented within a unified
specification in order to enable rapid IoT application development, monitoring,
and dynamic reconfiguration. But heterogeneities among different configuration
knowledge representation models pose limitations for acquisition, discovery and
curation of configuration knowledge for coordinated IoT applications. This
paper proposes a unified data model to represent IoT resource configuration
knowledge artifacts. It also proposes IoT-CANE (Context-Aware recommendatioN
systEm) to facilitate incremental knowledge acquisition and declarative context
driven knowledge recommendation.
| null |
http://arxiv.org/abs/1805.04007v2
|
http://arxiv.org/pdf/1805.04007v2.pdf
| null |
[
"Yinhao Li",
"Awa Alqahtani",
"Ellis Solaiman",
"Charith Perera",
"Prem Prakash Jayaraman",
"Boualem Benatallah",
"Rajiv Ranjan"
] |
[
"Recommendation Systems"
] | 2018-05-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/forming-ideas-interactive-data-exploration
|
1805.09676
| null | null |
Forming IDEAS Interactive Data Exploration & Analysis System
|
Modern cyber security operations collect an enormous amount of logging and
alerting data. While analysts have the ability to query and compute simple
statistics and plots from their data, current analytical tools are too simple
to admit deep understanding. To detect advanced and novel attacks, analysts
turn to manual investigations. While commonplace, current investigations are
time-consuming, intuition-based, and proving insufficient. Our hypothesis is
that arming the analyst with easy-to-use data science tools will increase their
work efficiency, provide them with the ability to resolve hypotheses with
scientific inquiry of their data, and support their decisions with evidence
over intuition. To this end, we present our work to build IDEAS (Interactive
Data Exploration and Analysis System). We present three real-world use-cases
that drive the system design from the algorithmic capabilities to the user
interface. Finally, a modular and scalable software architecture is discussed
along with plans for our pilot deployment with a security operation command.
| null |
http://arxiv.org/abs/1805.09676v2
|
http://arxiv.org/pdf/1805.09676v2.pdf
| null |
[
"Robert A. Bridges",
"Maria A. Vincent",
"Kelly M. T. Huffer",
"John R. Goodall",
"Jessie D. Jamieson",
"Zachary Burch"
] |
[] | 2018-05-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/classical-structured-prediction-losses-for
|
1711.04956
| null | null |
Classical Structured Prediction Losses for Sequence to Sequence Learning
|
There has been much recent work on training neural attention models at the
sequence-level using either reinforcement learning-style methods or by
optimizing the beam. In this paper, we survey a range of classical objective
functions that have been widely used to train linear models for structured
prediction and apply them to neural sequence to sequence models. Our
experiments show that these losses can perform surprisingly well by slightly
outperforming beam search optimization in a like for like setup. We also report
new state of the art results on both IWSLT'14 German-English translation as
well as Gigaword abstractive summarization. On the larger WMT'14 English-French
translation task, sequence-level training achieves 41.5 BLEU which is on par
with the state of the art.
|
There has been much recent work on training neural attention models at the sequence-level using either reinforcement learning-style methods or by optimizing the beam.
|
http://arxiv.org/abs/1711.04956v5
|
http://arxiv.org/pdf/1711.04956v5.pdf
|
NAACL 2018 6
|
[
"Sergey Edunov",
"Myle Ott",
"Michael Auli",
"David Grangier",
"Marc'Aurelio Ranzato"
] |
[
"Abstractive Text Summarization",
"Machine Translation",
"Prediction",
"Reinforcement Learning",
"Reinforcement Learning (RL)",
"Structured Prediction",
"Translation"
] | 2017-11-14T00:00:00 |
https://aclanthology.org/N18-1033
|
https://aclanthology.org/N18-1033.pdf
|
classical-structured-prediction-losses-for-1
| null |
[] |
https://paperswithcode.com/paper/lf-net-learning-local-features-from-images
|
1805.09662
| null | null |
LF-Net: Learning Local Features from Images
|
We present a novel deep architecture and a training strategy to learn a local
feature pipeline from scratch, using collections of images without the need for
human supervision. To do so we exploit depth and relative camera pose cues to
create a virtual target that the network should achieve on one image, provided
the outputs of the network for the other image. While this process is
inherently non-differentiable, we show that we can optimize the network in a
two-branch setup by confining it to one branch, while preserving
differentiability in the other. We train our method on both indoor and outdoor
datasets, with depth data from 3D sensors for the former, and depth estimates
from an off-the-shelf Structure-from-Motion solution for the latter. Our models
outperform the state of the art on sparse feature matching on both datasets,
while running at 60+ fps for QVGA images.
|
We present a novel deep architecture and a training strategy to learn a local feature pipeline from scratch, using collections of images without the need for human supervision.
|
http://arxiv.org/abs/1805.09662v2
|
http://arxiv.org/pdf/1805.09662v2.pdf
|
NeurIPS 2018 12
|
[
"Yuki Ono",
"Eduard Trulls",
"Pascal Fua",
"Kwang Moo Yi"
] |
[] | 2018-05-24T00:00:00 |
http://papers.nips.cc/paper/7861-lf-net-learning-local-features-from-images
|
http://papers.nips.cc/paper/7861-lf-net-learning-local-features-from-images.pdf
|
lf-net-learning-local-features-from-images-1
| null |
[] |
https://paperswithcode.com/paper/learning-a-single-convolutional-super
|
1712.06116
| null | null |
Learning a Single Convolutional Super-Resolution Network for Multiple Degradations
|
Recent years have witnessed the unprecedented success of deep convolutional
neural networks (CNNs) in single image super-resolution (SISR). However,
existing CNN-based SISR methods mostly assume that a low-resolution (LR) image
is bicubicly downsampled from a high-resolution (HR) image, thus inevitably
giving rise to poor performance when the true degradation does not follow this
assumption. Moreover, they lack scalability in learning a single model to
non-blindly deal with multiple degradations. To address these issues, we
propose a general framework with dimensionality stretching strategy that
enables a single convolutional super-resolution network to take two key factors
of the SISR degradation process, i.e., blur kernel and noise level, as input.
Consequently, the super-resolver can handle multiple and even spatially variant
degradations, which significantly improves the practicability. Extensive
experimental results on synthetic and real LR images show that the proposed
convolutional super-resolution network not only can produce favorable results
on multiple degradations but also is computationally efficient, providing a
highly effective and scalable solution to practical SISR applications.
|
Recent years have witnessed the unprecedented success of deep convolutional neural networks (CNNs) in single image super-resolution (SISR).
|
http://arxiv.org/abs/1712.06116v2
|
http://arxiv.org/pdf/1712.06116v2.pdf
|
CVPR 2018 6
|
[
"Kai Zhang",
"WangMeng Zuo",
"Lei Zhang"
] |
[
"Image Super-Resolution",
"Super-Resolution",
"Video Super-Resolution"
] | 2017-12-17T00:00:00 |
http://openaccess.thecvf.com/content_cvpr_2018/html/Zhang_Learning_a_Single_CVPR_2018_paper.html
|
http://openaccess.thecvf.com/content_cvpr_2018/papers/Zhang_Learning_a_Single_CVPR_2018_paper.pdf
|
learning-a-single-convolutional-super-1
| null |
[] |
https://paperswithcode.com/paper/returnn-as-a-generic-flexible-neural-toolkit
|
1805.05225
| null | null |
RETURNN as a Generic Flexible Neural Toolkit with Application to Translation and Speech Recognition
|
We compare the fast training and decoding speed of RETURNN of attention
models for translation, due to fast CUDA LSTM kernels, and a fast pure
TensorFlow beam search decoder. We show that a layer-wise pretraining scheme
for recurrent attention models gives over 1% BLEU improvement absolute and it
allows to train deeper recurrent encoder networks. Promising preliminary
results on max. expected BLEU training are presented. We are able to train
state-of-the-art models for translation and end-to-end models for speech
recognition and show results on WMT 2017 and Switchboard. The flexibility of
RETURNN allows a fast research feedback loop to experiment with alternative
architectures, and its generality allows to use it on a wide range of
applications.
|
We compare the fast training and decoding speed of RETURNN of attention models for translation, due to fast CUDA LSTM kernels, and a fast pure TensorFlow beam search decoder.
|
http://arxiv.org/abs/1805.05225v2
|
http://arxiv.org/pdf/1805.05225v2.pdf
|
ACL 2018 7
|
[
"Albert Zeyer",
"Tamer Alkhouli",
"Hermann Ney"
] |
[
"Decoder",
"speech-recognition",
"Speech Recognition",
"Translation"
] | 2018-05-14T00:00:00 |
https://aclanthology.org/P18-4022
|
https://aclanthology.org/P18-4022.pdf
|
returnn-as-a-generic-flexible-neural-toolkit-1
| null |
[
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are mathematical operations applied to a neuron's output that introduce non-linearity into a neural network, enabling it to model complex relationships between inputs and outputs.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are mathematical operations applied to a neuron's output that introduce non-linearity into a neural network, enabling it to model complex relationships between inputs and outputs.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] |
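The sigmoid and tanh method entries above give closed-form definitions. As a minimal sketch in plain Python (the function names are my own, not from any library entry above):

```python
import math

def sigmoid(x):
    # f(x) = 1 / (1 + exp(-x)): maps any real input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # f(x) = (e^x - e^-x) / (e^x + e^-x): maps any real input into (-1, 1)
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))
```

The two are related by tanh(x) = 2·sigmoid(2x) − 1, which is why tanh is often described as a rescaled sigmoid.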
https://paperswithcode.com/paper/supervised-community-detection-with-line
|
1705.08415
| null |
H1g0Z3A9Fm
|
Supervised Community Detection with Line Graph Neural Networks
|
Traditionally, community detection in graphs can be solved using spectral methods or posterior inference under probabilistic graphical models. Focusing on random graph families such as the stochastic block model, recent research has unified both approaches and identified both statistical and computational detection thresholds in terms of the signal-to-noise ratio. By recasting community detection as a node-wise classification problem on graphs, we can also study it from a learning perspective. We present a novel family of Graph Neural Networks (GNNs) for solving community detection problems in a supervised learning setting. We show that, in a data-driven manner and without access to the underlying generative models, they can match or even surpass the performance of the belief propagation algorithm on binary and multi-class stochastic block models, which is believed to reach the computational threshold. In particular, we propose to augment GNNs with the non-backtracking operator defined on the line graph of edge adjacencies. Our models also achieve good performance on real-world datasets. In addition, we perform the first analysis of the optimization landscape of training linear GNNs for community detection problems, demonstrating that under certain simplifications and assumptions, the loss values at local and global minima are not far apart.
|
We show that, in a data-driven manner and without access to the underlying generative models, they can match or even surpass the performance of the belief propagation algorithm on binary and multi-class stochastic block models, which is believed to reach the computational threshold.
|
https://arxiv.org/abs/1705.08415v6
|
https://arxiv.org/pdf/1705.08415v6.pdf
|
ICLR 2019 5
|
[
"Zhengdao Chen",
"Xiang Li",
"Joan Bruna"
] |
[
"Community Detection",
"Graph Classification",
"Stochastic Block Model"
] | 2017-05-23T00:00:00 |
https://openreview.net/forum?id=H1g0Z3A9Fm
|
https://openreview.net/pdf?id=H1g0Z3A9Fm
|
supervised-community-detection-with-line-1
| null |
[] |
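The abstract above proposes augmenting GNNs with the non-backtracking operator defined on the line graph of edge adjacencies. As a hedged illustration of just the graph construction (not the paper's GNN), a sketch that enumerates non-backtracking edge pairs; the function name and edge-list format are illustrative assumptions:

```python
def line_graph_edges(edges):
    # Nodes of the line graph are directed edges (i, j); connect (i, j) -> (j, k)
    # whenever k != i, which is exactly the non-backtracking condition.
    directed = [(i, j) for i, j in edges] + [(j, i) for i, j in edges]
    return [((i, j), (j2, k))
            for (i, j) in directed
            for (j2, k) in directed
            if j2 == j and k != i]
```

On a triangle, each of the six directed edges has exactly one non-backtracking successor, so the line graph has six edges.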
https://paperswithcode.com/paper/uncertainty-aware-attention-for-reliable
|
1805.09653
| null | null |
Uncertainty-Aware Attention for Reliable Interpretation and Prediction
|
Attention mechanism is effective in both focusing the deep learning models on
relevant features and interpreting them. However, attentions may be unreliable
since the networks that generate them are often trained in a weakly-supervised
manner. To overcome this limitation, we introduce the notion of input-dependent
uncertainty to the attention mechanism, such that it generates attention for
each feature with varying degrees of noise based on the given input, to learn
larger variance on instances it is uncertain about. We learn this
Uncertainty-aware Attention (UA) mechanism using variational inference, and
validate it on various risk prediction tasks from electronic health records on
which our model significantly outperforms existing attention models. The
analysis of the learned attentions shows that our model generates attentions
that comply with clinicians' interpretation, and provide richer interpretation
via learned variance. Further evaluation of both the accuracy of the
uncertainty calibration and the prediction performance with "I don't know"
decision show that UA yields networks with high reliability as well.
|
Attention mechanism is effective in both focusing the deep learning models on relevant features and interpreting them.
|
http://arxiv.org/abs/1805.09653v1
|
http://arxiv.org/pdf/1805.09653v1.pdf
|
NeurIPS 2018 12
|
[
"Jay Heo",
"Hae Beom Lee",
"Saehoon Kim",
"Juho Lee",
"Kwang Joon Kim",
"Eunho Yang",
"Sung Ju Hwang"
] |
[
"Prediction",
"Variational Inference"
] | 2018-05-24T00:00:00 |
http://papers.nips.cc/paper/7370-uncertainty-aware-attention-for-reliable-interpretation-and-prediction
|
http://papers.nips.cc/paper/7370-uncertainty-aware-attention-for-reliable-interpretation-and-prediction.pdf
|
uncertainty-aware-attention-for-reliable-1
| null |
[] |
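The abstract above describes attention with input-dependent uncertainty, learned via variational inference. The following is a heavily simplified sketch of the reparameterized-sampling idea only — the function, its parameters, and the linear mean/variance parameterization are illustrative assumptions, not the paper's model:

```python
import math
import random

def ua_attention(features, w_mu, w_sigma):
    # One stochastic attention score per feature: both the mean and the
    # standard deviation of the score's logit depend on the input feature.
    scores = []
    for h, wm, ws in zip(features, w_mu, w_sigma):
        mu = h * wm                       # input-dependent mean logit
        sigma = math.exp(h * ws)          # input-dependent positive std
        eps = random.gauss(0.0, 1.0)      # reparameterization trick
        logit = mu + sigma * eps
        scores.append(1.0 / (1.0 + math.exp(-logit)))  # squash into (0, 1)
    return scores
```

Features the model is uncertain about get a larger sigma, so their attention scores vary more across samples.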
https://paperswithcode.com/paper/filtering-and-mining-parallel-data-in-a-joint
|
1805.09822
| null | null |
Filtering and Mining Parallel Data in a Joint Multilingual Space
|
We learn a joint multilingual sentence embedding and use the distance between
sentences in different languages to filter noisy parallel data and to mine for
parallel data in large news collections. We are able to improve a competitive
baseline on the WMT'14 English to German task by 0.3 BLEU by filtering out 25%
of the training data. The same approach is used to mine additional bitexts for
the WMT'14 system and to obtain competitive results on the BUCC shared task to
identify parallel sentences in comparable corpora. The approach is generic, it
can be applied to many language pairs and it is independent of the architecture
of the machine translation system.
| null |
http://arxiv.org/abs/1805.09822v1
|
http://arxiv.org/pdf/1805.09822v1.pdf
|
ACL 2018 7
|
[
"Holger Schwenk"
] |
[
"Machine Translation",
"Sentence",
"Sentence Embedding",
"Sentence-Embedding",
"Translation"
] | 2018-05-24T00:00:00 |
https://aclanthology.org/P18-2037
|
https://aclanthology.org/P18-2037.pdf
|
filtering-and-mining-parallel-data-in-a-joint-1
| null |
[] |
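The filtering step described above — scoring sentence pairs by their distance in a joint embedding space and discarding noisy ones — can be sketched as follows. The cosine-similarity criterion and the 0.8 threshold are illustrative assumptions, not values from the paper:

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def filter_parallel(pairs, threshold=0.8):
    # Keep only sentence pairs whose joint-space embeddings are close enough.
    return [(src, tgt)
            for (src, src_emb), (tgt, tgt_emb) in pairs
            if cosine(src_emb, tgt_emb) >= threshold]
```

The same scoring function can be reused for mining: compare every candidate pair across two corpora and keep the high-similarity ones.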
https://paperswithcode.com/paper/nonlinear-acceleration-of-deep-neural
|
1805.09639
| null | null |
Online Regularized Nonlinear Acceleration
|
Regularized nonlinear acceleration (RNA) estimates the minimum of a function by post-processing iterates from an algorithm such as the gradient method. It can be seen as a regularized version of Anderson acceleration, a classical acceleration scheme from numerical analysis. The new scheme provably improves the rate of convergence of fixed step gradient descent, and its empirical performance is comparable to that of quasi-Newton methods. However, RNA cannot accelerate faster multistep algorithms like Nesterov's method and often diverges in this context. Here, we adapt RNA to overcome these issues, so that our scheme can be used on fast algorithms such as gradient methods with momentum. We show optimal complexity bounds for quadratics and asymptotically optimal rates on general convex minimization problems. Moreover, this new scheme works online, i.e., extrapolated solution estimates can be reinjected at each iteration, significantly improving numerical performance over classical accelerated methods.
| null |
https://arxiv.org/abs/1805.09639v2
|
https://arxiv.org/pdf/1805.09639v2.pdf
| null |
[
"Damien Scieur",
"Edouard Oyallon",
"Alexandre d'Aspremont",
"Francis Bach"
] |
[
"General Classification"
] | 2018-05-24T00:00:00 | null | null | null | null |
[] |
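Regularized nonlinear acceleration post-processes iterates by finding weights that minimize a regularized combined-residual norm and recombining the iterates. A toy one-dimensional sketch with two residuals and a closed-form regularized solve — the actual method uses a matrix system over many iterates, so all names and the quadratic test function here are illustrative assumptions:

```python
def gd_iterates(x0, step, n):
    # Gradient descent on f(x) = x**2 / 2 (gradient is x): x_{k+1} = (1 - step) * x_k
    xs = [x0]
    for _ in range(n):
        xs.append(xs[-1] - step * xs[-1])
    return xs

def rna_extrapolate(x0, x1, x2, lam=1e-10):
    # Residuals r_i = x_{i+1} - x_i; choose weights (c, 1 - c) minimizing
    # (c*r0 + (1-c)*r1)**2 + lam*(c**2 + (1-c)**2), then recombine iterates.
    r0, r1 = x1 - x0, x2 - x1
    c = (lam - r1 * (r0 - r1)) / ((r0 - r1) ** 2 + 2 * lam)
    return c * x0 + (1 - c) * x1
```

For this linear fixed-point iteration the extrapolation lands essentially on the true minimizer 0, while the raw third iterate is still at 0.81.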
https://paperswithcode.com/paper/resource-allocation-for-a-wireless
|
1806.04702
| null | null |
Resource Allocation for a Wireless Coexistence Management System Based on Reinforcement Learning
|
In industrial environments, an increasing number of wireless devices that
utilize license-free bands are in use. As a consequence, mutual interference
between wireless systems might degrade the state of coexistence.
Therefore, a central coexistence management system is needed, which allocates
conflict-free resources to wireless systems. To ensure a conflict-free resource
utilization, it is useful to predict the prospective medium utilization before
resources are allocated. This paper presents a self-learning concept, which is
based on reinforcement learning. A simulative evaluation of reinforcement
learning agents based on neural networks, called deep Q-networks and double
deep Q-networks, was realized for exemplary and practically relevant
coexistence scenarios. The evaluation of the double deep Q-network showed that
a prediction accuracy of at least 98 % can be reached in all investigated
scenarios.
| null |
http://arxiv.org/abs/1806.04702v1
|
http://arxiv.org/pdf/1806.04702v1.pdf
| null |
[
"Philip Soeffker",
"Dimitri Block",
"Nico Wiebusch",
"Uwe Meier"
] |
[
"Management",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)",
"Self-Learning"
] | 2018-05-24T00:00:00 | null | null | null | null |
[] |
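The coexistence-management abstract above relies on Q-learning-style agents for resource allocation. A hedged, single-state tabular sketch of the underlying Q-update — the paper uses deep and double deep Q-networks, and this toy channel/reward model is an illustrative assumption:

```python
import random

def q_update(q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    # Tabular Q-learning: move Q(s, a) toward r + gamma * max_a' Q(s', a').
    q[s][a] += alpha * (r + gamma * max(q[s_next]) - q[s][a])

def train_channel_agent(q, channel_busy, steps=200, eps=0.1):
    # Toy single-state coexistence scenario: pick a channel each step and
    # receive reward 1 when it is free, 0 when it is busy.
    s = 0
    for _ in range(steps):
        if random.random() < eps:
            a = random.randrange(len(q[s]))                    # explore
        else:
            a = max(range(len(q[s])), key=lambda i: q[s][i])   # exploit
    # NOTE: single-state problem, so the "next state" is the same state.
        r = 0.0 if channel_busy[a] else 1.0
        q_update(q, s, a, r, s)
    return q
```

After training, the agent's Q-values for free channels dominate those for busy ones, which is the conflict-free-allocation behavior the paper's central coexistence manager learns at much larger scale.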