paper_url | arxiv_id | nips_id | openreview_id | title | abstract | short_abstract | url_abs | url_pdf | proceeding | authors | tasks | date | conference_url_abs | conference_url_pdf | conference | reproduces_paper | methods |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://paperswithcode.com/paper/multi-label-transfer-learning-for-semantic
|
1805.12501
| null | null |
Multi-Label Transfer Learning for Multi-Relational Semantic Similarity
|
Multi-relational semantic similarity datasets define the semantic relations
between two short texts in multiple ways, e.g., similarity, relatedness, and so
on. Yet, all the systems to date designed to capture such relations target one
relation at a time. We propose a multi-label transfer learning approach based
on LSTM to make predictions for several relations simultaneously and aggregate
the losses to update the parameters. This multi-label regression approach
jointly learns the information provided by the multiple relations, rather than
treating them as separate tasks. Not only does this approach outperform the
single-task approach and the traditional multi-task learning approach, but it
also achieves state-of-the-art performance on all but one relation of the Human
Activity Phrase dataset.
| null |
http://arxiv.org/abs/1805.12501v2
|
http://arxiv.org/pdf/1805.12501v2.pdf
|
SEMEVAL 2019 6
|
[
"Li Zhang",
"Steven R. Wilson",
"Rada Mihalcea"
] |
[
"Multi-Task Learning",
"regression",
"Relation",
"Semantic Similarity",
"Semantic Textual Similarity",
"Transfer Learning"
] | 2018-05-31T00:00:00 |
https://aclanthology.org/S19-1005
|
https://aclanthology.org/S19-1005.pdf
|
multi-label-transfer-learning-for-multi
| null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/sequential-attacks-on-agents-for-long-term
|
1805.12487
| null | null |
Sequential Attacks on Agents for Long-Term Adversarial Goals
|
Reinforcement learning (RL) has advanced greatly in the past few years with
the employment of effective deep neural networks (DNNs) as policy networks.
With this great effectiveness came serious vulnerability issues: small
adversarial perturbations on the input can change the output of the
network. Several works have pointed out that learned agents with a DNN policy
network can be manipulated against achieving the original task through a
sequence of small perturbations on the input states. In this paper, we
further demonstrate that it is also possible to impose an arbitrary
adversarial reward on the victim policy network through a sequence of attacks.
Our method involves the latest adversarial attack technique, Adversarial
Transformer Network (ATN), that learns to generate the attack and is easy to
integrate into the policy network. As a result of our attack, the victim agent
is misguided to optimise for the adversarial reward over time. Our results
expose serious security threats for RL applications in safety-critical systems
including drones, medical analysis, and self-driving cars.
| null |
http://arxiv.org/abs/1805.12487v2
|
http://arxiv.org/pdf/1805.12487v2.pdf
| null |
[
"Edgar Tretschk",
"Seong Joon Oh",
"Mario Fritz"
] |
[
"Adversarial Attack",
"Reinforcement Learning",
"Reinforcement Learning (RL)",
"Self-Driving Cars"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/multi-layered-gradient-boosting-decision
|
1806.00007
| null | null |
Multi-Layered Gradient Boosting Decision Trees
|
Multi-layered representation is believed to be the key ingredient of deep
neural networks especially in cognitive tasks like computer vision. While
non-differentiable models such as gradient boosting decision trees (GBDTs) are
the dominant methods for modeling discrete or tabular data, they are hard to
incorporate with such representation learning ability. In this work, we propose
the multi-layered GBDT forest (mGBDTs), with an explicit emphasis on exploring
the ability to learn hierarchical representations by stacking several layers of
regression GBDTs as its building blocks. The model can be jointly trained by a
variant of target propagation across layers, without the need for
back-propagation or differentiability. Experiments and visualizations
confirmed the effectiveness of the model in terms of performance and
representation learning ability.
|
Multi-layered representation is believed to be the key ingredient of deep neural networks especially in cognitive tasks like computer vision.
|
http://arxiv.org/abs/1806.00007v1
|
http://arxiv.org/pdf/1806.00007v1.pdf
|
NeurIPS 2018 12
|
[
"Ji Feng",
"Yang Yu",
"Zhi-Hua Zhou"
] |
[
"Representation Learning"
] | 2018-05-31T00:00:00 |
http://papers.nips.cc/paper/7614-multi-layered-gradient-boosting-decision-trees
|
http://papers.nips.cc/paper/7614-multi-layered-gradient-boosting-decision-trees.pdf
|
multi-layered-gradient-boosting-decision-1
| null |
[] |
https://paperswithcode.com/paper/propagating-confidences-through-cnns-for
|
1805.11913
| null | null |
Propagating Confidences through CNNs for Sparse Data Regression
|
In most computer vision applications, convolutional neural networks (CNNs)
operate on dense image data generated by ordinary cameras. Designing CNNs for
sparse and irregularly spaced input data is still an open problem with numerous
applications in autonomous driving, robotics, and surveillance. To tackle this
challenging problem, we introduce an algebraically-constrained convolution
layer for CNNs with sparse input and demonstrate its capabilities for the scene
depth completion task. We propose novel strategies for determining the
confidence from the convolution operation and propagating it to consecutive
layers. Furthermore, we propose an objective function that simultaneously
minimizes the data error while maximizing the output confidence. Comprehensive
experiments are performed on the KITTI depth benchmark and the results clearly
demonstrate that the proposed approach achieves superior performance while
requiring three times fewer parameters than the state-of-the-art methods.
Moreover, our approach produces a continuous pixel-wise confidence map enabling
information fusion, state inference, and decision support.
|
To tackle this challenging problem, we introduce an algebraically-constrained convolution layer for CNNs with sparse input and demonstrate its capabilities for the scene depth completion task.
|
http://arxiv.org/abs/1805.11913v3
|
http://arxiv.org/pdf/1805.11913v3.pdf
| null |
[
"Abdelrahman Eldesokey",
"Michael Felsberg",
"Fahad Shahbaz Khan"
] |
[
"Autonomous Driving",
"Depth Completion",
"regression"
] | 2018-05-30T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/distributed-estimation-of-gaussian
|
1805.12472
| null | null |
Distributed Estimation of Gaussian Correlations
|
We study a distributed estimation problem in which two remotely located
parties, Alice and Bob, observe an unlimited number of i.i.d. samples
corresponding to two different parts of a random vector. Alice can send $k$
bits on average to Bob, who in turn wants to estimate the cross-correlation
matrix between the two parts of the vector. In the case where the parties
observe jointly Gaussian scalar random variables with an unknown correlation
$\rho$, we obtain two constructive and simple unbiased estimators attaining a
variance of $(1-\rho^2)/(2k\ln 2)$, which coincides with a known but
non-constructive random coding result of Zhang and Berger. We extend our
approach to the vector Gaussian case, which has not been treated before, and
construct an estimator that is uniformly better than the scalar estimator
applied separately to each of the correlations. We then show that the Gaussian
performance can essentially be attained even when the distribution is
completely unknown. This in particular implies that in the general problem of
distributed correlation estimation, the variance can decay at least as $O(1/k)$
with the number of transmitted bits. This behavior, however, is not tight: we
give an example of a rich family of distributions for which local samples
reveal essentially nothing about the correlations, and where a slightly
modified estimator attains a variance of $2^{-\Omega(k)}$.
| null |
http://arxiv.org/abs/1805.12472v2
|
http://arxiv.org/pdf/1805.12472v2.pdf
| null |
[
"Uri Hadar",
"Ofer Shayevitz"
] |
[
"2k"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/neural-network-acceptability-judgments
|
1805.12471
| null | null |
Neural Network Acceptability Judgments
|
This paper investigates the ability of artificial neural networks to judge the grammatical acceptability of a sentence, with the goal of testing their linguistic competence. We introduce the Corpus of Linguistic Acceptability (CoLA), a set of 10,657 English sentences labeled as grammatical or ungrammatical from published linguistics literature. As baselines, we train several recurrent neural network models on acceptability classification, and find that our models outperform unsupervised models by Lau et al. (2016) on CoLA. Error analysis on specific grammatical phenomena reveals that both Lau et al.'s models and ours learn systematic generalizations like subject-verb-object order. However, all models we test perform far below human level on a wide range of grammatical constructions.
|
This paper investigates the ability of artificial neural networks to judge the grammatical acceptability of a sentence, with the goal of testing their linguistic competence.
|
https://arxiv.org/abs/1805.12471v3
|
https://arxiv.org/pdf/1805.12471v3.pdf
|
TACL 2019 3
|
[
"Alex Warstadt",
"Amanpreet Singh",
"Samuel R. Bowman"
] |
[
"CoLA",
"General Classification",
"Language Acquisition",
"Linguistic Acceptability",
"Sentence"
] | 2018-05-31T00:00:00 |
https://aclanthology.org/Q19-1040
|
https://aclanthology.org/Q19-1040.pdf
|
neural-network-acceptability-judgments-1
| null |
[] |
https://paperswithcode.com/paper/a-method-based-on-convex-cone-model-for-image
|
1805.12467
| null | null |
A Method Based on Convex Cone Model for Image-Set Classification with CNN Features
|
In this paper, we propose a method for image-set classification based on
convex cone models, focusing on the effectiveness of convolutional neural
network (CNN) features as inputs. CNN features have non-negative values when
using the rectified linear unit as an activation function. This naturally leads
us to model a set of CNN features by a convex cone and measure the geometric
similarity of convex cones for classification. To establish this framework, we
sequentially define multiple angles between two convex cones by repeating the
alternating least squares method and then define the geometric similarity
between the cones using the obtained angles. Moreover, to enhance our method,
we introduce a discriminant space that maximizes the between-class variance (gaps)
and minimizes the within-class variance of the convex cones projected onto the
discriminant space, similar to a Fisher discriminant analysis. Finally,
classification is based on the similarity between projected convex cones. The
effectiveness of the proposed method was demonstrated experimentally using a
private, multi-view hand shape dataset and two public databases.
| null |
http://arxiv.org/abs/1805.12467v1
|
http://arxiv.org/pdf/1805.12467v1.pdf
| null |
[
"Naoya Sogi",
"Taku Nakayama",
"Kazuhiro Fukui"
] |
[
"Classification",
"General Classification"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/on-gans-and-gmms
|
1805.12462
| null | null |
On GANs and GMMs
|
A longstanding problem in machine learning is to find unsupervised methods
that can learn the statistical structure of high dimensional signals. In recent
years, GANs have gained much attention as a possible solution to the problem,
and in particular have shown the ability to generate remarkably realistic high
resolution sampled images. At the same time, many authors have pointed out that
GANs may fail to model the full distribution ("mode collapse") and that using
the learned models for anything other than generating samples may be very
difficult. In this paper, we examine the utility of GANs in learning
statistical models of images by comparing them to perhaps the simplest
statistical model, the Gaussian Mixture Model. First, we present a simple
method to evaluate generative models based on relative proportions of samples
that fall into predetermined bins. Unlike previous automatic methods for
evaluating models, our method does not rely on an additional neural network nor
does it require approximating intractable computations. Second, we compare the
performance of GANs to GMMs trained on the same datasets. While GMMs have
previously been shown to be successful in modeling small patches of images, we
show how to train them on full sized images despite the high dimensionality.
Our results show that GMMs can generate realistic samples (although less sharp
than those of GANs) but also capture the full distribution, which GANs fail to
do. Furthermore, GMMs allow efficient inference and explicit representation of
the underlying statistical structure. Finally, we discuss how GMMs can be used
to generate sharp images.
|
While GMMs have previously been shown to be successful in modeling small patches of images, we show how to train them on full sized images despite the high dimensionality.
|
http://arxiv.org/abs/1805.12462v2
|
http://arxiv.org/pdf/1805.12462v2.pdf
|
NeurIPS 2018 12
|
[
"Eitan Richardson",
"Yair Weiss"
] |
[
"Image Generation"
] | 2018-05-31T00:00:00 |
http://papers.nips.cc/paper/7826-on-gans-and-gmms
|
http://papers.nips.cc/paper/7826-on-gans-and-gmms.pdf
|
on-gans-and-gmms-1
| null |
[] |
https://paperswithcode.com/paper/policy-search-in-continuous-action-domains-an
|
1803.04706
| null | null |
Policy Search in Continuous Action Domains: an Overview
|
Continuous action policy search is currently the focus of intensive research, driven both by the recent success of deep reinforcement learning algorithms and the emergence of competitors based on evolutionary algorithms. In this paper, we present a broad survey of policy search methods, providing a unified perspective on very different approaches, including also Bayesian Optimization and directed exploration methods. The main message of this overview is in the relationship between the families of methods, but we also outline some factors underlying sample efficiency properties of the various approaches.
| null |
https://arxiv.org/abs/1803.04706v5
|
https://arxiv.org/pdf/1803.04706v5.pdf
| null |
[
"Olivier Sigaud",
"Freek Stulp"
] |
[
"Bayesian Optimization",
"Deep Reinforcement Learning",
"Evolutionary Algorithms",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-03-13T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/towards-a-new-system-for-drowsiness-detection
|
1806.00360
| null | null |
Towards a new system for drowsiness detection based on eye blinking and head posture estimation
|
Driver drowsiness is considered one of the most important factors that
increase the number of road accidents. We propose in this paper a new approach
for real-time driver drowsiness detection in order to prevent road accidents. The
system uses a smart video camera that captures images of the driver's face and
monitors the eye-blink state (open and closed) and head posture to detect the
different drowsiness states. Face and eye detection are performed using the
Viola-Jones technique.
| null |
http://arxiv.org/abs/1806.00360v1
|
http://arxiv.org/pdf/1806.00360v1.pdf
| null |
[
"M. Ben Dkhil",
"A. Wali",
"Adel M. ALIMI"
] |
[] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/chinese-ner-using-lattice-lstm
|
1805.02023
| null | null |
Chinese NER Using Lattice LSTM
|
We investigate a lattice-structured LSTM model for Chinese NER, which encodes
a sequence of input characters as well as all potential words that match a
lexicon. Compared with character-based methods, our model explicitly leverages
word and word sequence information. Compared with word-based methods, lattice
LSTM does not suffer from segmentation errors. Gated recurrent cells allow our
model to choose the most relevant characters and words from a sentence for
better NER results. Experiments on various datasets show that lattice LSTM
outperforms both word-based and character-based LSTM baselines, achieving the
best results.
|
We investigate a lattice-structured LSTM model for Chinese NER, which encodes a sequence of input characters as well as all potential words that match a lexicon.
|
http://arxiv.org/abs/1805.02023v4
|
http://arxiv.org/pdf/1805.02023v4.pdf
|
ACL 2018 7
|
[
"Yue Zhang",
"Jie Yang"
] |
[
"Chinese Named Entity Recognition",
"NER",
"Sentence"
] | 2018-05-05T00:00:00 |
https://aclanthology.org/P18-1144
|
https://aclanthology.org/P18-1144.pdf
|
chinese-ner-using-lattice-lstm-1
| null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/never-look-back-a-modified-enkf-method-and
|
1805.08034
| null | null |
Never look back - A modified EnKF method and its application to the training of neural networks without back propagation
|
In this work, we present a new derivative-free optimization method and
investigate its use for training neural networks. Our method is motivated by
the Ensemble Kalman Filter (EnKF), which has been used successfully for solving
optimization problems that involve large-scale, highly nonlinear dynamical
systems. A key benefit of the EnKF method is that it requires only the
evaluation of the forward propagation but not its derivatives. Hence, in the
context of neural networks, it alleviates the need for back propagation and
reduces the memory consumption dramatically. However, the method is not a pure
"black-box" global optimization heuristic as it efficiently utilizes the
structure of typical learning problems. Promising first results of the EnKF for
training deep neural networks have been presented recently by Kovachki and
Stuart. We propose an important modification of the EnKF that enables us to
prove convergence of our method to the minimizer of a strongly convex function.
Our method also bears similarity to implicit filtering, and we demonstrate its
potential for minimizing highly oscillatory functions using a simple example.
Further, we provide numerical examples that demonstrate the potential of our
method for training deep neural networks.
| null |
http://arxiv.org/abs/1805.08034v2
|
http://arxiv.org/pdf/1805.08034v2.pdf
| null |
[
"Eldad Haber",
"Felix Lucka",
"Lars Ruthotto"
] |
[
"global-optimization"
] | 2018-05-21T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/research-on-the-brain-inspired-cross-modal
|
1805.01385
| null | null |
Research on the Brain-inspired Cross-modal Neural Cognitive Computing Framework
|
To address modeling problems of brain-inspired intelligence, this thesis
focuses on the design of a semantic-oriented framework for multimedia
and multimodal information. The Multimedia Neural Cognitive Computing (MNCC)
model was designed based on the nervous mechanism and cognitive architecture.
Furthermore, the semantic-oriented hierarchical Cross-modal Neural Cognitive
Computing (CNCC) framework was proposed based on the MNCC model, and a formal
description and analysis of the CNCC framework were given. It would effectively
improve the performance of semantic processing for multimedia and cross-modal
information, and has far-reaching significance for the exploration and realization
of brain-inspired computing.
| null |
http://arxiv.org/abs/1805.01385v2
|
http://arxiv.org/pdf/1805.01385v2.pdf
| null |
[
"Yang Liu"
] |
[] | 2018-05-03T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/new-feature-detection-mechanism-for-extended
|
1805.12443
| null | null |
New Feature Detection Mechanism for Extended Kalman Filter Based Monocular SLAM with 1-Point RANSAC
|
We present a different approach to feature point detection for improving the
accuracy of SLAM using a single monocular camera. Traditionally, Harris Corner
detection, SURF or FAST corner detectors are used for finding feature points of
interest in the image. We replace this with another approach, which involves
building a non-linear scale-space representation of images using Perona and
Malik Diffusion equation and computing the scale normalized Hessian at multiple
scale levels (KAZE feature). The feature points so detected are used to
estimate the state and pose of a mono camera using an extended Kalman filter. By
using accelerated KAZE features and a more rigorous feature rejection routine
combined with 1-point RANSAC for outlier rejection, short baseline matching of
features is significantly improved, even with a smaller number of feature
points, especially in the presence of motion blur. We present a comparative
study of our proposal with FAST and show improved localization accuracy in
terms of absolute trajectory error.
| null |
http://arxiv.org/abs/1805.12443v1
|
http://arxiv.org/pdf/1805.12443v1.pdf
| null |
[
"Agniva Sengupta",
"Shafeeq Elanattil"
] |
[] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/exact-mean-computation-in-dynamic-time
|
1710.08937
| null | null |
Exact Mean Computation in Dynamic Time Warping Spaces
|
Dynamic time warping constitutes a major tool for analyzing time series. In
particular, computing a mean series of a given sample of series in dynamic time
warping spaces (by minimizing the Fr\'echet function) is a challenging
computational problem, so far solved by several heuristic and inexact
strategies. We spot some inaccuracies in the literature on exact mean
computation in dynamic time warping spaces. Our contributions comprise an exact
dynamic program computing a mean (useful for benchmarking and evaluating known
heuristics). Based on this dynamic program, we empirically study properties
like uniqueness and length of a mean. Moreover, experimental evaluations reveal
substantial deficits of state-of-the-art heuristics in terms of their output
quality. We also give an exact polynomial-time algorithm for the special case
of binary time series.
| null |
http://arxiv.org/abs/1710.08937v3
|
http://arxiv.org/pdf/1710.08937v3.pdf
| null |
[
"Markus Brill",
"Till Fluschnik",
"Vincent Froese",
"Brijnesh Jain",
"Rolf Niedermeier",
"David Schultz"
] |
[
"Benchmarking",
"Dynamic Time Warping",
"Time Series",
"Time Series Analysis"
] | 2017-10-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/closed-form-marginal-likelihood-in-gamma
|
1801.01799
| null | null |
Closed-form Marginal Likelihood in Gamma-Poisson Matrix Factorization
|
We present novel understandings of the Gamma-Poisson (GaP) model, a
probabilistic matrix factorization model for count data. We show that GaP can
be rewritten free of the score/activation matrix. This gives us new insights
about the estimation of the topic/dictionary matrix by maximum marginal
likelihood estimation. In particular, this explains the robustness of this
estimator to over-specified values of the factorization rank, especially its
ability to automatically prune irrelevant dictionary columns, as empirically
observed in previous work. The marginalization of the activation matrix leads
in turn to a new Monte Carlo Expectation-Maximization algorithm with favorable
properties.
| null |
http://arxiv.org/abs/1801.01799v2
|
http://arxiv.org/pdf/1801.01799v2.pdf
|
ICML 2018 7
|
[
"Louis Filstroff",
"Alberto Lumbreras",
"Cédric Févotte"
] |
[
"Form"
] | 2018-01-05T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2240
|
http://proceedings.mlr.press/v80/filstroff18a/filstroff18a.pdf
|
closed-form-marginal-likelihood-in-gamma-1
| null |
[] |
https://paperswithcode.com/paper/one-shot-domain-adaptation-in-multiple
|
1805.12415
| null | null |
One-shot domain adaptation in multiple sclerosis lesion segmentation using convolutional neural networks
|
In recent years, several convolutional neural network (CNN) methods have been
proposed for the automated white matter lesion segmentation of multiple
sclerosis (MS) patient images, due to their superior performance compared with
those of other state-of-the-art methods. However, the accuracies of CNN methods
tend to decrease significantly when evaluated on different image domains
compared with those used for training, which demonstrates the lack of
adaptability of CNNs to unseen imaging data. In this study, we analyzed the
effect of intensity domain adaptation on our recently proposed CNN-based MS
lesion segmentation method. Given a source model trained on two public MS
datasets, we investigated the transferability of the CNN model when applied to
other MRI scanners and protocols, evaluating the minimum number of annotated
images needed from the new domain and the minimum number of layers needed to
re-train to obtain comparable accuracy. Our analysis comprised MS patient data
from both a clinical center and the public ISBI2015 challenge database, which
permitted us to compare the domain adaptation capability of our model to that
of other state-of-the-art methods. For the ISBI2015 challenge, our one-shot
domain adaptation model trained using only a single image showed a performance
similar to that of other CNN methods that were fully trained using the entire
available training set, yielding a comparable human expert rater performance.
We believe that our experiments will encourage the MS community to incorporate
its use in different clinical settings with reduced amounts of annotated data.
This approach could be meaningful not only in terms of the accuracy in
delineating MS lesions but also in the related reductions in time and economic
costs derived from manual lesion labeling.
|
In recent years, several convolutional neural network (CNN) methods have been proposed for the automated white matter lesion segmentation of multiple sclerosis (MS) patient images, due to their superior performance compared with those of other state-of-the-art methods.
|
http://arxiv.org/abs/1805.12415v1
|
http://arxiv.org/pdf/1805.12415v1.pdf
| null |
[
"Sergi Valverde",
"Mostafa Salem",
"Mariano Cabezas",
"Deborah Pareto",
"Joan C. Vilanova",
"Lluís Ramió-Torrentà",
"Àlex Rovira",
"Joaquim Salvi",
"Arnau Oliver",
"Xavier Lladó"
] |
[
"Domain Adaptation",
"Lesion Segmentation"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/cartoonish-sketch-based-face-editing-in
|
1703.08738
| null | null |
Cartoonish sketch-based face editing in videos using identity deformation transfer
|
We address the problem of using hand-drawn sketches to create exaggerated
deformations to faces in videos, such as enlarging the shape or modifying the
position of eyes or mouth. This task is formulated as a 3D face model
reconstruction and deformation problem. We first recover the facial identity
and expressions from the video by fitting a face morphable model for each
frame. At the same time, the user's editing intention is recognized from input
sketches as a set of facial modifications. Then a novel identity deformation
algorithm is proposed to transfer these facial deformations from 2D space to
the 3D facial identity directly while preserving the facial expressions. After
an optional stage for further refining the 3D face model, these changes are
propagated to the whole video with the modified identity. Both the user study
and experimental results demonstrate that our sketching framework can help
users effectively edit facial identities in videos, while high consistency and
fidelity are ensured at the same time.
| null |
http://arxiv.org/abs/1703.08738v3
|
http://arxiv.org/pdf/1703.08738v3.pdf
| null |
[
"Long Zhao",
"Fangda Han",
"Xi Peng",
"Xun Zhang",
"Mubbasir Kapadia",
"Vladimir Pavlovic",
"Dimitris N. Metaxas"
] |
[
"Face Model"
] | 2017-03-25T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/duorc-towards-complex-language-understanding
|
1804.07927
| null | null |
DuoRC: Towards Complex Language Understanding with Paraphrased Reading Comprehension
|
We propose DuoRC, a novel dataset for Reading Comprehension (RC) that
motivates several new challenges for neural approaches in language
understanding beyond those offered by existing RC datasets. DuoRC contains
186,089 unique question-answer pairs created from a collection of 7680 pairs of
movie plots where each pair in the collection reflects two versions of the same
movie - one from Wikipedia and the other from IMDb - written by two different
authors. We asked crowdsourced workers to create questions from one version of
the plot and a different set of workers to extract or synthesize answers from
the other version. This unique characteristic of DuoRC where questions and
answers are created from different versions of a document narrating the same
underlying story, ensures by design, that there is very little lexical overlap
between the questions created from one version and the segments containing the
answer in the other version. Further, since the two versions have different
levels of plot detail, narration style, vocabulary, etc., answering questions
from the second version requires deeper language understanding and
incorporating external background knowledge. Additionally, the narrative style
of passages arising from movie plots (as opposed to typical descriptive
passages in existing datasets) exhibits the need to perform complex reasoning
over events across multiple sentences. Indeed, we observe that state-of-the-art
neural RC models which have achieved near human performance on the SQuAD
dataset, even when coupled with traditional NLP techniques to address the
challenges presented in DuoRC exhibit very poor performance (F1 score of 37.42%
on DuoRC v/s 86% on SQuAD dataset). This opens up several interesting research
avenues wherein DuoRC could complement other RC datasets to explore novel
neural approaches for studying language understanding.
|
We propose DuoRC, a novel dataset for Reading Comprehension (RC) that motivates several new challenges for neural approaches in language understanding beyond those offered by existing RC datasets.
|
http://arxiv.org/abs/1804.07927v4
|
http://arxiv.org/pdf/1804.07927v4.pdf
|
ACL 2018 7
|
[
"Amrita Saha",
"Rahul Aralikatte",
"Mitesh M. Khapra",
"Karthik Sankaranarayanan"
] |
[
"Descriptive",
"Reading Comprehension"
] | 2018-04-21T00:00:00 |
https://aclanthology.org/P18-1156
|
https://aclanthology.org/P18-1156.pdf
|
duorc-towards-complex-language-understanding-1
| null |
[] |
https://paperswithcode.com/paper/imitation-learning-with-concurrent-actions-in
|
1803.05402
| null | null |
Imitation Learning with Concurrent Actions in 3D Games
|
In this work we describe a novel deep reinforcement learning architecture
that allows multiple actions to be selected at every time-step in an efficient
manner. Multi-action policies allow complex behaviours to be learnt that would
otherwise be hard to achieve when using single action selection techniques. We
use both imitation learning and temporal difference (TD) reinforcement learning
(RL) to provide a 4x improvement in training time and 2.5x improvement in
performance over single action selection TD RL. We demonstrate the capabilities
of this network using a complex in-house 3D game. Mimicking the behavior of the
expert teacher significantly improves world state exploration and allows the
agents vision system to be trained more rapidly than TD RL alone. This initial
training technique kick-starts TD learning and the agent quickly learns to
surpass the capabilities of the expert.
| null |
http://arxiv.org/abs/1803.05402v5
|
http://arxiv.org/pdf/1803.05402v5.pdf
| null |
[
"Jack Harmer",
"Linus Gisslén",
"Jorge del Val",
"Henrik Holst",
"Joakim Bergdahl",
"Tom Olsson",
"Kristoffer Sjöö",
"Magnus Nordin"
] |
[
"Deep Reinforcement Learning",
"Imitation Learning",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-03-14T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/breaking-down-the-ontology-alignment-task
|
1805.12402
| null | null |
Breaking-down the Ontology Alignment Task with a Lexical Index and Neural Embeddings
|
Large ontologies still pose serious challenges to state-of-the-art ontology
alignment systems. In the paper we present an approach that combines a lexical
index, a neural embedding model and locality modules to effectively divide an
input ontology matching task into smaller and more tractable matching
(sub)tasks. We have conducted a comprehensive evaluation using the datasets of
the Ontology Alignment Evaluation Initiative. The results are encouraging and
suggest that the proposed methods are adequate in practice and can be
integrated within the workflow of state-of-the-art systems.
|
Large ontologies still pose serious challenges to state-of-the-art ontology alignment systems.
|
http://arxiv.org/abs/1805.12402v1
|
http://arxiv.org/pdf/1805.12402v1.pdf
| null |
[
"Ernesto Jimenez-Ruiz",
"Asan Agibetov",
"Matthias Samwald",
"Valerie Cross"
] |
[
"Ontology Matching"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/deep-learning-with-unsupervised-data-labeling
|
1805.12395
| null | null |
Deep Learning with unsupervised data labeling for weeds detection on UAV images
|
In modern agriculture, weed control usually consists in spraying herbicides
all over the agricultural field. This practice involves significant waste and
cost of herbicide for farmers and environmental pollution. One way to reduce
the cost and environmental impact is to allocate the right doses of herbicide
at the right place and at the right time (Precision Agriculture). Nowadays,
Unmanned Aerial Vehicle (UAV) is becoming an interesting acquisition system for
weeds localization and management due to its ability to obtain the images of
the entire agricultural field with a very high spatial resolution and at low
cost. Despite the important advances in UAV acquisition systems, automatic
weeds detection remains a challenging problem because of its strong similarity
with the crops. Recently, the Deep Learning approach has shown impressive
results in different complex classification problems. However, this approach
needs a certain amount of training data, but creating large agricultural
datasets with pixel-level annotations by experts is an extremely
time-consuming task. In this
paper, we propose a novel fully automatic learning method using Convolutional
Neuronal Networks (CNNs) with unsupervised training dataset collection for
weeds detection from UAV images. The proposed method consists in three main
phases. First, we automatically detect the crop lines and use them to identify
the interline weeds. In the second phase, interline weeds are used to
constitute the training dataset. Finally, we trained CNNs on this dataset to
build a model able to detect the crop and weeds in the images. The results
obtained are comparable to the traditional supervised training data labeling.
The accuracy gaps are 1.5% in the spinach field and 6% in the bean field.
| null |
http://arxiv.org/abs/1805.12395v1
|
http://arxiv.org/pdf/1805.12395v1.pdf
| null |
[
"M. Dian. Bah",
"Adel Hafiane",
"Raphael Canals"
] |
[
"Management"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/kg2-learning-to-reason-science-exam-questions
|
1805.12393
| null | null |
KG^2: Learning to Reason Science Exam Questions with Contextual Knowledge Graph Embeddings
|
The AI2 Reasoning Challenge (ARC), a new benchmark dataset for question
answering (QA), has recently been released. ARC only contains natural science
questions authored for human exams, which are hard to answer and require
advanced logic reasoning. On the ARC Challenge Set, existing state-of-the-art
QA systems fail to significantly outperform a random baseline, reflecting the
difficult nature of this task. In this paper, we propose a novel framework for
answering science exam questions, which mimics the human solving process in an
open-book exam. To address the reasoning challenge, we construct contextual
knowledge graphs respectively for the question itself and supporting sentences.
Our model learns to reason with neural embeddings of both knowledge graphs.
Experiments on the ARC Challenge Set show that our model outperforms the
previous state-of-the-art QA systems.
| null |
http://arxiv.org/abs/1805.12393v1
|
http://arxiv.org/pdf/1805.12393v1.pdf
| null |
[
"Yuyu Zhang",
"Hanjun Dai",
"Kamil Toraman",
"Le Song"
] |
[
"AI2 Reasoning Challenge",
"ARC",
"Knowledge Graph Embeddings",
"Knowledge Graphs",
"Question Answering"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/adversarial-discriminative-sim-to-real
|
1709.05746
| null | null |
Adversarial Discriminative Sim-to-real Transfer of Visuo-motor Policies
|
Various approaches have been proposed to learn visuo-motor policies for
real-world robotic applications. One solution is first learning in simulation
then transferring to the real world. In the transfer, most existing approaches
need real-world images with labels. However, the labelling process is often
expensive or even impractical in many robotic applications. In this paper, we
propose an adversarial discriminative sim-to-real transfer approach to reduce
the cost of labelling real data. The effectiveness of the approach is
demonstrated with modular networks in a table-top object reaching task where a
7 DoF arm is controlled in velocity mode to reach a blue cuboid in clutter
through visual observations. The adversarial transfer approach reduced the
labelled real data requirement by 50%. Policies can be transferred to real
environments with only 93 labelled and 186 unlabelled real images. The
transferred visuo-motor policies are robust to novel (not seen in training)
objects in clutter and even a moving target, achieving a 97.8% success rate and
1.8 cm control accuracy.
|
Policies can be transferred to real environments with only 93 labelled and 186 unlabelled real images.
|
http://arxiv.org/abs/1709.05746v2
|
http://arxiv.org/pdf/1709.05746v2.pdf
| null |
[
"Fangyi Zhang",
"Jürgen Leitner",
"ZongYuan Ge",
"Michael Milford",
"Peter Corke"
] |
[] | 2017-09-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/sample-reuse-via-importance-sampling-in
|
1805.12388
| null | null |
Sample Reuse via Importance Sampling in Information Geometric Optimization
|
In this paper we propose a technique to reduce the number of function
evaluations, which is often the bottleneck of black-box optimization, in
the information geometric optimization (IGO) that is a generic framework of the
probability model-based black-box optimization algorithms and generalizes
several well-known evolutionary algorithms, such as the population-based
incremental learning (PBIL) and the pure rank-$\mu$ update covariance matrix
adaptation evolution strategy (CMA-ES). In each iteration, the IGO algorithms
update the parameters of the probability distribution to the natural gradient
direction estimated by Monte-Carlo with the samples drawn from the current
distribution. Our strategy is to reuse previously generated and evaluated
samples based on the importance sampling. It is a technique to reduce the
estimation variance without introducing a bias in Monte-Carlo estimation. We
apply the sample reuse technique to the PBIL and the pure rank-$\mu$ update
CMA-ES and empirically investigate its effect. The experimental results show
that the sample reuse helps to reduce the number of function evaluations on
many benchmark functions for both the PBIL and the pure rank-$\mu$ update
CMA-ES. Moreover, we demonstrate how to combine the importance sampling
technique with a variant of the CMA-ES involving an algorithmic component that
is not derived in the IGO framework.
| null |
http://arxiv.org/abs/1805.12388v1
|
http://arxiv.org/pdf/1805.12388v1.pdf
| null |
[
"Shinichi Shirakawa",
"Youhei Akimoto",
"Kazuki Ouchi",
"Kouzou Ohara"
] |
[
"Evolutionary Algorithms",
"Incremental Learning"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/agents-and-devices-a-relative-definition-of
|
1805.12387
| null | null |
Agents and Devices: A Relative Definition of Agency
|
According to Dennett, the same system may be described using a `physical'
(mechanical) explanatory stance, or using an `intentional' (belief- and
goal-based) explanatory stance. Humans tend to find the physical stance more
helpful for certain systems, such as planets orbiting a star, and the
intentional stance for others, such as living animals. We define a formal
counterpart of physical and intentional stances within computational theory: a
description of a system as either a device, or an agent, with the key
difference being that `devices' are directly described in terms of an
input-output mapping, while `agents' are described in terms of the function
they optimise. Bayes' rule can then be applied to calculate the subjective
probability of a system being a device or an agent, based only on its
behaviour. We illustrate this using the trajectories of an object in a toy
grid-world domain.
| null |
http://arxiv.org/abs/1805.12387v1
|
http://arxiv.org/pdf/1805.12387v1.pdf
| null |
[
"Laurent Orseau",
"Simon McGregor McGill",
"Shane Legg"
] |
[] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/semeval-2019-shared-task-cross-lingual
|
1805.12386
| null | null |
SemEval 2019 Shared Task: Cross-lingual Semantic Parsing with UCCA - Call for Participation
|
We announce a shared task on UCCA parsing in English, German and French, and call for participants to submit their systems. UCCA is a cross-linguistically applicable framework for semantic representation, which builds on extensive typological work and supports rapid annotation. UCCA poses a challenge for existing parsing techniques, as it exhibits reentrancy (resulting in DAG structures), discontinuous structures and non-terminal nodes corresponding to complex semantic units. Given the success of recent semantic parsing shared tasks (on SDP and AMR), we expect the task to have a significant contribution to the advancement of UCCA parsing in particular, and semantic parsing in general. Furthermore, existing applications for semantic evaluation that are based on UCCA will greatly benefit from better automatic methods for UCCA parsing. The competition website is https://competitions.codalab.org/competitions/19160
| null |
https://arxiv.org/abs/1805.12386v4
|
https://arxiv.org/pdf/1805.12386v4.pdf
| null |
[
"Daniel Hershcovich",
"Leshem Choshen",
"Elior Sulem",
"Zohar Aizenbud",
"Ari Rappoport",
"Omri Abend"
] |
[
"Semantic Parsing",
"UCCA Parsing"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/imbalanced-ensemble-classifier-for-learning
|
1805.12381
| null | null |
Imbalanced Ensemble Classifier for learning from imbalanced business school data set
|
Private business schools in India face a common problem of selecting quality
students for their MBA programs to achieve the desired placement percentage.
Generally, such data sets are biased towards one class, i.e., imbalanced in
nature, and learning from an imbalanced dataset is a difficult proposition.
This paper proposes an imbalanced ensemble classifier which can handle the
imbalanced nature of the dataset and achieves higher accuracy in case of the
feature selection (selection of important characteristics of students) cum
classification problem (prediction of placements based on the students'
characteristics) for Indian business school dataset. The optimal value of an
important model parameter is found. Numerical evidence is also provided using
Indian business school dataset to assess the outstanding performance of the
proposed classifier.
| null |
http://arxiv.org/abs/1805.12381v2
|
http://arxiv.org/pdf/1805.12381v2.pdf
| null |
[
"Tanujit Chakraborty"
] |
[
"feature selection",
"General Classification"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/sample-efficient-deep-reinforcement-learning-2
|
1805.12375
| null |
BJvWjcgAZ
|
Sample-Efficient Deep Reinforcement Learning via Episodic Backward Update
|
We propose Episodic Backward Update (EBU) - a novel deep reinforcement learning algorithm with a direct value propagation. In contrast to the conventional use of the experience replay with uniform random sampling, our agent samples a whole episode and successively propagates the value of a state to its previous states. Our computationally efficient recursive algorithm allows sparse and delayed rewards to propagate directly through all transitions of the sampled episode. We theoretically prove the convergence of the EBU method and experimentally demonstrate its performance in both deterministic and stochastic environments. Especially in 49 games of Atari 2600 domain, EBU achieves the same mean and median human normalized performance of DQN by using only 5% and 10% of samples, respectively.
|
We propose Episodic Backward Update (EBU) - a novel deep reinforcement learning algorithm with a direct value propagation.
|
https://arxiv.org/abs/1805.12375v3
|
https://arxiv.org/pdf/1805.12375v3.pdf
|
ICLR 2018 1
|
[
"Su Young Lee",
"Sungik Choi",
"Sae-Young Chung"
] |
[
"Deep Reinforcement Learning",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-05-31T00:00:00 |
http://papers.nips.cc/paper/8484-sample-efficient-deep-reinforcement-learning-via-episodic-backward-update
|
http://papers.nips.cc/paper/8484-sample-efficient-deep-reinforcement-learning-via-episodic-backward-update.pdf
|
sample-efficient-deep-reinforcement-learning-5
| null |
[
{
"code_snippet_url": null,
"description": "**Q-Learning** is an off-policy temporal difference control algorithm:\r\n\r\n$$Q\\left(S\\_{t}, A\\_{t}\\right) \\leftarrow Q\\left(S\\_{t}, A\\_{t}\\right) + \\alpha\\left[R_{t+1} + \\gamma\\max\\_{a}Q\\left(S\\_{t+1}, a\\right) - Q\\left(S\\_{t}, A\\_{t}\\right)\\right] $$\r\n\r\nThe learned action-value function $Q$ directly approximates $q\\_{*}$, the optimal action-value function, independent of the policy being followed.\r\n\r\nSource: Sutton and Barto, Reinforcement Learning, 2nd Edition",
"full_name": "Q-Learning",
"introduced_year": 1984,
"main_collection": {
"area": "Reinforcement Learning",
"description": "",
"name": "Off-Policy TD Control",
"parent": null
},
"name": "Q-Learning",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Experience Replay** is a replay memory technique used in reinforcement learning where we store the agent’s experiences at each time-step, $e\\_{t} = \\left(s\\_{t}, a\\_{t}, r\\_{t}, s\\_{t+1}\\right)$ in a data-set $D = e\\_{1}, \\cdots, e\\_{N}$ , pooled over many episodes into a replay memory. We then usually sample the memory randomly for a minibatch of experience, and use this to learn off-policy, as with Deep Q-Networks. This tackles the problem of autocorrelation leading to unstable training, by making the problem more like a supervised learning problem.\r\n\r\nImage Credit: [Hands-On Reinforcement Learning with Python, Sudharsan Ravichandiran](https://subscription.packtpub.com/book/big_data_and_business_intelligence/9781788836524)",
"full_name": "Experience Replay",
"introduced_year": 1993,
"main_collection": {
"area": "Reinforcement Learning",
"description": "",
"name": "Replay Memory",
"parent": null
},
"name": "Experience Replay",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "A **DQN**, or Deep Q-Network, approximates a state-value function in a [Q-Learning](https://paperswithcode.com/method/q-learning) framework with a neural network. In the Atari Games case, they take in several frames of the game as an input and output state values for each action as an output. \r\n\r\nIt is usually used in conjunction with [Experience Replay](https://paperswithcode.com/method/experience-replay), for storing the episode steps in memory for off-policy learning, where samples are drawn from the replay memory at random. Additionally, the Q-Network is usually optimized towards a frozen target network that is periodically updated with the latest weights every $k$ steps (where $k$ is a hyperparameter). The latter makes training more stable by preventing short-term oscillations from a moving target. The former tackles autocorrelation that would occur from on-line learning, and having a replay memory makes the problem more like a supervised learning problem.\r\n\r\nImage Source: [here](https://www.researchgate.net/publication/319643003_Autonomous_Quadrotor_Landing_using_Deep_Reinforcement_Learning)",
"full_name": "Deep Q-Network",
"introduced_year": 2000,
"main_collection": {
"area": "Reinforcement Learning",
"description": "",
"name": "Q-Learning Networks",
"parent": "Off-Policy TD Control"
},
"name": "DQN",
"source_title": "Playing Atari with Deep Reinforcement Learning",
"source_url": "http://arxiv.org/abs/1312.5602v1"
}
] |
https://paperswithcode.com/paper/learning-tree-distributions-by-hidden-markov
|
1805.12372
| null | null |
Learning Tree Distributions by Hidden Markov Models
|
Hidden tree Markov models allow learning distributions for tree-structured
data while being interpretable as nondeterministic automata. We provide a
concise summary of the main approaches in the literature, focusing in
particular on the causality assumptions introduced by the choice of a specific
tree visit direction. We then sketch a novel non-parametric generalization of
the bottom-up hidden tree Markov model, with its interpretation as a
nondeterministic tree automaton with infinite states.
| null |
http://arxiv.org/abs/1805.12372v1
|
http://arxiv.org/pdf/1805.12372v1.pdf
| null |
[
"Davide Bacciu",
"Daniele Castellana"
] |
[] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/lip-reading-using-convolutional-auto-encoders
|
1805.12371
| null | null |
Lip Reading Using Convolutional Auto Encoders as Feature Extractor
|
Visual recognition of speech using lip movement is called lip-reading.
Recent developments in this nascent field use different neural networks as
feature extractors, which serve as input to a model that can capture the
temporal relationships and classify. Though end-to-end sentence-level
lip-reading is the current trend, we propose a new model which employs
word-level classification and breaks the set benchmarks for standard datasets.
In our model we use convolutional autoencoders as feature extractors, which are
then fed to a long short-term memory model. We tested our proposed model on
BBC's LRW dataset, MIRACL-VC1 and the GRID dataset, achieving a classification
accuracy of 98% on MIRACL-VC1 as compared to 93.4% of the set benchmark (Rekik
et al., 2014). On BBC's LRW the proposed model performed better than the
baseline model of convolutional neural networks and long short-term memory
(Garg et al., 2016). By showing the features learned by the models, we clearly
indicate how the proposed model works better than the baseline. The same model
can also be extended for end-to-end sentence-level classification.
| null |
http://arxiv.org/abs/1805.12371v1
|
http://arxiv.org/pdf/1805.12371v1.pdf
| null |
[
"Dharin Parekh",
"Ankitesh Gupta",
"Shharrnam Chhatpar",
"Anmol Yash Kumar",
"Manasi Kulkarni"
] |
[
"Classification",
"General Classification",
"Lip Reading",
"Sentence"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/reinforced-continual-learning
|
1805.12369
| null | null |
Reinforced Continual Learning
|
Most artificial intelligence models have limited ability to solve new tasks
quickly without forgetting previously acquired knowledge. The recently emerging
paradigm of continual learning aims to solve this issue, in which the model
learns various tasks in a sequential fashion. In this work, a novel approach
for continual learning is proposed, which searches for the best neural
architecture for each incoming task via sophisticatedly designed reinforcement
learning strategies. We name it Reinforced Continual Learning. Our method
not only performs well at preventing catastrophic forgetting but also
fits new tasks well. Experiments on sequential classification tasks for
variants of the MNIST and CIFAR-100 datasets demonstrate that the proposed
approach outperforms existing continual learning alternatives for deep networks.
|
In this work, a novel approach for continual learning is proposed, which searches for the best neural architecture for each coming task via sophisticatedly designed reinforcement learning strategies.
|
http://arxiv.org/abs/1805.12369v1
|
http://arxiv.org/pdf/1805.12369v1.pdf
|
NeurIPS 2018 12
|
[
"Ju Xu",
"Zhanxing Zhu"
] |
[
"Continual Learning",
"General Classification",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-05-31T00:00:00 |
http://papers.nips.cc/paper/7369-reinforced-continual-learning
|
http://papers.nips.cc/paper/7369-reinforced-continual-learning.pdf
|
reinforced-continual-learning-1
| null |
[] |
https://paperswithcode.com/paper/lower-bounds-on-regret-for-noisy-gaussian
|
1706.00090
| null | null |
Lower Bounds on Regret for Noisy Gaussian Process Bandit Optimization
|
In this paper, we consider the problem of sequentially optimizing a black-box
function $f$ based on noisy samples and bandit feedback. We assume that $f$ is
smooth in the sense of having a bounded norm in some reproducing kernel Hilbert
space (RKHS), yielding a commonly-considered non-Bayesian form of Gaussian
process bandit optimization. We provide algorithm-independent lower bounds on
the simple regret, measuring the suboptimality of a single point reported after
$T$ rounds, and on the cumulative regret, measuring the sum of regrets over the
$T$ chosen points. For the isotropic squared-exponential kernel in $d$
dimensions, we find that an average simple regret of $\epsilon$ requires $T =
\Omega\big(\frac{1}{\epsilon^2} (\log\frac{1}{\epsilon})^{d/2}\big)$, and the
average cumulative regret is at least $\Omega\big( \sqrt{T(\log T)^{d/2}}
\big)$, thus matching existing upper bounds up to the replacement of $d/2$ by
$2d+O(1)$ in both cases. For the Mat\'ern-$\nu$ kernel, we give analogous
bounds of the form $\Omega\big( (\frac{1}{\epsilon})^{2+d/\nu}\big)$ and
$\Omega\big( T^{\frac{\nu + d}{2\nu + d}} \big)$, and discuss the resulting
gaps to the existing upper bounds.
| null |
http://arxiv.org/abs/1706.00090v3
|
http://arxiv.org/pdf/1706.00090v3.pdf
| null |
[
"Jonathan Scarlett",
"Ilijia Bogunovic",
"Volkan Cevher"
] |
[] | 2017-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/forgetting-memories-and-their-attractiveness
|
1805.12368
| null | null |
Forgetting Memories and their Attractiveness
|
We numerically study the memory which forgets, introduced in 1986 by Parisi
by bounding the synaptic strength, with a mechanism which avoids confusion,
allows the most recently learned patterns to be remembered, and has a
physiologically well-defined meaning. We analyze a number of features of
learning with a finite number of neurons and a finite number of patterns. We
discuss how the system behaves in the large but finite N limit. We analyze the
basins of attraction of the patterns that have been learned, and we show that
they are exponentially small in the age of the pattern. This is a clearly
non-physiological feature of the model.
| null |
http://arxiv.org/abs/1805.12368v1
|
http://arxiv.org/pdf/1805.12368v1.pdf
| null |
[
"Enzo Marinari"
] |
[] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/light-field-denoising-via-anisotropic
|
1805.12358
| null | null |
Light Field Denoising via Anisotropic Parallax Analysis in a CNN Framework
|
Light field (LF) cameras provide perspective information of scenes by taking
directional measurements of the focusing light rays. The raw outputs are
usually dark with additive camera noise, which impedes subsequent processing
and applications. We propose a novel LF denoising framework based on
anisotropic parallax analysis (APA). Two convolutional neural networks are
jointly designed for the task: first, the structural parallax synthesis network
predicts the parallax details for the entire LF based on a set of anisotropic
parallax features. These novel features can efficiently capture the high
frequency perspective components of a LF from noisy observations. Second, the
view-dependent detail compensation network restores non-Lambertian variation to
each LF view by involving view-specific spatial energies. Extensive experiments
show that the proposed APA LF denoiser provides a much better denoising
performance than state-of-the-art methods in terms of visual quality and in
preservation of parallax details.
| null |
http://arxiv.org/abs/1805.12358v2
|
http://arxiv.org/pdf/1805.12358v2.pdf
| null |
[
"Jie Chen",
"Junhui Hou",
"Lap-Pui Chau"
] |
[
"Denoising"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/design-of-experiments-for-model
|
1802.04170
| null | null |
Design of Experiments for Model Discrimination Hybridising Analytical and Data-Driven Approaches
|
Healthcare companies must submit pharmaceutical drugs or medical devices to
regulatory bodies before marketing new technology. Regulatory bodies frequently
require transparent and interpretable computational modelling to justify a new
healthcare technology, but researchers may have several competing models for a
biological system and too little data to discriminate between the models. In
design of experiments for model discrimination, the goal is to design maximally
informative physical experiments in order to discriminate between rival
predictive models. Prior work has focused either on analytical approaches,
which cannot manage all functions, or on data-driven approaches, which may have
computational difficulties or lack interpretable marginal predictive
distributions. We develop a methodology introducing Gaussian process surrogates
in lieu of the original mechanistic models. We thereby extend existing design
and model discrimination methods developed for analytical models to cases of
non-analytical models in a computationally efficient manner.
| null |
http://arxiv.org/abs/1802.04170v2
|
http://arxiv.org/pdf/1802.04170v2.pdf
|
ICML 2018 7
|
[
"Simon Olofsson",
"Marc Peter Deisenroth",
"Ruth Misener"
] |
[
"Marketing"
] | 2018-02-12T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2236
|
http://proceedings.mlr.press/v80/olofsson18a/olofsson18a.pdf
|
design-of-experiments-for-model-1
| null |
[
{
"code_snippet_url": null,
"description": "**Gaussian Processes** are non-parametric models for approximating functions. They rely upon a measure of similarity between points (the kernel function) to predict the value for an unseen point from training data. The models are fully probabilistic so uncertainty bounds are baked in with the model.\r\n\r\nImage Source: Gaussian Processes for Machine Learning, C. E. Rasmussen & C. K. I. Williams",
"full_name": "Gaussian Process",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "Gaussian Process",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/deep-energy-using-energy-functions-for
|
1805.12355
| null | null |
Deep-Energy: Unsupervised Training of Deep Neural Networks
|
The success of deep learning has been due, in no small part, to the availability of large annotated datasets. Thus, a major bottleneck in current learning pipelines is the time-consuming human annotation of data. In scenarios where such input-output pairs cannot be collected, simulation is often used instead, leading to a domain-shift between synthesized and real-world data. This work offers an unsupervised alternative that relies on the availability of task-specific energy functions, replacing the generic supervised loss. Such energy functions are assumed to lead to the desired label as their minimizer given the input. The proposed approach, termed "Deep Energy", trains a Deep Neural Network (DNN) to approximate this minimization for any chosen input. Once trained, a simple and fast feed-forward computation provides the inferred label. This approach allows us to perform unsupervised training of DNNs with real-world inputs only, and without the need for manually-annotated labels, nor synthetically created data. "Deep Energy" is demonstrated in this paper on three different tasks -- seeded segmentation, image matting and single image dehazing -- exposing its generality and wide applicability. Our experiments show that the solution provided by the network is often much better in quality than the one obtained by a direct minimization of the energy function, suggesting an added regularization property in our scheme.
|
The success of deep learning has been due, in no small part, to the availability of large annotated datasets.
|
https://arxiv.org/abs/1805.12355v2
|
https://arxiv.org/pdf/1805.12355v2.pdf
| null |
[
"Alona Golts",
"Daniel Freedman",
"Michael Elad"
] |
[
"Image Dehazing",
"Image Matting",
"Single Image Dehazing"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/classification-of-volcanic-ash-particles
|
1805.12353
| null | null |
Classification of volcanic ash particles using a convolutional neural network and probability
|
Analyses of volcanic ash are typically performed either by qualitatively
classifying ash particles by eye or by quantitatively parameterizing its shape
and texture. While complex shapes can be classified through qualitative
analyses, the results are subjective due to the difficulty of categorizing
complex shapes into a single class. Although quantitative analyses are
objective, selection of shape parameters is required. Here, we applied a
convolutional neural network (CNN) for the classification of volcanic ash.
First, we defined four basal particle shapes (blocky, vesicular, elongated,
rounded) generated by different eruption mechanisms (e.g., brittle
fragmentation), and then trained the CNN using particles composed of only one
basal shape. The CNN could recognize the basal shapes with over 90% accuracy.
Using the trained network, we classified ash particles composed of multiple
basal shapes based on the output of the network, which can be interpreted as a
mixing ratio of the four basal shapes. Clustering of samples by the averaged
probabilities and the intensity is consistent with the eruption type. The
mixing ratio output by the CNN can be used to quantitatively classify complex
shapes in nature without forced categorization and without the need for shape
parameters, which may lead to a new taxonomy.
| null |
http://arxiv.org/abs/1805.12353v1
|
http://arxiv.org/pdf/1805.12353v1.pdf
| null |
[
"Daigo Shoji",
"Rina Noguchi",
"Shizuka Otsuki",
"Hideitsu Hino"
] |
[
"Clustering",
"General Classification"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/dialogwae-multimodal-response-generation-with
|
1805.12352
| null |
BkgBvsC9FQ
|
DialogWAE: Multimodal Response Generation with Conditional Wasserstein Auto-Encoder
|
Variational autoencoders~(VAEs) have shown promise in data-driven
conversation modeling. However, most VAE conversation models match the
approximate posterior distribution over the latent variables to a simple prior
such as standard normal distribution, thereby restricting the generated
responses to a relatively simple (e.g., unimodal) scope. In this paper, we
propose DialogWAE, a conditional Wasserstein autoencoder~(WAE) specially
designed for dialogue modeling. Unlike VAEs that impose a simple distribution
over the latent variables, DialogWAE models the distribution of data by
training a GAN within the latent variable space. Specifically, our model
samples from the prior and posterior distributions over the latent variables by
transforming context-dependent random noise using neural networks and minimizes
the Wasserstein distance between the two distributions. We further develop a
Gaussian mixture prior network to enrich the latent space. Experiments on two
popular datasets show that DialogWAE outperforms the state-of-the-art
approaches in generating more coherent, informative and diverse responses.
|
Variational autoencoders~(VAEs) have shown promise in data-driven conversation modeling.
|
http://arxiv.org/abs/1805.12352v2
|
http://arxiv.org/pdf/1805.12352v2.pdf
|
ICLR 2019 5
|
[
"Xiaodong Gu",
"Kyunghyun Cho",
"Jung-Woo Ha",
"Sunghun Kim"
] |
[
"Response Generation"
] | 2018-05-31T00:00:00 |
https://openreview.net/forum?id=BkgBvsC9FQ
|
https://openreview.net/pdf?id=BkgBvsC9FQ
|
dialogwae-multimodal-response-generation-with-1
| null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "In today’s digital age, USD Coin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a USD Coin transaction not confirmed, your USD Coin wallet not showing balance, or you're trying to recover a lost USD Coin wallet, knowing where to get help is essential. That’s why the USD Coin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the USD Coin Customer Support Number +1-833-534-1729\r\nUSD Coin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. USD Coin Transaction Not Confirmed\r\nOne of the most common concerns is when a USD Coin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. USD Coin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A USD Coin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost USD Coin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. 
Knowing how to recover a lost USD Coin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. USD Coin Deposit Not Received\r\nIf someone has sent you USD Coin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A USD Coin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. USD Coin Transaction Stuck or Pending\r\nSometimes your USD Coin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. USD Coin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word USD Coin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the USD Coin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and USD Coin tech.\r\n\r\n24/7 Availability: USD Coin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About USD Coin Support and Wallet Issues\r\nQ1: Can USD Coin support help me recover stolen BTC?\r\nA: While USD Coin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: USD Coin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not USD Coin’s official number (USD Coin is decentralized), it connects you to trained professionals experienced in resolving all major USD Coin issues.\r\n\r\nFinal Thoughts\r\nUSD Coin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. 
Whether it's a USD Coin transaction not confirmed, your USD Coin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the USD Coin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "USD Coin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "USD Coin Customer Service Number +1-833-534-1729",
"source_title": "Auto-Encoding Variational Bayes",
"source_url": "http://arxiv.org/abs/1312.6114v10"
},
{
"code_snippet_url": "",
"description": "In today’s digital age, Dogecoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're trying to recover a lost Dogecoin wallet, knowing where to get help is essential. That’s why the Dogecoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Dogecoin Customer Support Number +1-833-534-1729\r\nDogecoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Dogecoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Dogecoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Dogecoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Dogecoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Dogecoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. 
Knowing how to recover a lost Dogecoin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Dogecoin Deposit Not Received\r\nIf someone has sent you Dogecoin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Dogecoin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Dogecoin Transaction Stuck or Pending\r\nSometimes your Dogecoin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Dogecoin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Dogecoin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Dogecoin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Dogecoin tech.\r\n\r\n24/7 Availability: Dogecoin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Dogecoin Support and Wallet Issues\r\nQ1: Can Dogecoin support help me recover stolen BTC?\r\nA: While Dogecoin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Dogecoin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Dogecoin’s official number (Dogecoin is decentralized), it connects you to trained professionals experienced in resolving all major Dogecoin issues.\r\n\r\nFinal Thoughts\r\nDogecoin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. 
Whether it's a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Dogecoin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Dogecoin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Dogecoin Customer Service Number +1-833-534-1729",
"source_title": "Generative Adversarial Networks",
"source_url": "https://arxiv.org/abs/1406.2661v1"
}
] |
https://paperswithcode.com/paper/crowdsourcing-for-reminiscence-chatbot-design
|
1805.12346
| null | null |
Crowdsourcing for Reminiscence Chatbot Design
|
In this work-in-progress paper we discuss the challenges in identifying
effective and scalable crowd-based strategies for designing content,
conversation logic, and meaningful metrics for a reminiscence chatbot targeted
at older adults. We formalize the problem and outline the main research
questions that drive the research agenda in chatbot design for reminiscence and
for relational agents for older adults in general.
| null |
http://arxiv.org/abs/1805.12346v1
|
http://arxiv.org/pdf/1805.12346v1.pdf
| null |
[
"Svetlana Nikitina",
"Florian Daniel",
"Marcos Baez",
"Fabio Casati"
] |
[
"Chatbot"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/hallucinating-robots-inferring-obstacle
|
1805.12338
| null | null |
Hallucinating robots: Inferring Obstacle Distances from Partial Laser Measurements
|
Many mobile robots rely on 2D laser scanners for localization, mapping, and
navigation. However, those sensors are unable to correctly provide distance to
obstacles such as glass panels and tables whose actual occupancy is invisible
at the height the sensor is measuring. In this work, instead of estimating the
distance to obstacles from richer sensor readings such as 3D lasers or RGBD
sensors, we present a method to estimate the distance directly from raw 2D
laser data. To learn a mapping from raw 2D laser distances to obstacle
distances we frame the problem as a learning task and train a neural network
formed as an autoencoder. A novel configuration of network hyperparameters is
proposed for the task at hand and is quantitatively validated on a test set.
Finally, we qualitatively demonstrate in real time on a Care-O-bot 4 that the
trained network can successfully infer obstacle distances from partial 2D laser
readings.
|
However, those sensors are unable to correctly provide distance to obstacles such as glass panels and tables whose actual occupancy is invisible at the height the sensor is measuring.
|
http://arxiv.org/abs/1805.12338v2
|
http://arxiv.org/pdf/1805.12338v2.pdf
| null |
[
"Jens Lundell",
"Francesco Verdoja",
"Ville Kyrki"
] |
[] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/on-representation-power-of-neural-network
|
1805.12332
| null | null |
On representation power of neural network-based graph embedding and beyond
|
We consider the representation power of siamese-style similarity functions
used in neural network-based graph embedding. The inner product similarity
(IPS) with feature vectors computed via neural networks is commonly used for
representing the strength of association between two nodes. However, only a
little work has been done on the representation capability of IPS. A very
recent work shed light on the nature of IPS and reveals that IPS has the
capability of approximating any positive definite (PD) similarities. However, a
simple example demonstrates the fundamental limitation of IPS to approximate
non-PD similarities. We then propose a novel model named Shifted IPS (SIPS)
that approximates any Conditionally PD (CPD) similarities arbitrary well. CPD
is a generalization of PD with many examples such as negative Poincaré
distance and negative Wasserstein distance, thus SIPS has a potential impact to
significantly improve the applicability of graph embedding without taking great
care in configuring the similarity function. Our numerical experiments
demonstrate the SIPS's superiority over IPS. In theory, we further extend SIPS
beyond CPD by considering the inner product in Minkowski space so that it
approximates more general similarities.
| null |
http://arxiv.org/abs/1805.12332v2
|
http://arxiv.org/pdf/1805.12332v2.pdf
| null |
[
"Akifumi Okuno",
"Hidetoshi Shimodaira"
] |
[
"Graph Embedding"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/simultaneous-optical-flow-and-segmentation
|
1805.12326
| null | null |
Simultaneous Optical Flow and Segmentation (SOFAS) using Dynamic Vision Sensor
|
We present an algorithm (SOFAS) to estimate the optical flow of events
generated by a dynamic vision sensor (DVS). Where traditional cameras produce
frames at a fixed rate, DVSs produce asynchronous events in response to
intensity changes with a high temporal resolution. Our algorithm uses the fact
that events are generated by edges in the scene to not only estimate the
optical flow but also to simultaneously segment the image into objects which
are travelling at the same velocity. This way it is able to avoid the aperture
problem which affects other implementations such as Lucas-Kanade. Finally, we
show that SOFAS produces more accurate results than traditional optic flow
algorithms.
| null |
http://arxiv.org/abs/1805.12326v1
|
http://arxiv.org/pdf/1805.12326v1.pdf
| null |
[
"Timo Stoffregen",
"Lindsay Kleeman"
] |
[
"Optical Flow Estimation"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/metric-on-nonlinear-dynamical-systems-with
|
1805.12324
| null | null |
Metric on Nonlinear Dynamical Systems with Perron-Frobenius Operators
|
The development of a metric for structural data is a long-term problem in
pattern recognition and machine learning. In this paper, we develop a general
metric for comparing nonlinear dynamical systems that is defined with
Perron-Frobenius operators in reproducing kernel Hilbert spaces. Our metric
includes the existing fundamental metrics for dynamical systems, which are
basically defined with principal angles between some appropriately-chosen
subspaces, as its special cases. We also describe the estimation of our metric
from finite data. We empirically illustrate our metric with an example of
rotation dynamics in a unit disk in a complex plane, and evaluate the
performance with real-world time-series data.
|
The development of a metric for structural data is a long-term problem in pattern recognition and machine learning.
|
http://arxiv.org/abs/1805.12324v2
|
http://arxiv.org/pdf/1805.12324v2.pdf
|
NeurIPS 2018 12
|
[
"Isao Ishikawa",
"Keisuke Fujii",
"Masahiro Ikeda",
"Yuka Hashimoto",
"Yoshinobu Kawahara"
] |
[
"Time Series",
"Time Series Analysis"
] | 2018-05-31T00:00:00 |
http://papers.nips.cc/paper/7550-metric-on-nonlinear-dynamical-systems-with-perron-frobenius-operators
|
http://papers.nips.cc/paper/7550-metric-on-nonlinear-dynamical-systems-with-perron-frobenius-operators.pdf
|
metric-on-nonlinear-dynamical-systems-with-1
| null |
[] |
https://paperswithcode.com/paper/deepminer-discovering-interpretable
|
1805.12323
| null | null |
DeepMiner: Discovering Interpretable Representations for Mammogram Classification and Explanation
|
We propose DeepMiner, a framework to discover interpretable representations in deep neural networks and to build explanations for medical predictions. By probing convolutional neural networks (CNNs) trained to classify cancer in mammograms, we show that many individual units in the final convolutional layer of a CNN respond strongly to diseased tissue concepts specified by the BI-RADS lexicon. After expert annotation of the interpretable units, our proposed method is able to generate explanations for CNN mammogram classification that are consistent with ground truth radiology reports on the Digital Database for Screening Mammography. We show that DeepMiner not only enables better understanding of the nuances of CNN classification decisions but also possibly discovers new visual knowledge relevant to medical diagnosis.
|
We propose DeepMiner, a framework to discover interpretable representations in deep neural networks and to build explanations for medical predictions.
|
https://arxiv.org/abs/1805.12323v2
|
https://arxiv.org/pdf/1805.12323v2.pdf
| null |
[
"Jimmy Wu",
"Bolei Zhou",
"Diondra Peck",
"Scott Hsieh",
"Vandana Dialani",
"Lester Mackey",
"Genevieve Patterson"
] |
[
"Classification",
"General Classification",
"Medical Diagnosis"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/geometric-active-learning-via-enclosing-ball
|
1805.12321
| null | null |
A Divide-and-Conquer Approach to Geometric Sampling for Active Learning
|
Active learning (AL) repeatedly trains the classifier with the minimum labeling budget to improve the current classification model. The training process is usually supervised by an uncertainty evaluation strategy. However, the uncertainty evaluation always suffers from performance degeneration when the initial labeled set has insufficient labels. To completely eliminate the dependence on the uncertainty evaluation sampling in AL, this paper proposes a divide-and-conquer idea that directly transfers the AL sampling as the geometric sampling over the clusters. By dividing the points of the clusters into cluster boundary and core points, we theoretically discuss their margin distance and {hypothesis relationship}. With the advantages of cluster boundary points in the above two properties, we propose a Geometric Active Learning (GAL) algorithm by knight's tour. Experimental studies of the two reported experimental tasks including cluster boundary detection and AL classification show that the proposed GAL method significantly outperforms the state-of-the-art baselines.
| null |
https://arxiv.org/abs/1805.12321v3
|
https://arxiv.org/pdf/1805.12321v3.pdf
| null |
[
"Xiaofeng Cao"
] |
[
"Active Learning",
"Boundary Detection",
"General Classification"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/mba-mini-batch-auc-optimization
|
1805.11221
| null | null |
MBA: Mini-Batch AUC Optimization
|
Area under the receiver operating characteristics curve (AUC) is an important
metric for a wide range of signal processing and machine learning problems, and
scalable methods for optimizing AUC have recently been proposed. However,
handling very large datasets remains an open challenge for this problem. This
paper proposes a novel approach to AUC maximization, based on sampling
mini-batches of positive/negative instance pairs and computing U-statistics to
approximate a global risk minimization problem. The resulting algorithm is
simple, fast, and learning-rate free. We show that the number of samples
required for good performance is independent of the number of pairs available,
which is a quadratic function of the positive and negative instances. Extensive
experiments show the practical utility of the proposed method.
| null |
http://arxiv.org/abs/1805.11221v2
|
http://arxiv.org/pdf/1805.11221v2.pdf
| null |
[
"San Gultekin",
"Avishek Saha",
"Adwait Ratnaparkhi",
"John Paisley"
] |
[] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/multiaccuracy-black-box-post-processing-for
|
1805.12317
| null | null |
Multiaccuracy: Black-Box Post-Processing for Fairness in Classification
|
Prediction systems are successfully deployed in applications ranging from
disease diagnosis, to predicting credit worthiness, to image recognition. Even
when the overall accuracy is high, these systems may exhibit systematic biases
that harm specific subpopulations; such biases may arise inadvertently due to
underrepresentation in the data used to train a machine-learning model, or as
the result of intentional malicious discrimination. We develop a rigorous
framework of *multiaccuracy* auditing and post-processing to ensure accurate
predictions across *identifiable subgroups*.
Our algorithm, MULTIACCURACY-BOOST, works in any setting where we have
black-box access to a predictor and a relatively small set of labeled data for
auditing; importantly, this black-box framework allows for improved fairness
and accountability of predictions, even when the predictor is minimally
transparent. We prove that MULTIACCURACY-BOOST converges efficiently and show
that if the initial model is accurate on an identifiable subgroup, then the
post-processed model will be also. We experimentally demonstrate the
effectiveness of the approach to improve the accuracy among minority subgroups
in diverse applications (image classification, finance, population health).
Interestingly, MULTIACCURACY-BOOST can improve subpopulation accuracy (e.g. for
"black women") even when the sensitive features (e.g. "race", "gender") are not
given to the algorithm explicitly.
|
Prediction systems are successfully deployed in applications ranging from disease diagnosis, to predicting credit worthiness, to image recognition.
|
http://arxiv.org/abs/1805.12317v2
|
http://arxiv.org/pdf/1805.12317v2.pdf
| null |
[
"Michael P. Kim",
"Amirata Ghorbani",
"James Zou"
] |
[
"Classification",
"Fairness",
"General Classification",
"image-classification",
"Image Classification"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/greedy-attack-and-gumbel-attack-generating
|
1805.12316
| null |
ByghKiC5YX
|
Greedy Attack and Gumbel Attack: Generating Adversarial Examples for Discrete Data
|
We present a probabilistic framework for studying adversarial attacks on
discrete data. Based on this framework, we derive a perturbation-based method,
Greedy Attack, and a scalable learning-based method, Gumbel Attack, that
illustrate various tradeoffs in the design of attacks. We demonstrate the
effectiveness of these methods using both quantitative metrics and human
evaluation on various state-of-the-art models for text classification,
including a word-based CNN, a character-based CNN, and an LSTM. As an example of
our results, we show that the accuracy of character-based convolutional
networks drops to the level of random selection by modifying only five
characters through Greedy Attack.
| null |
http://arxiv.org/abs/1805.12316v1
|
http://arxiv.org/pdf/1805.12316v1.pdf
| null |
[
"Puyudi Yang",
"Jianbo Chen",
"Cho-Jui Hsieh",
"Jane-Ling Wang",
"Michael. I. Jordan"
] |
[
"General Classification",
"text-classification",
"Text Classification"
] | 2018-05-31T00:00:00 |
https://openreview.net/forum?id=ByghKiC5YX
|
https://openreview.net/pdf?id=ByghKiC5YX
| null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
        "description": "",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
        "description": "",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/fast-context-annotated-classification-of
|
1806.02374
| null | null |
Fast Context-Annotated Classification of Different Types of Web Service Descriptions
|
In the recent rapid growth of web services, IoT, and cloud computing, many
web services and APIs appeared on the web. With the failure of global UDDI
registries, different service repositories started to appear, trying to list
and categorize various types of web services for client applications to discover
and use. In order to increase the effectiveness and speed up the task of
finding compatible Web Services in the brokerage when performing service
composition or suggesting Web Services to the requests, high-level
functionality of the service needs to be determined. Due to the lack of
structured support for specifying such functionality, classification of
services into a set of abstract categories is necessary. We employ a wide range
of Machine Learning and Signal Processing algorithms and techniques in order to
find the highest precision achievable in the scope of this article for the fast
classification of three types of service descriptions: WSDL, REST, and WADL. In
addition, we complement our approach by showing the importance and effect of
contextual information on the classification of the service descriptions and
show that it improves the accuracy in 5 different categories of services.
| null |
http://arxiv.org/abs/1806.02374v1
|
http://arxiv.org/pdf/1806.02374v1.pdf
| null |
[
"Serguei A. Mokhov",
"Joey Paquet",
"Arash Khodadadi"
] |
[
"Classification",
"Cloud Computing",
"General Classification",
"Service Composition"
] | 2018-05-31T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/distributed-collaborative-hashing-and-its
|
1804.04918
| null | null |
Distributed Collaborative Hashing and Its Applications in Ant Financial
|
Collaborative filtering, especially latent factor model, has been popularly
used in personalized recommendation. Latent factor model aims to learn user and
item latent factors from user-item historic behaviors. To apply it into real
big data scenarios, efficiency becomes the first concern, including offline
model training efficiency and online recommendation efficiency. In this paper,
we propose a Distributed Collaborative Hashing (DCH) model which can
significantly improve both efficiencies. Specifically, we first propose a
distributed learning framework, following the state-of-the-art parameter server
paradigm, to learn the offline collaborative model. Our model can be learnt
efficiently by distributedly computing subgradients in minibatches on workers
and updating model parameters on servers asynchronously. We then adopt hashing
technique to speedup the online recommendation procedure. Recommendation can be
quickly made through exploiting lookup hash tables. We conduct thorough
experiments on two real large-scale datasets. The experimental results
demonstrate that, comparing with the classic and state-of-the-art (distributed)
latent factor models, DCH has comparable performance in terms of recommendation
accuracy but has both fast convergence speed in offline model training
procedure and realtime efficiency in online recommendation procedure.
Furthermore, the encouraging performance of DCH is also shown for several
real-world applications in Ant Financial.
| null |
http://arxiv.org/abs/1804.04918v3
|
http://arxiv.org/pdf/1804.04918v3.pdf
| null |
[
"Chaochao Chen",
"Ziqi Liu",
"Peilin Zhao",
"Longfei Li",
"Jun Zhou",
"Xiaolong Li"
] |
[
"Collaborative Filtering"
] | 2018-04-13T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/grow-and-prune-compact-fast-and-accurate
|
1805.11797
| null | null |
Grow and Prune Compact, Fast, and Accurate LSTMs
|
Long short-term memory (LSTM) has been widely used for sequential data
modeling. Researchers have increased LSTM depth by stacking LSTM cells to
improve performance. This incurs model redundancy, increases run-time delay,
and makes the LSTMs more prone to overfitting. To address these problems, we
propose a hidden-layer LSTM (H-LSTM) that adds hidden layers to LSTM's original
one-level non-linear control gates. H-LSTM increases accuracy while employing
fewer external stacked layers, thus reducing the number of parameters and
run-time latency significantly. We employ grow-and-prune (GP) training to
iteratively adjust the hidden layers through gradient-based growth and
magnitude-based pruning of connections. This learns both the weights and the
compact architecture of H-LSTM control gates. We have GP-trained H-LSTMs for
image captioning and speech recognition applications. For the NeuralTalk
architecture on the MSCOCO dataset, our three models reduce the number of
parameters by 38.7x [floating-point operations (FLOPs) by 45.5x], run-time
latency by 4.5x, and improve the CIDEr score by 2.6. For the DeepSpeech2
architecture on the AN4 dataset, our two models reduce the number of parameters
by 19.4x (FLOPs by 23.5x), run-time latency by 15.7%, and the word error rate
from 12.9% to 8.7%. Thus, GP-trained H-LSTMs can be seen to be compact, fast,
and accurate.
| null |
http://arxiv.org/abs/1805.11797v2
|
http://arxiv.org/pdf/1805.11797v2.pdf
| null |
[
"Xiaoliang Dai",
"Hongxu Yin",
"Niraj K. Jha"
] |
[
"Image Captioning",
"speech-recognition",
"Speech Recognition"
] | 2018-05-30T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Pruning",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Model Compression",
"parent": null
},
"name": "Pruning",
"source_title": "Pruning Filters for Efficient ConvNets",
"source_url": "http://arxiv.org/abs/1608.08710v3"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
        "description": "",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
        "description": "",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/conformation-clustering-of-long-md-protein
|
1805.12313
| null | null |
Conformation Clustering of Long MD Protein Dynamics with an Adversarial Autoencoder
|
Recent developments in specialized computer hardware have greatly accelerated
atomic level Molecular Dynamics (MD) simulations. A single GPU-attached cluster
is capable of producing microsecond-length trajectories in reasonable amounts
of time. Multiple protein states and a large number of microstates associated
with folding and with the function of the protein can be observed as
conformations sampled in the trajectories. Clustering those conformations,
however, is needed for identifying protein states, evaluating transition rates
and understanding protein behavior. In this paper, we propose a novel
data-driven generative conformation clustering method based on the adversarial
autoencoder (AAE) and provide the associated software implementation Cong. The
method was tested using a 208-microsecond MD simulation of the fast-folding
peptide Trp-Cage (20 residues) obtained from the D.E. Shaw Research Group. The
proposed clustering algorithm identifies many of the salient features of the
folding process by grouping a large number of conformations that share common
features not easily identifiable in the trajectory.
| null |
http://arxiv.org/abs/1805.12313v1
|
http://arxiv.org/pdf/1805.12313v1.pdf
| null |
[
"Yunlong Liu",
"L. Mario Amzel"
] |
[
"Clustering",
"GPU"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/attention-based-lstm-for-psychological-stress
|
1805.12307
| null | null |
Attention-Based LSTM for Psychological Stress Detection from Spoken Language Using Distant Supervision
|
We propose a Long Short-Term Memory (LSTM) with attention mechanism to
classify psychological stress from self-conducted interview transcriptions. We
apply distant supervision by automatically labeling tweets based on their
hashtag content, which complements and expands the size of our corpus. This
additional data is used to initialize the model parameters, after which they are
fine-tuned using the interview data. This improves the model's robustness,
especially by expanding the vocabulary size. The bidirectional LSTM model with
attention is found to be the best model in terms of accuracy (74.1%) and
f-score (74.3%). Furthermore, we show that distant supervision fine-tuning
enhances the model's performance by 1.6% accuracy and 2.1% f-score. The
attention mechanism helps the model to select informative words.
|
The bidirectional LSTM model with attention is found to be the best model in terms of accuracy (74.1%) and f-score (74.3%).
|
http://arxiv.org/abs/1805.12307v1
|
http://arxiv.org/pdf/1805.12307v1.pdf
| null |
[
"Genta Indra Winata",
"Onno Pepijn Kampman",
"Pascale Fung"
] |
[] | 2018-05-31T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] |
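As a quick numeric check of the sigmoid and tanh formulas quoted in the method entries above, a minimal stdlib-only sketch:

```python
import math

def sigmoid(x):
    # f(x) = 1 / (1 + e^-x)
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # f(x) = (e^x - e^-x) / (e^x + e^-x)
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

print(sigmoid(0.0))  # 0.5: sigmoid output is centered at 0.5
print(tanh(0.0))     # 0.0: tanh is zero-centered, one reason it was preferred
```

Both saturate for large |x|, which is the gradient-vanishing behavior the descriptions mention.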
https://paperswithcode.com/paper/data-science-is-sciences-second-chance-to-get
|
1804.10846
| null | null |
Data science is science's second chance to get causal inference right: A classification of data science tasks
|
Causal inference from observational data is the goal of many data analyses in
the health and social sciences. However, academic statistics has often frowned
upon data analyses with a causal objective. The introduction of the term "data
science" provides a historic opportunity to redefine data analysis in such a
way that it naturally accommodates causal inference from observational data.
Like others before, we organize the scientific contributions of data science
into three classes of tasks: Description, prediction, and counterfactual
prediction (which includes causal inference). An explicit classification of
data science tasks is necessary to discuss the data, assumptions, and analytics
required to successfully accomplish each task. We argue that a failure to
adequately describe the role of subject-matter expert knowledge in data
analysis is a source of widespread misunderstandings about data science.
Specifically, causal analyses typically require not only good data and
algorithms, but also domain expert knowledge. We discuss the implications for
the use of data science to guide decision-making in the real world and to train
data scientists.
| null |
http://arxiv.org/abs/1804.10846v6
|
http://arxiv.org/pdf/1804.10846v6.pdf
| null |
[
"Miguel A. Hernán",
"John Hsu",
"Brian Healy"
] |
[
"Causal Inference",
"counterfactual",
"Decision Making",
"General Classification"
] | 2018-04-28T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "Causal inference is the process of drawing a conclusion about a causal connection based on the conditions of the occurrence of an effect. The main difference between causal inference and inference of association is that the former analyzes the response of the effect variable when the cause is changed.",
"full_name": "Causal inference",
"introduced_year": 2000,
"main_collection": null,
"name": "Causal inference",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/adversarial-attacks-on-face-detectors-using
|
1805.12302
| null | null |
Adversarial Attacks on Face Detectors using Neural Net based Constrained Optimization
|
Adversarial attacks involve adding small, often imperceptible, perturbations
to inputs with the goal of getting a machine learning model to misclassify
them. While many different adversarial attack strategies have been proposed on
image classification models, object detection pipelines have been much harder
to break. In this paper, we propose a novel strategy to craft adversarial
examples by solving a constrained optimization problem using an adversarial
generator network. Our approach is fast and scalable, requiring only a forward
pass through our trained generator network to craft an adversarial sample.
Unlike in many attack strategies, we show that the same trained generator is
capable of attacking new images without explicitly optimizing on them. We
evaluate our attack on a trained Faster R-CNN face detector on the cropped
300-W face dataset where we manage to reduce the number of detected faces to
$0.5\%$ of all originally detected faces. In a different experiment, also on
300-W, we demonstrate the robustness of our attack to a JPEG compression based
defense: a typical JPEG compression level of $75\%$ reduces the effectiveness of
our attack from only $0.5\%$ of detected faces to a modest $5.0\%$.
| null |
http://arxiv.org/abs/1805.12302v1
|
http://arxiv.org/pdf/1805.12302v1.pdf
| null |
[
"Avishek Joey Bose",
"Parham Aarabi"
] |
[
"Adversarial Attack",
"image-classification",
"Image Classification",
"object-detection",
"Object Detection"
] | 2018-05-31T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "A **Region Proposal Network**, or **RPN**, is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals. RPN and algorithms like [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) can be merged into a single network by sharing their convolutional features - using the recently popular terminology of neural networks with attention mechanisms, the RPN component tells the unified network where to look.\r\n\r\nRPNs are designed to efficiently predict region proposals with a wide range of scales and aspect ratios. RPNs use anchor boxes that serve as references at multiple scales and aspect ratios. The scheme can be thought of as a pyramid of regression references, which avoids enumerating images or filters of multiple scales or aspect ratios.",
"full_name": "Region Proposal Network",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Region Proposal",
"parent": null
},
"name": "RPN",
"source_title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks",
"source_url": "http://arxiv.org/abs/1506.01497v3"
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/5e9ebe8dadc0ea2841a46cfcd82a93b4ce0d4519/torchvision/ops/roi_pool.py#L10",
"description": "**Region of Interest Pooling**, or **RoIPool**, is an operation for extracting a small feature map (e.g., $7×7$) from each RoI in detection and segmentation based tasks. Features are extracted from each candidate box, and thereafter in models like [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn), are then classified and bounding box regression performed.\r\n\r\nThe actual scaling to, e.g., $7×7$, occurs by dividing the region proposal into equally sized sections, finding the largest value in each section, and then copying these max values to the output buffer. In essence, **RoIPool** is [max pooling](https://paperswithcode.com/method/max-pooling) on a discrete grid based on a box.\r\n\r\nImage Source: [Joyce Xu](https://towardsdatascience.com/deep-learning-for-object-detection-a-comprehensive-review-73930816d8d9)",
"full_name": "RoIPool",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**RoI Feature Extractors** are used to extract regions of interest features for tasks such as object detection. Below you can find a continuously updating list of RoI Feature Extractors.",
"name": "RoI Feature Extractors",
"parent": null
},
"name": "RoIPool",
"source_title": "Rich feature hierarchies for accurate object detection and semantic segmentation",
"source_url": "http://arxiv.org/abs/1311.2524v5"
},
{
"code_snippet_url": "https://github.com/chenyuntc/simple-faster-rcnn-pytorch/blob/367db367834efd8a2bc58ee0023b2b628a0e474d/model/faster_rcnn.py#L22",
"description": "**Faster R-CNN** is an object detection model that improves on [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) by utilising a region proposal network ([RPN](https://paperswithcode.com/method/rpn)) with the CNN model. The RPN shares full-image convolutional features with the detection network, enabling nearly cost-free region proposals. It is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) for detection. RPN and Fast [R-CNN](https://paperswithcode.com/method/r-cnn) are merged into a single network by sharing their convolutional features: the RPN component tells the unified network where to look.\r\n\r\nAs a whole, Faster R-CNN consists of two modules. The first module is a deep fully convolutional network that proposes regions, and the second module is the Fast R-CNN detector that uses the proposed regions.",
"full_name": "Faster R-CNN",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Object Detection Models** are architectures used to perform the task of object detection. Below you can find a continuously updating list of object detection models.",
"name": "Object Detection Models",
"parent": null
},
"name": "Faster R-CNN",
"source_title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks",
"source_url": "http://arxiv.org/abs/1506.01497v3"
}
] |
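The Softmax entry above gives the normalized-exponential formula; a minimal stdlib sketch (with the standard max-subtraction trick for numerical stability, an implementation detail not stated in the entry):

```python
import math

def softmax(z):
    # subtract the max for numerical stability, then exponentiate and normalize
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

p = softmax([1.0, 2.0, 3.0])
print(p)  # probabilities summing to 1, with the largest mass on the largest logit
```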
https://paperswithcode.com/paper/rotation-equivariance-and-invariance-in
|
1805.12301
| null | null |
Rotation Equivariance and Invariance in Convolutional Neural Networks
|
Performance of neural networks can be significantly improved by encoding
known invariance for particular tasks. Many image classification tasks, such as
those related to cellular imaging, exhibit invariance to rotation. We present a
novel scheme using the magnitude response of the 2D-discrete-Fourier transform
(2D-DFT) to encode rotational invariance in neural networks, along with a new,
efficient convolutional scheme for encoding rotational equivariance throughout
convolutional layers. We implemented this scheme for several image
classification tasks and demonstrated improved performance, in terms of
classification accuracy, time required to train the model, and robustness to
hyperparameter selection, over a standard CNN and another state-of-the-art
method.
|
Performance of neural networks can be significantly improved by encoding known invariance for particular tasks.
|
http://arxiv.org/abs/1805.12301v1
|
http://arxiv.org/pdf/1805.12301v1.pdf
| null |
[
"Benjamin Chidester",
"Minh N. Do",
"Jian Ma"
] |
[
"Classification",
"General Classification",
"image-classification",
"Image Classification"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/algebraic-expression-of-subjective-spatial
|
1805.11959
| null | null |
Algebraic Expression of Subjective Spatial and Temporal Patterns
|
Universal learning machine is a theory that studies machine learning from a
mathematical point of view. The outside world is reflected inside a universal
learning machine according to the pattern of incoming data. This is the
subjective pattern of the learning machine. In [2,4], we discussed subjective spatial pattern,
and established a powerful tool -- X-form, which is an algebraic expression for
subjective spatial pattern. However, as the initial stage of study, there we
only discussed spatial pattern. Here, we will discuss spatial and temporal
patterns, and algebraic expression for them.
| null |
http://arxiv.org/abs/1805.11959v2
|
http://arxiv.org/pdf/1805.11959v2.pdf
| null |
[
"Chuyu Xiong"
] |
[
"BIG-bench Machine Learning"
] | 2018-05-26T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/evaluating-reinforcement-learning-algorithms
|
1805.12298
| null | null |
Evaluating Reinforcement Learning Algorithms in Observational Health Settings
|
Much attention has been devoted recently to the development of machine
learning algorithms with the goal of improving treatment policies in
healthcare. Reinforcement learning (RL) is a sub-field within machine learning
that is concerned with learning how to make sequences of decisions so as to
optimize long-term effects. Already, RL algorithms have been proposed to
identify decision-making strategies for mechanical ventilation, sepsis
management and treatment of schizophrenia. However, before implementing
treatment policies learned by black-box algorithms in high-stakes clinical
decision problems, special care must be taken in the evaluation of these
policies.
In this document, our goal is to expose some of the subtleties associated
with evaluating RL algorithms in healthcare. We aim to provide a conceptual
starting point for clinical and computational researchers to ask the right
questions when designing and evaluating algorithms for new ways of treating
patients. In the following, we describe how choices about how to summarize a
history, variance of statistical estimators, and confounders in more ad-hoc
measures can result in unreliable, even misleading estimates of the quality of
a treatment policy. We also provide suggestions for mitigating these
effects---for while there is much promise for mining observational health data
to uncover better treatment policies, evaluation must be performed
thoughtfully.
| null |
http://arxiv.org/abs/1805.12298v1
|
http://arxiv.org/pdf/1805.12298v1.pdf
| null |
[
"Omer Gottesman",
"Fredrik Johansson",
"Joshua Meier",
"Jack Dent",
"Dong-hun Lee",
"Srivatsan Srinivasan",
"Linying Zhang",
"Yi Ding",
"David Wihl",
"Xuefeng Peng",
"Jiayu Yao",
"Isaac Lage",
"Christopher Mosch",
"Li-wei H. Lehman",
"Matthieu Komorowski",
"Aldo Faisal",
"Leo Anthony Celi",
"David Sontag",
"Finale Doshi-Velez"
] |
[
"BIG-bench Machine Learning",
"Decision Making",
"Management",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/root-cause-analysis-for-time-series-anomalies
|
1805.12296
| null | null |
Root-cause Analysis for Time-series Anomalies via Spatiotemporal Graphical Modeling in Distributed Complex Systems
|
Performance monitoring, anomaly detection, and root-cause analysis in complex
cyber-physical systems (CPSs) are often highly intractable due to widely
diverse operational modes, disparate data types, and complex fault propagation
mechanisms. This paper presents a new data-driven framework for root-cause
analysis, based on a spatiotemporal graphical modeling approach built on the
concept of symbolic dynamics for discovering and representing causal
interactions among sub-systems of complex CPSs. We formulate the root-cause
analysis problem as a minimization problem via the proposed inference based
metric and present two approximate approaches for root-cause analysis, namely
the sequential state switching ($S^3$, based on free energy concept of a
restricted Boltzmann machine, RBM) and artificial anomaly association ($A^3$, a
classification framework using deep neural networks, DNN). Synthetic data from
cases with failed pattern(s) and anomalous node(s) are simulated to validate
the proposed approaches. Real dataset based on Tennessee Eastman process (TEP)
is also used for comparison with other approaches. The results show that: (1)
$S^3$ and $A^3$ approaches can obtain high accuracy in root-cause analysis
under both pattern-based and node-based fault scenarios, in addition to
successfully handling multiple nominal operating modes, (2) the proposed
tool-chain is shown to be scalable while maintaining high accuracy, and (3) the
proposed framework is robust and adaptive in different fault conditions and
performs better in comparison with the state-of-the-art methods.
| null |
http://arxiv.org/abs/1805.12296v1
|
http://arxiv.org/pdf/1805.12296v1.pdf
| null |
[
"Chao Liu",
"Kin Gwn Lore",
"Zhanhong Jiang",
"Soumik Sarkar"
] |
[
"Anomaly Detection",
"Time Series",
"Time Series Analysis"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/dynamic-advisor-based-ensemble-dynabe-case
|
1805.12111
| null | null |
Dynamic Advisor-Based Ensemble (dynABE): Case study in stock trend prediction of critical metal companies
|
Stock trend prediction is a challenging task due to the market's noise, and
machine learning techniques have recently been successful in coping with this
challenge. In this research, we create a novel framework for stock prediction,
Dynamic Advisor-Based Ensemble (dynABE). dynABE explores domain-specific areas
based on the companies of interest, diversifies the feature set by creating
different "advisors" that each handles a different area, follows an effective
model ensemble procedure for each advisor, and combines the advisors together
in a second-level ensemble through an online update strategy we developed.
dynABE is able to adapt to price pattern changes of the market during the
active trading period robustly, without needing to retrain the entire model. We
test dynABE on three cobalt-related companies, and it achieves the best-case
misclassification error of 31.12% and an annualized absolute return of 359.55%
with zero maximum drawdown. dynABE also consistently outperforms the baseline
models of support vector machine, neural network, and random forest in all case
studies.
| null |
http://arxiv.org/abs/1805.12111v4
|
http://arxiv.org/pdf/1805.12111v4.pdf
| null |
[
"Zhengyang Dong"
] |
[
"Stock Prediction",
"Stock Trend Prediction",
"Time Series Analysis"
] | 2018-05-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/image-dependent-local-entropy-models-for
|
1805.12295
| null | null |
Image-Dependent Local Entropy Models for Learned Image Compression
|
The leading approach for image compression with artificial neural networks
(ANNs) is to learn a nonlinear transform and a fixed entropy model that are
optimized for rate-distortion performance. We show that this approach can be
significantly improved by incorporating spatially local, image-dependent
entropy models. The key insight is that existing ANN-based methods learn an
entropy model that is shared between the encoder and decoder, but they do not
transmit any side information that would allow the model to adapt to the
structure of a specific image. We present a method for augmenting ANN-based
image coders with image-dependent side information that leads to a 17.8% rate
reduction over a state-of-the-art ANN-based baseline model on a standard
evaluation set, and 70-98% reductions on images with low visual complexity that
are poorly captured by a fixed, global entropy model.
| null |
http://arxiv.org/abs/1805.12295v1
|
http://arxiv.org/pdf/1805.12295v1.pdf
| null |
[
"David Minnen",
"George Toderici",
"Saurabh Singh",
"Sung Jin Hwang",
"Michele Covell"
] |
[
"Decoder",
"Image Compression"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/adversarial-constraint-learning-for
|
1805.10561
| null | null |
Adversarial Constraint Learning for Structured Prediction
|
Constraint-based learning reduces the burden of collecting labels by having
users specify general properties of structured outputs, such as constraints
imposed by physical laws. We propose a novel framework for simultaneously
learning these constraints and using them for supervision, bypassing the
difficulty of using domain expertise to manually specify constraints. Learning
requires a black-box simulator of structured outputs, which generates valid
labels, but need not model their corresponding inputs or the input-label
relationship. At training time, we constrain the model to produce outputs that
cannot be distinguished from simulated labels by adversarial training.
Providing our framework with a small number of labeled inputs gives rise to a
new semi-supervised structured prediction model; we evaluate this model on
multiple tasks --- tracking, pose estimation and time series prediction --- and
find that it achieves high accuracy with only a small number of labeled inputs.
In some cases, no labels are required at all.
|
Constraint-based learning reduces the burden of collecting labels by having users specify general properties of structured outputs, such as constraints imposed by physical laws.
|
http://arxiv.org/abs/1805.10561v2
|
http://arxiv.org/pdf/1805.10561v2.pdf
| null |
[
"Hongyu Ren",
"Russell Stewart",
"Jiaming Song",
"Volodymyr Kuleshov",
"Stefano Ermon"
] |
[
"Pose Estimation",
"Prediction",
"Structured Prediction",
"Time Series",
"Time Series Analysis",
"Time Series Prediction",
"valid"
] | 2018-05-27T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/empirical-evaluation-of-character-based-model
|
1805.12291
| null | null |
Empirical Evaluation of Character-Based Model on Neural Named-Entity Recognition in Indonesian Conversational Texts
|
Despite the long history of named-entity recognition (NER) task in the
natural language processing community, previous work rarely studied the task on
conversational texts. Such texts are challenging because they contain a lot of
word variations which increase the number of out-of-vocabulary (OOV) words. The
high number of OOV words poses a difficulty for word-based neural models.
Meanwhile, there is plenty of evidence to the effectiveness of character-based
neural models in mitigating this OOV problem. We report an empirical evaluation
of neural sequence labeling models with character embedding to tackle NER task
in Indonesian conversational texts. Our experiments show that (1) character
models outperform word embedding-only models by up to 4 $F_1$ points, (2)
character models perform better in OOV cases with an improvement of as high as
15 $F_1$ points, and (3) character models are robust against a very high OOV
rate.
|
We report an empirical evaluation of neural sequence labeling models with character embedding to tackle NER task in Indonesian conversational texts.
|
http://arxiv.org/abs/1805.12291v3
|
http://arxiv.org/pdf/1805.12291v3.pdf
|
WS 2018 11
|
[
"Kemal Kurniawan",
"Samuel Louvan"
] |
[
"named-entity-recognition",
"Named Entity Recognition",
"Named Entity Recognition (NER)",
"NER"
] | 2018-05-31T00:00:00 |
https://aclanthology.org/W18-6112
|
https://aclanthology.org/W18-6112.pdf
|
empirical-evaluation-of-character-based-model-1
| null |
[] |
https://paperswithcode.com/paper/data-driven-root-cause-analysis-for
|
1605.06421
| null | null |
Data-driven root-cause analysis for distributed system anomalies
|
Modern distributed cyber-physical systems encounter a large variety of
anomalies and in many cases, they are vulnerable to catastrophic fault
propagation scenarios due to strong connectivity among the sub-systems. In this
regard, root-cause analysis becomes highly intractable due to complex fault
propagation mechanisms in combination with diverse operating modes. This paper
presents a new data-driven framework for root-cause analysis for addressing
such issues. The framework is based on a spatiotemporal feature extraction
scheme for distributed cyber-physical systems built on the concept of symbolic
dynamics for discovering and representing causal interactions among subsystems
of a complex system. We present two approaches for root-cause analysis, namely
the sequential state switching ($S^3$, based on free energy concept of a
Restricted Boltzmann Machine, RBM) and artificial anomaly association ($A^3$, a
multi-class classification framework using deep neural networks, DNN).
Synthetic data from cases with failed pattern(s) and anomalous node are
simulated to validate the proposed approaches, then compared with the
performance of vector autoregressive (VAR) model-based root-cause analysis.
Real dataset based on Tennessee Eastman process (TEP) is also used for
validation. The results show that: (1) $S^3$ and $A^3$ approaches can obtain
high accuracy in root-cause analysis and successfully handle multiple nominal
operation modes, and (2) the proposed tool-chain is shown to be scalable while
maintaining high accuracy.
| null |
http://arxiv.org/abs/1605.06421v2
|
http://arxiv.org/pdf/1605.06421v2.pdf
| null |
[
"Chao Liu",
"Kin Gwn Lore",
"Soumik Sarkar"
] |
[
"Multi-class Classification"
] | 2016-05-20T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/efficient-traffic-sign-recognition-with-scale
|
1805.12289
| null | null |
Efficient Traffic-Sign Recognition with Scale-aware CNN
|
The paper presents a Traffic Sign Recognition (TSR) system, which can fast
and accurately recognize traffic signs of different sizes in images. The system
consists of two well-designed Convolutional Neural Networks (CNNs), one for
region proposals of traffic signs and one for classification of each region. In
the proposal CNN, a Fully Convolutional Network (FCN) with a dual multi-scale
architecture is proposed to achieve scale invariant detection. In training the
proposal network, a modified "Online Hard Example Mining" (OHEM) scheme is
adopted to suppress false positives. The classification network fuses
multi-scale features as representation and adopts an "Inception" module for
efficiency. We evaluate the proposed TSR system and its components with
extensive experiments. Our method obtains $99.88\%$ precision and $96.61\%$
recall on the Swedish Traffic Signs Dataset (STSD), higher than
state-of-the-art methods. Besides, our system is faster and more lightweight
than state-of-the-art deep learning networks for traffic sign recognition.
| null |
http://arxiv.org/abs/1805.12289v1
|
http://arxiv.org/pdf/1805.12289v1.pdf
| null |
[
"Yuchen Yang",
"Shuo Liu",
"Wei Ma",
"Qiuyuan Wang",
"Zheng Liu"
] |
[
"General Classification",
"Traffic Sign Recognition"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-mixture-model-for-aggregation-of-multiple
|
1806.00003
| null | null |
A mixture model for aggregation of multiple pre-trained weak classifiers
|
Deep networks have gained immense popularity in Computer Vision and other
fields in the past few years due to their remarkable performance on
recognition/classification tasks surpassing the state-of-the art. One of the
keys to their success lies in the richness of the automatically learned
features. In order to get very good accuracy, one popular option is to increase
the depth of the network. Training such a deep network is however infeasible or
impractical with moderate computational resources and budget. The other
alternative to increase the performance is to learn multiple weak classifiers
and boost their performance using a boosting algorithm or a variant thereof.
But, one of the problems with boosting algorithms is that they require a
re-training of the networks based on the misclassified samples. Motivated by
these problems, in this work we propose an aggregation technique which combines
the output of multiple weak classifiers. We formulate the aggregation problem
using a mixture model fitted to the trained classifier outputs. Our model does
not require any re-training of the `weak' networks and is computationally very
fast (takes $<30$ seconds to run in our experiments). Thus, using a less
expensive training stage and without doing any re-training of networks, we
experimentally demonstrate that it is possible to boost the performance by
$12\%$. Furthermore, we present experiments using hand-crafted features and
improved the classification performance using the proposed aggregation
technique. One of the major advantages of our framework is that our framework
allows one to combine features that are very likely to be of distinct
dimensions since they are extracted using different networks/algorithms. Our
experimental results demonstrate a significant performance gain from the use of
our aggregation technique at a very small computational cost.
| null |
http://arxiv.org/abs/1806.00003v1
|
http://arxiv.org/pdf/1806.00003v1.pdf
| null |
[
"Rudrasis Chakraborty",
"Chun-Hao Yang",
"Baba C. Vemuri"
] |
[
"General Classification"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/graph-edge-convolutional-neural-networks-for
|
1805.06184
| null | null |
Graph Edge Convolutional Neural Networks for Skeleton Based Action Recognition
|
This paper investigates body bones from skeleton data for skeleton based
action recognition. Body joints, as the direct result of mature pose estimation
technologies, are always the key concerns of traditional action recognition
methods. However, instead of joints, we humans naturally identify how the human
body moves according to shapes, lengths and places of bones, which are more
obvious and stable for observation. Hence given graphs generated from skeleton
data, we propose to develop convolutions over graph edges that correspond to
bones in human skeleton. We describe an edge by integrating its spatial
neighboring edges to explore the cooperation between different bones, as well
as its temporal neighboring edges to address the consistency of movements in an
action. A graph edge convolutional neural network is then designed for skeleton
based action recognition. Considering the complementarity between graph node
convolution and graph edge convolution, we additionally construct two hybrid
neural networks to combine graph node convolutional neural network and graph
edge convolutional neural network using shared intermediate layers.
Experimental results on Kinetics and NTU-RGB+D datasets demonstrate that our
graph edge convolution is effective to capture characteristic of actions and
our graph edge convolutional neural network significantly outperforms existing
state-of-art skeleton based action recognition methods. Additionally, more
performance improvements can be achieved by the hybrid networks.
| null |
http://arxiv.org/abs/1805.06184v2
|
http://arxiv.org/pdf/1805.06184v2.pdf
| null |
[
"Xikun Zhang",
"Chang Xu",
"Xinmei Tian",
"DaCheng Tao"
] |
[
"Action Recognition",
"Pose Estimation",
"Skeleton Based Action Recognition",
"Temporal Action Localization"
] | 2018-05-16T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/deep-learning-with-cinematic-rendering-fine
|
1805.08400
| null | null |
Deep Learning with Cinematic Rendering: Fine-Tuning Deep Neural Networks Using Photorealistic Medical Images
|
Deep learning has emerged as a powerful artificial intelligence tool to
interpret medical images for a growing variety of applications. However, the
paucity of medical imaging data with high-quality annotations that is necessary
for training such methods ultimately limits their performance. Medical data is
challenging to acquire due to privacy issues, shortage of experts available for
annotation, limited representation of rare conditions and cost. This problem
has previously been addressed by using synthetically generated data. However,
networks trained on synthetic data often fail to generalize to real data.
Cinematic rendering simulates the propagation and interaction of light passing
through tissue models reconstructed from CT data, enabling the generation of
photorealistic images. In this paper, we present one of the first applications
of cinematic rendering in deep learning, in which we propose to fine-tune
synthetic data-driven networks using cinematically rendered CT data for the
task of monocular depth estimation in endoscopy. Our experiments demonstrate
that: (a) Convolutional Neural Networks (CNNs) trained on synthetic data and
fine-tuned on photorealistic cinematically rendered data adapt better to real
medical images and demonstrate more robust performance when compared to
networks with no fine-tuning, (b) these fine-tuned networks require less
training data to converge to an optimal solution, and (c) fine-tuning with data
from a variety of photorealistic rendering conditions of the same scene
prevents the network from learning patient-specific information and aids in
generalizability of the model. Our empirical evaluation demonstrates that
networks fine-tuned with cinematically rendered data predict depth with 56.87%
less error for rendered endoscopy images and 27.49% less error for real porcine
colon endoscopy images.
| null |
http://arxiv.org/abs/1805.08400v3
|
http://arxiv.org/pdf/1805.08400v3.pdf
| null |
[
"Faisal Mahmood",
"Richard Chen",
"Sandra Sudarsky",
"Daphne Yu",
"Nicholas J. Durr"
] |
[
"Depth Estimation",
"Monocular Depth Estimation"
] | 2018-05-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/on-the-impact-of-various-types-of-noise-on
|
1805.12282
| null | null |
On the Impact of Various Types of Noise on Neural Machine Translation
|
We examine how various types of noise in the parallel training data impact
the quality of neural machine translation systems. We create five types of
artificial noise and analyze how they degrade performance in neural and
statistical machine translation. We find that neural models are generally more
harmed by noise than statistical models. For one especially egregious type of
noise they learn to just copy the input sentence.
|
We examine how various types of noise in the parallel training data impact the quality of neural machine translation systems.
|
http://arxiv.org/abs/1805.12282v1
|
http://arxiv.org/pdf/1805.12282v1.pdf
|
WS 2018 7
|
[
"Huda Khayrallah",
"Philipp Koehn"
] |
[
"Machine Translation",
"Sentence",
"Translation"
] | 2018-05-31T00:00:00 |
https://aclanthology.org/W18-2709
|
https://aclanthology.org/W18-2709.pdf
|
on-the-impact-of-various-types-of-noise-on-1
| null |
[] |
https://paperswithcode.com/paper/bayesian-pose-graph-optimization-via-bingham
|
1805.12279
| null | null |
Bayesian Pose Graph Optimization via Bingham Distributions and Tempered Geodesic MCMC
|
We introduce Tempered Geodesic Markov Chain Monte Carlo (TG-MCMC) algorithm
for initializing pose graph optimization problems, arising in various scenarios
such as SFM (structure from motion) or SLAM (simultaneous localization and
mapping). TG-MCMC is the first of its kind, as it unites asymptotically global
non-convex optimization on the spherical manifold of quaternions with posterior
sampling, in order to provide both reliable initial poses and uncertainty
estimates that are informative about the quality of individual solutions. We
devise rigorous theoretical convergence guarantees for our method and
extensively evaluate it on synthetic and real benchmark datasets. Besides its
elegance in formulation and theory, we show that our method is robust to
missing data, noise and the estimated uncertainties capture intuitive
properties of the data.
| null |
http://arxiv.org/abs/1805.12279v2
|
http://arxiv.org/pdf/1805.12279v2.pdf
|
NeurIPS 2018 12
|
[
"Tolga Birdal",
"Umut Şimşekli",
"M. Onur Eken",
"Slobodan Ilic"
] |
[
"Simultaneous Localization and Mapping"
] | 2018-05-31T00:00:00 |
http://papers.nips.cc/paper/7314-bayesian-pose-graph-optimization-via-bingham-distributions-and-tempered-geodesic-mcmc
|
http://papers.nips.cc/paper/7314-bayesian-pose-graph-optimization-via-bingham-distributions-and-tempered-geodesic-mcmc.pdf
|
bayesian-pose-graph-optimization-via-bingham-1
| null |
[] |
https://paperswithcode.com/paper/learning-factorized-representations-for-open
|
1805.12277
| null |
SJe3HiC5KX
|
Learning Factorized Representations for Open-set Domain Adaptation
|
Domain adaptation for visual recognition has undergone great progress in the
past few years. Nevertheless, most existing methods work in the so-called
closed-set scenario, assuming that the classes depicted by the target images
are exactly the same as those of the source domain. In this paper, we tackle
the more challenging, yet more realistic case of open-set domain adaptation,
where new, unknown classes can be present in the target data. While, in the
unsupervised scenario, one cannot expect to be able to identify each specific
new class, we aim to automatically detect which samples belong to these new
classes and discard them from the recognition process. To this end, we rely on
the intuition that the source and target samples depicting the known classes
can be generated by a shared subspace, whereas the target samples from unknown
classes come from a different, private subspace. We therefore introduce a
framework that factorizes the data into shared and private parts, while
encouraging the shared representation to be discriminative. Our experiments on
standard benchmarks evidence that our approach significantly outperforms the
state-of-the-art in open-set domain adaptation.
|
To this end, we rely on the intuition that the source and target samples depicting the known classes can be generated by a shared subspace, whereas the target samples from unknown classes come from a different, private subspace.
|
http://arxiv.org/abs/1805.12277v1
|
http://arxiv.org/pdf/1805.12277v1.pdf
|
ICLR 2019 5
|
[
"Mahsa Baktashmotlagh",
"Masoud Faraki",
"Tom Drummond",
"Mathieu Salzmann"
] |
[
"Domain Adaptation"
] | 2018-05-31T00:00:00 |
https://openreview.net/forum?id=SJe3HiC5KX
|
https://openreview.net/pdf?id=SJe3HiC5KX
|
learning-factorized-representations-for-open-1
| null |
[] |
https://paperswithcode.com/paper/word-searching-in-scene-image-and-video-frame
|
1708.05529
| null | null |
Word Searching in Scene Image and Video Frame in Multi-Script Scenario using Dynamic Shape Coding
|
Retrieval of text information from natural scene images and video frames is a
challenging task due to its inherent problems like complex character shapes,
low resolution, background noise, etc. Available OCR systems often fail to
retrieve such information in scene/video frames. Keyword spotting, an
alternative way to retrieve information, performs efficient text searching in
such scenarios. However, current word spotting techniques in scene/video images
are script-specific and they are mainly developed for Latin script. This paper
presents a novel word spotting framework using dynamic shape coding for text
retrieval in natural scene image and video frames. The framework is designed to
search query keyword from multiple scripts with the help of on-the-fly
script-wise keyword generation for the corresponding script. We have used a
two-stage word spotting approach using Hidden Markov Model (HMM) to detect the
translated keyword in a given text line by identifying the script of the line.
A novel unsupervised dynamic shape coding based scheme has been used to group
similar shape characters to avoid confusion and to improve text alignment.
Next, the hypotheses locations are verified to improve retrieval performance.
To evaluate the proposed system for searching keyword from natural scene image
and video frames, we have considered two popular Indic scripts such as Bangla
(Bengali) and Devanagari along with English. Inspired by the zone-wise
recognition approach in Indic scripts[1], zone-wise text information has been
used to improve the traditional word spotting performance in Indic scripts. For
our experiment, a dataset consisting of images of different scenes and video
frames of English, Bangla and Devanagari scripts were considered. The results
obtained showed the effectiveness of our proposed word spotting approach.
| null |
http://arxiv.org/abs/1708.05529v6
|
http://arxiv.org/pdf/1708.05529v6.pdf
| null |
[
"Partha Pratim Roy",
"Ayan Kumar Bhunia",
"Avirup Bhattacharyya",
"Umapada Pal"
] |
[
"Keyword Spotting",
"Optical Character Recognition (OCR)",
"Retrieval",
"Text Retrieval"
] | 2017-08-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/optimized-participation-of-multiple-fusion
|
1805.12270
| null | null |
Optimized Participation of Multiple Fusion Functions in Consensus Creation: An Evolutionary Approach
|
Recent studies show that ensemble methods enhance the stability and
robustness of unsupervised learning. These approaches are successfully utilized
to construct multiple clustering and combine them into a one representative
consensus clustering of an improved quality. The quality of the consensus
clustering directly depends on the fusion functions used in the combination. In
this article, the hierarchical clustering ensemble techniques are extended by
introducing a new evolutionary fusion function. In the proposed method,
multiple hierarchical clustering methods are generated via bagging. Thereafter,
the consensus clustering is obtained using the search capability of genetic
algorithm among different aggregated clustering methods made by different
fusion functions. Putting some popular data sets to empirical study, the
quality of the proposed method is compared with regular clustering ensembles.
Experimental results demonstrate the accuracy improvement of the aggregated
clustering results.
| null |
http://arxiv.org/abs/1805.12270v1
|
http://arxiv.org/pdf/1805.12270v1.pdf
| null |
[
"Elaheh Rashedi",
"Abdolreza Mirzaei"
] |
[
"Clustering",
"Clustering Ensemble"
] | 2018-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/grouped-convolutional-neural-networks-for
|
1703.09938
| null | null |
Grouped Convolutional Neural Networks for Multivariate Time Series
|
Analyzing multivariate time series data is important for many applications
such as automated control, fault diagnosis and anomaly detection. One of the
key challenges is to learn latent features automatically from dynamically
changing multivariate input. In visual recognition tasks, convolutional neural
networks (CNNs) have been successful to learn generalized feature extractors
with shared parameters over the spatial domain. However, when high-dimensional
multivariate time series is given, designing an appropriate CNN model structure
becomes challenging because the kernels may need to be extended through the
full dimension of the input volume. To address this issue, we present two
structure learning algorithms for deep CNN models. Our algorithms exploit the
covariance structure over multiple time series to partition input volume into
groups. The first algorithm learns the group CNN structures explicitly by
clustering individual input sequences. The second algorithm learns the group
CNN structures implicitly from the error backpropagation. In experiments with
two real-world datasets, we demonstrate that our group CNNs outperform existing
CNN based regression methods.
| null |
http://arxiv.org/abs/1703.09938v4
|
http://arxiv.org/pdf/1703.09938v4.pdf
| null |
[
"Subin Yi",
"Janghoon Ju",
"Man-Ki Yoon",
"Jaesik Choi"
] |
[
"Anomaly Detection",
"Clustering",
"Fault Diagnosis",
"Time Series",
"Time Series Analysis"
] | 2017-03-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/slsdeep-skin-lesion-segmentation-based-on
|
1805.10241
| null | null |
SLSDeep: Skin Lesion Segmentation Based on Dilated Residual and Pyramid Pooling Networks
|
Skin lesion segmentation (SLS) in dermoscopic images is a crucial task for
automated diagnosis of melanoma. In this paper, we present a robust deep
learning SLS model, so-called SLSDeep, which is represented as an
encoder-decoder network. The encoder network is constructed by dilated residual
layers, in turn, a pyramid pooling network followed by three convolution layers
is used for the decoder. Unlike the traditional methods employing a
cross-entropy loss, we investigated a loss function by combining both Negative
Log Likelihood (NLL) and End Point Error (EPE) to accurately segment the
melanoma regions with sharp boundaries. The robustness of the proposed model
was evaluated on two public databases: ISBI 2016 and 2017 for skin lesion
analysis towards melanoma detection challenge. The proposed model outperforms
the state-of-the-art methods in terms of segmentation accuracy. Moreover, it is
capable of segmenting more than $100$ images of size 384x384 per second on a
recent GPU.
| null |
http://arxiv.org/abs/1805.10241v2
|
http://arxiv.org/pdf/1805.10241v2.pdf
| null |
[
"Md. Mostafa Kamal Sarker",
"Hatem A. Rashwan",
"Farhan Akram",
"Syeda Furruka Banu",
"Adel Saleh",
"Vivek Kumar Singh",
"Forhad U H Chowdhury",
"Saddam Abdulwahab",
"Santiago Romani",
"Petia Radeva",
"Domenec Puig"
] |
[
"Decoder",
"GPU",
"Lesion Segmentation",
"Segmentation",
"Skin Lesion Segmentation"
] | 2018-05-25T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/geometric-understanding-of-deep-learning
|
1805.10451
| null | null |
Geometric Understanding of Deep Learning
|
Deep learning is the mainstream technique for many machine learning tasks,
including image recognition, machine translation, speech recognition, and so
on. It has outperformed conventional methods in various fields and achieved
great successes. Unfortunately, the understanding of how it works remains
unclear. It is of central importance to lay down the theoretical foundation for
deep learning.
In this work, we give a geometric view to understand deep learning: we show
that the fundamental principle attributing to the success is the manifold
structure in data, namely natural high dimensional data concentrates close to a
low-dimensional manifold, deep learning learns the manifold and the probability
distribution on it.
We further introduce the concepts of rectified linear complexity of a deep
neural network, measuring its learning capability, and rectified linear
complexity of an embedding manifold, describing its difficulty to be learned. Then we show
for any deep neural network with fixed architecture, there exists a manifold
that cannot be learned by the network. Finally, we propose to apply optimal
mass transportation theory to control the probability distribution in the
latent space.
| null |
http://arxiv.org/abs/1805.10451v2
|
http://arxiv.org/pdf/1805.10451v2.pdf
| null |
[
"Na Lei",
"Zhongxuan Luo",
"Shing-Tung Yau",
"David Xianfeng Gu"
] |
[
"Deep Learning",
"Machine Translation",
"speech-recognition",
"Speech Recognition",
"Translation"
] | 2018-05-26T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/end-to-end-learning-for-the-deep-multivariate
|
1803.08591
| null | null |
End-to-End Learning for the Deep Multivariate Probit Model
|
The multivariate probit model (MVP) is a popular classic model for studying
binary responses of multiple entities. Nevertheless, the computational
challenge of learning the MVP model, given that its likelihood involves
integrating over a multidimensional constrained space of latent variables,
significantly limits its application in practice. We propose a flexible deep
generalization of the classic MVP, the Deep Multivariate Probit Model (DMVP),
which is an end-to-end learning scheme that uses an efficient parallel sampling
process of the multivariate probit model to exploit GPU-boosted deep neural
networks. We present both theoretical and empirical analysis of the convergence
behavior of DMVP's sampling process with respect to the resolution of the
correlation structure. We provide convergence guarantees for DMVP and our
empirical analysis demonstrates the advantages of DMVP's sampling compared with
standard MCMC-based methods. We also show that when applied to multi-entity
modelling problems, which are natural DMVP applications, DMVP trains faster
than classical MVP, by at least an order of magnitude, captures rich
correlations among entities, and further improves the joint likelihood of
entities compared with several competitive models.
| null |
http://arxiv.org/abs/1803.08591v4
|
http://arxiv.org/pdf/1803.08591v4.pdf
|
ICML 2018 7
|
[
"Di Chen",
"Yexiang Xue",
"Carla P. Gomes"
] |
[
"GPU"
] | 2018-03-22T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2259
|
http://proceedings.mlr.press/v80/chen18o/chen18o.pdf
|
end-to-end-learning-for-the-deep-multivariate-1
| null |
[] |
https://paperswithcode.com/paper/learning-to-adapt-in-dynamic-real-world
|
1803.11347
| null |
HyztsoC5Y7
|
Learning to Adapt in Dynamic, Real-World Environments Through Meta-Reinforcement Learning
|
Although reinforcement learning methods can achieve impressive results in
simulation, the real world presents two major challenges: generating samples is
exceedingly expensive, and unexpected perturbations or unseen situations cause
proficient but specialized policies to fail at test time. Given that it is
impractical to train separate policies to accommodate all situations the agent
may see in the real world, this work proposes to learn how to quickly and
effectively adapt online to new tasks. To enable sample-efficient learning, we
consider learning online adaptation in the context of model-based reinforcement
learning. Our approach uses meta-learning to train a dynamics model prior such
that, when combined with recent data, this prior can be rapidly adapted to the
local context. Our experiments demonstrate online adaptation for continuous
control tasks on both simulated and real-world agents. We first show simulated
agents adapting their behavior online to novel terrains, crippled body parts,
and highly-dynamic environments. We also illustrate the importance of
incorporating online adaptation into autonomous agents that operate in the real
world by applying our method to a real dynamic legged millirobot. We
demonstrate the agent's learned ability to quickly adapt online to a missing
leg, adjust to novel terrains and slopes, account for miscalibration or errors
in pose estimation, and compensate for pulling payloads.
|
Although reinforcement learning methods can achieve impressive results in simulation, the real world presents two major challenges: generating samples is exceedingly expensive, and unexpected perturbations or unseen situations cause proficient but specialized policies to fail at test time.
|
http://arxiv.org/abs/1803.11347v6
|
http://arxiv.org/pdf/1803.11347v6.pdf
|
ICLR 2019 5
|
[
"Anusha Nagabandi",
"Ignasi Clavera",
"Simin Liu",
"Ronald S. Fearing",
"Pieter Abbeel",
"Sergey Levine",
"Chelsea Finn"
] |
[
"continuous-control",
"Continuous Control",
"Meta-Learning",
"Meta Reinforcement Learning",
"Model-based Reinforcement Learning",
"Pose Estimation",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-03-30T00:00:00 |
https://openreview.net/forum?id=HyztsoC5Y7
|
https://openreview.net/pdf?id=HyztsoC5Y7
|
learning-to-adapt-in-dynamic-real-world-1
| null |
[] |
https://paperswithcode.com/paper/rehabilitating-the-colorchecker-dataset-for
|
1805.12262
| null | null |
Rehabilitating the ColorChecker Dataset for Illuminant Estimation
|
In a previous work, it was shown that there is a curious problem with the
benchmark ColorChecker dataset for illuminant estimation. To wit, this dataset
has at least 3 different sets of ground-truths. Typically, for a single
algorithm a single ground-truth is used. But then different algorithms, whose
performance is measured with respect to different ground-truths, are compared
against each other and then ranked. This makes no sense. We show in this paper
that there are also errors in how each ground-truth set was calculated. As a
result, all performance rankings based on the ColorChecker dataset - and there
are scores of these - are inaccurate.
In this paper, we re-generate a new 'recommended' set of ground-truth based
on the calculation methodology described by Shi and Funt. We then review the
performance evaluation of a range of illuminant estimation algorithms. Compared
with the legacy ground-truths, we find that the difference in how algorithms
perform can be large, with many local rankings of algorithms being reversed.
Finally, we draw the reader's attention to our new 'open' data repository
which, we hope, will allow the ColorChecker set to be rehabilitated and once
again to become a useful benchmark for illuminant estimation algorithms.
| null |
http://arxiv.org/abs/1805.12262v3
|
http://arxiv.org/pdf/1805.12262v3.pdf
| null |
[
"Ghalia Hemrit",
"Graham D. Finlayson",
"Arjan Gijsenij",
"Peter Gehler",
"Simone Bianco",
"Brian Funt",
"Mark Drew",
"Lilong Shi"
] |
[] | 2018-05-30T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/grader-variability-and-the-importance-of
|
1710.01711
| null | null |
Grader variability and the importance of reference standards for evaluating machine learning models for diabetic retinopathy
|
Diabetic retinopathy (DR) and diabetic macular edema are common complications
of diabetes which can lead to vision loss. The grading of DR is a fairly
complex process that requires the detection of fine features such as
microaneurysms, intraretinal hemorrhages, and intraretinal microvascular
abnormalities. Because of this, there can be a fair amount of grader
variability. There are different methods of obtaining the reference standard
and resolving disagreements between graders, and while it is usually accepted
that adjudication until full consensus will yield the best reference standard,
the difference between various methods of resolving disagreements has not been
examined extensively. In this study, we examine the variability in different
methods of grading, definitions of reference standards, and their effects on
building deep learning models for the detection of diabetic eye disease. We
find that a small set of adjudicated DR grades allows substantial improvements
in algorithm performance. The resulting algorithm's performance was on par with
that of individual U.S. board-certified ophthalmologists and retinal
specialists.
| null |
http://arxiv.org/abs/1710.01711v3
|
http://arxiv.org/pdf/1710.01711v3.pdf
| null |
[
"Jonathan Krause",
"Varun Gulshan",
"Ehsan Rahimy",
"Peter Karth",
"Kasumi Widner",
"Greg S. Corrado",
"Lily Peng",
"Dale R. Webster"
] |
[
"BIG-bench Machine Learning"
] | 2017-10-04T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/multi-mention-learning-for-reading
|
1711.00894
| null |
HyRnez-RW
|
Multi-Mention Learning for Reading Comprehension with Neural Cascades
|
Reading comprehension is a challenging task, especially when executed across
longer or across multiple evidence documents, where the answer is likely to
reoccur. Existing neural architectures typically do not scale to the entire
evidence, and hence, resort to selecting a single passage in the document
(either via truncation or other means), and carefully searching for the answer
within that passage. However, in some cases, this strategy can be suboptimal,
since by focusing on a specific passage, it becomes difficult to leverage
multiple mentions of the same answer throughout the document. In this work, we
take a different approach by constructing lightweight models that are combined
in a cascade to find the answer. Each submodel consists only of feed-forward
networks equipped with an attention mechanism, making it trivially
parallelizable. We show that our approach can scale to approximately an order
of magnitude larger evidence documents and can aggregate information at the
representation level from multiple mentions of each answer candidate across the
document. Empirically, our approach achieves state-of-the-art performance on
both the Wikipedia and web domains of the TriviaQA dataset, outperforming more
complex, recurrent architectures.
| null |
http://arxiv.org/abs/1711.00894v2
|
http://arxiv.org/pdf/1711.00894v2.pdf
|
ICLR 2018 1
|
[
"Swabha Swayamdipta",
"Ankur P. Parikh",
"Tom Kwiatkowski"
] |
[
"Reading Comprehension",
"TriviaQA"
] | 2017-11-02T00:00:00 |
https://openreview.net/forum?id=HyRnez-RW
|
https://openreview.net/pdf?id=HyRnez-RW
|
multi-mention-learning-for-reading-1
| null |
[] |
https://paperswithcode.com/paper/unsupervised-depth-estimation-3d-face
|
1803.09202
| null | null |
Unsupervised Depth Estimation, 3D Face Rotation and Replacement
|
We present an unsupervised approach for learning to estimate three
dimensional (3D) facial structure from a single image while also predicting 3D
viewpoint transformations that match a desired pose and facial geometry. We
achieve this by inferring the depth of facial keypoints of an input image in an
unsupervised manner, without using any form of ground-truth depth information.
We show how it is possible to use these depths as intermediate computations
within a new backpropable loss to predict the parameters of a 3D affine
transformation matrix that maps inferred 3D keypoints of an input face to the
corresponding 2D keypoints on a desired target facial geometry or pose. Our
resulting approach, called DepthNets, can therefore be used to infer plausible
3D transformations from one face pose to another, allowing faces to be
frontalized, transformed into 3D models or even warped to another pose and
facial geometry. Lastly, we identify certain shortcomings with our formulation,
and explore adversarial image translation techniques as a post-processing step
to re-synthesize complete head shots for faces re-targeted to different poses
or identities.
|
We present an unsupervised approach for learning to estimate three dimensional (3D) facial structure from a single image while also predicting 3D viewpoint transformations that match a desired pose and facial geometry.
|
http://arxiv.org/abs/1803.09202v5
|
http://arxiv.org/pdf/1803.09202v5.pdf
|
NeurIPS 2018 12
|
[
"Joel Ruben Antony Moniz",
"Christopher Beckham",
"Simon Rajotte",
"Sina Honari",
"Christopher Pal"
] |
[
"Depth Estimation",
"Translation"
] | 2018-03-25T00:00:00 |
http://papers.nips.cc/paper/8181-unsupervised-depth-estimation-3d-face-rotation-and-replacement
|
http://papers.nips.cc/paper/8181-unsupervised-depth-estimation-3d-face-rotation-and-replacement.pdf
|
unsupervised-depth-estimation-3d-face-1
| null |
[] |
https://paperswithcode.com/paper/learning-time-sensitive-strategies-in-space
|
1805.06824
| null | null |
Learning Time-Sensitive Strategies in Space Fortress
|
Although there has been remarkable progress and impressive performance on
reinforcement learning (RL) on Atari games, there are many problems with
challenging characteristics that have not yet been explored in Deep Learning
for RL. These include reward sparsity, abrupt context-dependent reversals of
strategy and time-sensitive game play. In this paper, we present Space
Fortress, a game that incorporates all these characteristics and experimentally
show that the presence of any of these renders state of the art Deep RL
algorithms incapable of learning. Then, we present our enhancements to an
existing algorithm and demonstrate large performance gains from each enhancement
through an ablation study. We discuss how each of these enhancements helps
and also argue that appropriate transfer learning boosts performance.
|
Although there has been remarkable progress and impressive performance on reinforcement learning (RL) on Atari games, there are many problems with challenging characteristics that have not yet been explored in Deep Learning for RL.
|
http://arxiv.org/abs/1805.06824v4
|
http://arxiv.org/pdf/1805.06824v4.pdf
| null |
[
"Akshat Agarwal",
"Ryan Hope",
"Katia Sycara"
] |
[
"Atari Games",
"Reinforcement Learning",
"Reinforcement Learning (RL)",
"Space Fortress",
"Transfer Learning"
] | 2018-05-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/multi-resolution-3d-convolutional-neural
|
1805.12254
| null | null |
Multi-level 3D CNN for Learning Multi-scale Spatial Features
|
3D object recognition accuracy can be improved by learning the multi-scale spatial features from 3D spatial geometric representations of objects such as point clouds, 3D models, surfaces, and RGB-D data. Current deep learning approaches learn such features either using structured data representations (voxel grids and octrees) or from unstructured representations (graphs and point clouds). Learning features from such structured representations is limited by the restriction on resolution and tree depth, while unstructured representations create a challenge due to non-uniformity among data samples. In this paper, we propose an end-to-end multi-level learning approach on a multi-level voxel grid to overcome these drawbacks. To demonstrate the utility of the proposed multi-level learning, we use a multi-level voxel representation of 3D objects to perform object recognition. The multi-level voxel representation consists of a coarse voxel grid that contains volumetric information of the 3D object. In addition, each voxel in the coarse grid that contains a portion of the object boundary is subdivided into multiple fine-level voxel grids. The performance of our multi-level learning algorithm for object recognition is comparable to dense voxel representations while using significantly lower memory.
|
The multi-level voxel representation consists of a coarse voxel grid that contains volumetric information of the 3D object.
|
https://arxiv.org/abs/1805.12254v2
|
https://arxiv.org/pdf/1805.12254v2.pdf
| null |
[
"Sambit Ghadai",
"Xian Lee",
"Aditya Balu",
"Soumik Sarkar",
"Adarsh Krishnamurthy"
] |
[
"3D Object Recognition",
"Object",
"Object Recognition"
] | 2018-05-30T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/improving-dialogue-act-classification-for
|
1806.00522
| null | null |
Improving Dialogue Act Classification for Spontaneous Arabic Speech and Instant Messages at Utterance Level
|
The ability to model and automatically detect dialogue act is an important
step toward understanding spontaneous speech and Instant Messages. However, it
has been difficult to infer a dialogue act from a surface utterance because it
highly depends on the context of the utterance and speaker linguistic
knowledge; especially in Arabic dialects. This paper proposes a statistical
dialogue analysis model to recognize an utterance's dialogue acts using a
multi-class hierarchical structure. The model can automatically acquire
probabilistic discourse knowledge from a dialogue corpus that was collected and
annotated manually from multi-genre Egyptian call-centers. Extensive
experiments were conducted using Support Vector Machines classifier to evaluate
the system performance. The results, attained in terms of an average F-measure
score of 0.912, showed that the proposed approach moderately improved
F-measure by approximately 20%.
| null |
http://arxiv.org/abs/1806.00522v1
|
http://arxiv.org/pdf/1806.00522v1.pdf
|
LREC 2018 5
|
[
"AbdelRahim Elmadany",
"Sherif Abdou",
"Mervat Gheith"
] |
[
"Dialogue Act Classification",
"General Classification"
] | 2018-05-30T00:00:00 |
https://aclanthology.org/L18-1020
|
https://aclanthology.org/L18-1020.pdf
|
improving-dialogue-act-classification-for-2
| null |
[] |
https://paperswithcode.com/paper/mining-gold-from-implicit-models-to-improve
|
1805.12244
| null | null |
Mining gold from implicit models to improve likelihood-free inference
|
Simulators often provide the best description of real-world phenomena. However, they also lead to challenging inverse problems because the density they implicitly define is often intractable. We present a new suite of simulation-based inference techniques that go beyond the traditional Approximate Bayesian Computation approach, which struggles in a high-dimensional setting, and extend methods that use surrogate models based on neural networks. We show that additional information, such as the joint likelihood ratio and the joint score, can often be extracted from simulators and used to augment the training data for these surrogate models. Finally, we demonstrate that these new techniques are more sample efficient and provide higher-fidelity inference than traditional methods.
|
Simulators often provide the best description of real-world phenomena.
|
https://arxiv.org/abs/1805.12244v4
|
https://arxiv.org/pdf/1805.12244v4.pdf
| null |
[
"Johann Brehmer",
"Gilles Louppe",
"Juan Pavez",
"Kyle Cranmer"
] |
[] | 2018-05-30T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/novel-video-prediction-for-large-scale-scene
|
1805.12243
| null | null |
Novel Video Prediction for Large-scale Scene using Optical Flow
|
Making predictions of future frames is a critical challenge in autonomous
driving research. Most of the existing methods for video prediction attempt to
generate future frames in simple and fixed scenes. In this paper, we propose a
novel and effective optical flow conditioned method for the task of video
prediction with an application to complex urban scenes. In contrast with
previous work, the prediction model only requires video sequences and optical
flow sequences for training and testing. Our method uses the rich
spatial-temporal features in video sequences. The method takes advantage of the
motion information extracting from optical flow maps between neighbor images as
well as previous images. Empirical evaluations on the KITTI dataset and the
Cityscapes dataset demonstrate the effectiveness of our method.
| null |
http://arxiv.org/abs/1805.12243v1
|
http://arxiv.org/pdf/1805.12243v1.pdf
| null |
[
"Henglai Wei",
"Xiaochuan Yin",
"Penghong Lin"
] |
[
"Autonomous Driving",
"Optical Flow Estimation",
"Prediction",
"Video Prediction"
] | 2018-05-30T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/how-important-is-a-neuron
|
1805.12233
| null | null |
How Important Is a Neuron?
|
The problem of attributing a deep network's prediction to its
\emph{input/base} features is well-studied. We introduce the notion of
\emph{conductance} to extend the notion of attribution to understanding the
importance of \emph{hidden} units.
Informally, the conductance of a hidden unit of a deep network is the
\emph{flow} of attribution via this hidden unit. We use conductance to
understand the importance of a hidden unit to the prediction for a specific
input, or over a set of inputs. We evaluate the effectiveness of conductance in
multiple ways, including theoretical properties, ablation studies, and a
feature selection task. The empirical evaluations are done using the Inception
network over ImageNet data, and a sentiment analysis network over reviews. In
both cases, we demonstrate the effectiveness of conductance in identifying
interesting insights about the internal workings of these networks.
|
Informally, the conductance of a hidden unit of a deep network is the \emph{flow} of attribution via this hidden unit.
|
http://arxiv.org/abs/1805.12233v1
|
http://arxiv.org/pdf/1805.12233v1.pdf
| null |
[
"Kedar Dhamdhere",
"Mukund Sundararajan",
"Qiqi Yan"
] |
[
"feature selection",
"Sentiment Analysis"
] | 2018-05-30T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/state-space-gaussian-processes-with-non
|
1802.04846
| null | null |
State Space Gaussian Processes with Non-Gaussian Likelihood
|
We provide a comprehensive overview and tooling for GP modeling with
non-Gaussian likelihoods using state space methods. The state space formulation
allows for solving one-dimensional GP models in $\mathcal{O}(n)$ time and
memory complexity. While existing literature has focused on the connection
between GP regression and state space methods, the computational primitives
allowing for inference using general likelihoods in combination with the
Laplace approximation (LA), variational Bayes (VB), and assumed density
filtering (ADF, a.k.a. single-sweep expectation propagation, EP) schemes has
been largely overlooked. We present means of combining the efficient
$\mathcal{O}(n)$ state space methodology with existing inference methods. We
extend existing methods, and provide unifying code implementing all approaches.
| null |
http://arxiv.org/abs/1802.04846v5
|
http://arxiv.org/pdf/1802.04846v5.pdf
|
ICML 2018 7
|
[
"Hannes Nickisch",
"Arno Solin",
"Alexander Grigorievskiy"
] |
[
"Gaussian Processes",
"regression"
] | 2018-02-13T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2065
|
http://proceedings.mlr.press/v80/nickisch18a/nickisch18a.pdf
|
state-space-gaussian-processes-with-non-1
| null |
[] |
https://paperswithcode.com/paper/tiling-and-stitching-segmentation-output-for
|
1805.12219
| null | null |
Tiling and Stitching Segmentation Output for Remote Sensing: Basic Challenges and Recommendations
|
In this work we consider the application of convolutional neural networks
(CNNs) for pixel-wise labeling (a.k.a., semantic segmentation) of remote
sensing imagery (e.g., aerial color or hyperspectral imagery). Remote sensing
imagery is usually stored in the form of very large images, referred to as
"tiles", which are too large to be segmented directly using most CNNs and their
associated hardware. As a result, during label inference, smaller sub-images,
called "patches", are processed individually and then "stitched" (concatenated)
back together to create a tile-sized label map. This approach suffers from
computational inefficiency and can result in discontinuities at output
boundaries. We propose a simple alternative approach in which the input size of
the CNN is dramatically increased only during label inference. This does not
avoid stitching altogether, but substantially mitigates its limitations. We
evaluate the performance of the proposed approach against a conventional
stitching approach using two popular segmentation CNN models and two
large-scale remote sensing imagery datasets. The results suggest that the
proposed approach substantially reduces label inference time, while also
yielding modest overall label accuracy increases. This approach contributed to
our winning entry (overall performance) in the INRIA building labeling
competition.
| null |
http://arxiv.org/abs/1805.12219v3
|
http://arxiv.org/pdf/1805.12219v3.pdf
| null |
[
"Bohao Huang",
"Daniel Reichman",
"Leslie M. Collins",
"Kyle Bradbury",
"Jordan M. Malof"
] |
[
"Segmentation Of Remote Sensing Imagery",
"Semantic Segmentation"
] | 2018-05-30T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/recurrent-deep-embedding-networks-for
|
1805.12218
| null | null |
Convolutional Embedded Networks for Population Scale Clustering and Bio-ancestry Inferencing
|
The study of genetic variants can help find correlating population groups to identify cohorts that are predisposed to common diseases and explain differences in disease susceptibility and how patients react to drugs. Machine learning algorithms are increasingly being applied to identify interacting GVs to understand their complex phenotypic traits. Since the performance of a learning algorithm not only depends on the size and nature of the data but also on the quality of the underlying representation, deep neural networks can learn non-linear mappings that allow transforming GVs data into more clustering- and classification-friendly representations than manual feature selection. In this paper, we propose convolutional embedded networks, in which we combine two DNN architectures called convolutional embedded clustering and convolutional autoencoder classifier for clustering individuals and predicting geographic ethnicity based on GVs, respectively. We employed CAE-based representation learning on 95 million GVs from the 1000 Genomes and Simons Genome Diversity projects. Quantitative and qualitative analyses with a focus on accuracy and scalability show that our approach outperforms state-of-the-art approaches such as VariantSpark and ADMIXTURE. In particular, CEC can cluster targeted population groups in 22 hours with an adjusted Rand index of 0.915, a normalized mutual information of 0.92, and a clustering accuracy of 89%. Contrarily, the CAE classifier can predict the geographic ethnicity of unknown samples with F1 and Matthews correlation coefficient (MCC) scores of 0.9004 and 0.8245, respectively. To provide interpretations of the predictions, we identify significant biomarkers using gradient boosted trees (GBT) and SHAP. Overall, our approach is transparent and faster than the baseline methods, and scalable for 5% to 100% of the full human genome.
|
The study of genetic variants can help find correlating population groups to identify cohorts that are predisposed to common diseases and explain differences in disease susceptibility and how patients react to drugs.
|
https://arxiv.org/abs/1805.12218v2
|
https://arxiv.org/pdf/1805.12218v2.pdf
| null |
[
"Md. Rezaul Karim",
"Michael Cochez",
"Achille Zappa",
"Ratnesh Sahay",
"Oya Beyan",
"Dietrich-Rebholz Schuhmann",
"Stefan Decker"
] |
[
"Clustering",
"feature selection",
"Representation Learning"
] | 2018-05-30T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/slundberg/shap",
"description": "**SHAP**, or **SHapley Additive exPlanations**, is a game theoretic approach to explain the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions. Shapley values are approximated using Kernel SHAP, which uses a weighting kernel for the approximation, and DeepSHAP, which uses DeepLift to approximate them.",
"full_name": "Shapley Additive Explanations",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Interpretability Methods** seek to explain the predictions made by neural networks by introducing mechanisms to enduce or enforce interpretability. For example, LIME approximates the neural network with a locally interpretable model. Below you can find a continuously updating list of interpretability methods.",
"name": "Interpretability",
"parent": null
},
"name": "SHAP",
"source_title": "A Unified Approach to Interpreting Model Predictions",
"source_url": "http://arxiv.org/abs/1705.07874v2"
}
] |
https://paperswithcode.com/paper/a-web-scale-system-for-scientific-knowledge
|
1805.12216
| null | null |
A Web-scale system for scientific knowledge exploration
|
To enable efficient exploration of Web-scale scientific knowledge, it is
necessary to organize scientific publications into a hierarchical concept
structure. In this work, we present a large-scale system to (1) identify
hundreds of thousands of scientific concepts, (2) tag these identified concepts
to hundreds of millions of scientific publications by leveraging both text and
graph structure, and (3) build a six-level concept hierarchy with a
subsumption-based model. The system builds the most comprehensive cross-domain
scientific concept ontology published to date, with more than 200 thousand
concepts and over one million relationships.
| null |
http://arxiv.org/abs/1805.12216v1
|
http://arxiv.org/pdf/1805.12216v1.pdf
|
ACL 2018 7
|
[
"Zhihong Shen",
"Hao Ma",
"Kuansan Wang"
] |
[
"Efficient Exploration",
"TAG"
] | 2018-05-30T00:00:00 |
https://aclanthology.org/P18-4015
|
https://aclanthology.org/P18-4015.pdf
|
a-web-scale-system-for-scientific-knowledge-1
| null |
[] |
https://paperswithcode.com/paper/investigating-the-impact-of-data-volume-and
|
1712.04008
| null | null |
Investigating the Impact of Data Volume and Domain Similarity on Transfer Learning Applications
|
Transfer learning allows practitioners to recognize and apply knowledge
learned in previous tasks (source task) to new tasks or new domains (target
task), which share some commonality. The two important factors impacting the
performance of transfer learning models are: (a) the size of the target
dataset, and (b) the similarity in distribution between source and target
domains. Thus far, there has been little investigation into just how important
these factors are. In this paper, we investigate the impact of target dataset
size and source/target domain similarity on model performance through a series
of experiments. We find that more data is always beneficial, and model
performance improves linearly with the log of data size, until we are out of
data. As source/target domains differ, more data is required and fine tuning
will render better performance than feature extraction. When source/target
domains are similar and data size is small, fine tuning and feature extraction
renders equivalent performance. Our hope is that by beginning this quantitative
investigation on the effect of data volume and domain similarity in transfer
learning we might inspire others to explore the significance of data in
developing more accurate statistical models.
| null |
http://arxiv.org/abs/1712.04008v4
|
http://arxiv.org/pdf/1712.04008v4.pdf
| null |
[
"Michael Bernico",
"Yuntao Li",
"Dingchao Zhang"
] |
[
"Transfer Learning"
] | 2017-12-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/too-fast-causal-inference-under-causal
|
1806.00352
| null | null |
Too Fast Causal Inference under Causal Insufficiency
|
Causally insufficient structures (models with latent or hidden variables, or
with confounding etc.) of joint probability distributions have been subject of
intense study not only in statistics, but also in various AI systems. In AI,
belief networks, being representations of joint probability distribution with
an underlying directed acyclic graph structure, are paid special attention due
to the fact that efficient reasoning (uncertainty propagation) methods have
been developed for belief network structures. Algorithms have been therefore
developed to acquire the belief network structure from data. As artifacts due
to variable hiding negatively influence the performance of derived belief
networks, models with latent variables have been studied and several algorithms
for learning belief network structure under causal insufficiency have also been
developed.
Regrettably, some of them are already known to be erroneous (e.g., the IC
algorithm of [Pearl:Verma:91]). This paper is devoted to another algorithm, the
Fast Causal Inference (FCI) Algorithm of [Spirtes:93]. It is proven by a
specially constructed example that this algorithm, as it stands in
[Spirtes:93], is also erroneous. Fundamental reason for failure of this
algorithm is the temporary introduction of non-real links between nodes of the
network with the intention of later removal. While for trivial dependency
structures these non-real links may be actually removed, this may not be the
case for complex ones, e.g. for the case described in this paper. A remedy of
this failure is proposed.
| null |
http://arxiv.org/abs/1806.00352v1
|
http://arxiv.org/pdf/1806.00352v1.pdf
| null |
[
"Mieczysław A. Kłopotek"
] |
[
"Causal Inference"
] | 2018-05-30T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "Causal inference is the process of drawing a conclusion about a causal connection based on the conditions of the occurrence of an effect. The main difference between causal inference and inference of association is that the former analyzes the response of the effect variable when the cause is changed.",
"full_name": "Causal inference",
"introduced_year": 2000,
"main_collection": null,
"name": "Causal inference",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/applications-of-trajectory-data-from-the
|
1708.07193
| null | null |
Applications of Trajectory Data from the Perspective of a Road Transportation Agency: Literature Review and Maryland Case Study
|
Transportation agencies have an opportunity to leverage
increasingly-available trajectory datasets to improve their analyses and
decision-making processes. However, this data is typically purchased from
vendors, which means agencies must understand its potential benefits beforehand
in order to properly assess its value relative to the cost of acquisition.
While the literature concerned with trajectory data is rich, it is naturally
fragmented and focused on technical contributions in niche areas, which makes
it difficult for government agencies to assess its value across different
transportation domains. To overcome this issue, the current paper explores
trajectory data from the perspective of a road transportation agency interested
in acquiring trajectories to enhance its analyses. The paper provides a
literature review illustrating applications of trajectory data in six areas of
road transportation systems analysis: demand estimation, modeling human
behavior, designing public transit, traffic performance measurement and
prediction, environment and safety. In addition, it visually explores 20
million GPS traces in Maryland, illustrating existing and suggesting new
applications of trajectory data.
| null |
http://arxiv.org/abs/1708.07193v2
|
http://arxiv.org/pdf/1708.07193v2.pdf
| null |
[
"Nikola Marković",
"Przemysław Sekuła",
"Zachary Vander Laan",
"Gennady Andrienko",
"Natalia Andrienko"
] |
[
"Decision Making"
] | 2017-08-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/an-extended-beta-elliptic-model-and-fuzzy
|
1804.05661
| null | null |
An Extended Beta-Elliptic Model and Fuzzy Elementary Perceptual Codes for Online Multilingual Writer Identification using Deep Neural Network
|
The ability to identify a document's author opens up more possibilities
for using that document for various purposes. In this paper, we present a new
effective biometric writer identification system from online handwriting. The
system first preprocesses and segments the online handwriting
into a sequence of Beta strokes. Then, from each stroke, we
extract a set of static and dynamic features from a newly proposed model that we
call the Extended Beta-Elliptic model, and from the Fuzzy Elementary Perceptual
Codes. Next, all segments composed of N consecutive strokes are
categorized into groups and subgroups according to their position and their
geometric characteristics. Finally, a Deep Neural Network is used as the classifier.
Experimental results reveal that the proposed system achieves interesting
results compared to those of existing writer identification systems on
Latin and Arabic scripts.
| null |
http://arxiv.org/abs/1804.05661v4
|
http://arxiv.org/pdf/1804.05661v4.pdf
| null |
[
"Thameur Dhieb",
"Sourour Njah",
"Houcine Boubaker",
"Wael Ouarda",
"Mounir Ben Ayed",
"Adel M. ALIMI"
] |
[
"Position"
] | 2018-04-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/extracting-scientific-figures-with-distantly
|
1804.02445
| null | null |
Extracting Scientific Figures with Distantly Supervised Neural Networks
|
Non-textual components such as charts, diagrams and tables provide key
information in many scientific documents, but the lack of large labeled
datasets has impeded the development of data-driven methods for scientific
figure extraction. In this paper, we induce high-quality training labels for
the task of figure extraction in a large number of scientific documents, with
no human intervention. To accomplish this we leverage the auxiliary data
provided in two large web collections of scientific documents (arXiv and
PubMed) to locate figures and their associated captions in the rasterized PDF.
We share the resulting dataset of over 5.5 million induced labels---4,000 times
larger than the previous largest figure extraction dataset---with an average
precision of 96.8%, to enable the development of modern data-driven methods for
this task. We use this dataset to train a deep neural network for end-to-end
figure detection, yielding a model that can be more easily extended to new
domains compared to previous work. The model was successfully deployed in
Semantic Scholar, a large-scale academic search engine, and used to extract
figures in 13 million scientific documents.
|
Non-textual components such as charts, diagrams and tables provide key information in many scientific documents, but the lack of large labeled datasets has impeded the development of data-driven methods for scientific figure extraction.
|
http://arxiv.org/abs/1804.02445v2
|
http://arxiv.org/pdf/1804.02445v2.pdf
| null |
[
"Noah Siegel",
"Nicholas Lourie",
"Russell Power",
"Waleed Ammar"
] |
[] | 2018-04-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/fine-pruning-defending-against-backdooring
|
1805.12185
| null | null |
Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks
|
Deep neural networks (DNNs) provide excellent performance across a wide range
of classification tasks, but their training requires high computational
resources and is often outsourced to third parties. Recent work has shown that
outsourced training introduces the risk that a malicious trainer will return a
backdoored DNN that behaves normally on most inputs but causes targeted
misclassifications or degrades the accuracy of the network when a trigger known
only to the attacker is present. In this paper, we provide the first effective
defenses against backdoor attacks on DNNs. We implement three backdoor attacks
from prior work and use them to investigate two promising defenses, pruning and
fine-tuning. We show that neither, by itself, is sufficient to defend against
sophisticated attackers. We then evaluate fine-pruning, a combination of
pruning and fine-tuning, and show that it successfully weakens or even
eliminates the backdoors, i.e., in some cases reducing the attack success rate
to 0% with only a 0.4% drop in accuracy for clean (non-triggering) inputs. Our
work provides the first step toward defenses against backdoor attacks in deep
neural networks.
|
Our work provides the first step toward defenses against backdoor attacks in deep neural networks.
|
http://arxiv.org/abs/1805.12185v1
|
http://arxiv.org/pdf/1805.12185v1.pdf
| null |
[
"Kang Liu",
"Brendan Dolan-Gavitt",
"Siddharth Garg"
] |
[] | 2018-05-30T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Pruning",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Model Compression",
"parent": null
},
"name": "Pruning",
"source_title": "Pruning Filters for Efficient ConvNets",
"source_url": "http://arxiv.org/abs/1608.08710v3"
}
] |
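The pruning half of fine-pruning described in the abstract above — removing neurons that stay dormant on clean inputs, where a backdoor trigger could hide — can be sketched as follows. This is a minimal illustration with a made-up single-layer weight matrix, not the paper's actual implementation (which prunes convolutional channels of a full DNN and then fine-tunes):

```python
import numpy as np

def prune_dormant_neurons(weights, activations, fraction):
    """Zero out the output neurons whose mean activation on clean inputs
    is lowest. `weights` is an (n_out, n_in) layer weight matrix,
    `activations` an (n_samples, n_out) record of that layer's outputs
    on clean data, `fraction` the share of neurons to prune."""
    mean_act = activations.mean(axis=0)
    n_prune = int(len(mean_act) * fraction)
    # Neurons least active on clean data contribute little to clean
    # accuracy but may carry backdoor behavior -- prune them first.
    idx = np.argsort(mean_act)[:n_prune]
    pruned = weights.copy()
    pruned[idx, :] = 0.0
    return pruned, idx

# Toy demo: neuron 2 is nearly silent on clean inputs, so it is pruned.
W = np.ones((4, 3))
A = np.array([[1.0, 2.0, 0.1, 3.0],
              [1.0, 2.0, 0.0, 3.0]])
pruned_W, pruned_idx = prune_dormant_neurons(W, A, 0.25)
```

After this step, fine-pruning would fine-tune the pruned network on clean data to recover accuracy and further weaken any remaining backdoor.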
https://paperswithcode.com/paper/context-exploitation-using-hierarchical
|
1805.12183
| null | null |
Context Exploitation using Hierarchical Bayesian Models
|
We consider the problem of how to improve automatic target recognition by
fusing the naive sensor-level classification decisions with "intuition," or
context, in a mathematically principled way. This is a general approach that is
compatible with many definitions of context, but for specificity, we consider
context as co-occurrence in imagery. In particular, we consider images that
contain multiple objects identified at various confidence levels. We learn the
patterns of co-occurrence in each context, then use these patterns as
hyper-parameters for a Hierarchical Bayesian Model. The result is that
low-confidence sensor classification decisions can be dramatically improved by
fusing those readings with context. We further use hyperpriors to address the
case where multiple contexts may be appropriate. We also consider the Bayesian
Network, an alternative to the Hierarchical Bayesian Model, which is
computationally more efficient but assumes that context and sensor readings are
uncorrelated.
| null |
http://arxiv.org/abs/1805.12183v1
|
http://arxiv.org/pdf/1805.12183v1.pdf
| null |
[
"Christopher A. George",
"Pranab Banerjee",
"Kendra E. Moore"
] |
[
"General Classification",
"Specificity"
] | 2018-05-30T00:00:00 | null | null | null | null |
[] |
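The fusion idea in the abstract above — improving low-confidence sensor classifications with learned co-occurrence context — can be caricatured as a single Bayes update. This is a deliberate simplification: the paper's Hierarchical Bayesian Model treats co-occurrence patterns as hyper-parameters (with hyperpriors over multiple contexts), whereas here they are collapsed into one fixed prior, and all probabilities are invented for illustration:

```python
import numpy as np

def fuse_with_context(sensor_probs, context_prior):
    """Fuse per-class sensor confidences with a context prior
    (e.g. class frequencies learned from co-occurrence in imagery)
    via an elementwise Bayes-style product, renormalized."""
    posterior = np.asarray(sensor_probs, dtype=float) * \
                np.asarray(context_prior, dtype=float)
    return posterior / posterior.sum()

# A weak sensor reading over three classes, sharpened by context
# in which class 0 co-occurs far more often with the scene.
fused = fuse_with_context([0.4, 0.35, 0.25], [0.7, 0.2, 0.1])
```

The uncorrelated-context assumption mentioned for the Bayesian Network alternative corresponds to exactly this kind of independent elementwise product.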