paper_url (string, len 35-81) | arxiv_id (string, len 6-35, nullable) | nips_id (float64) | openreview_id (string, len 9-93, nullable) | title (string, len 1-1.02k, nullable) | abstract (string, len 0-56.5k, nullable) | short_abstract (string, len 0-1.95k, nullable) | url_abs (string, len 16-996) | url_pdf (string, len 16-996, nullable) | proceeding (string, len 7-1.03k, nullable) | authors (list, len 0-3.31k) | tasks (list, len 0-147) | date (timestamp[ns], 1951-09-01 to 2222-12-22, nullable) | conference_url_abs (string, len 16-199, nullable) | conference_url_pdf (string, len 21-200, nullable) | conference (string, len 2-47, nullable) | reproduces_paper (string, 22 classes) | methods (list, len 0-7.5k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://paperswithcode.com/paper/learning-representations-and-generative
|
1707.02392
| null |
BJInEZsTb
|
Learning Representations and Generative Models for 3D Point Clouds
|
Three-dimensional geometric data offer an excellent domain for studying
representation learning and generative modeling. In this paper, we look at
geometric data represented as point clouds. We introduce a deep AutoEncoder
(AE) network with state-of-the-art reconstruction quality and generalization
ability. The learned representations outperform existing methods on 3D
recognition tasks and enable shape editing via simple algebraic manipulations,
such as semantic part editing, shape analogies and shape interpolation, as well
as shape completion. We perform a thorough study of different generative models
including GANs operating on the raw point clouds, significantly improved GANs
trained in the fixed latent space of our AEs, and Gaussian Mixture Models
(GMMs). To quantitatively evaluate generative models we introduce measures of
sample fidelity and diversity based on matchings between sets of point clouds.
Interestingly, our evaluation of generalization, fidelity and diversity reveals
that GMMs trained in the latent space of our AEs yield the best results
overall.
|
Three-dimensional geometric data offer an excellent domain for studying representation learning and generative modeling.
|
http://arxiv.org/abs/1707.02392v3
|
http://arxiv.org/pdf/1707.02392v3.pdf
|
ICML 2018 7
|
[
"Panos Achlioptas",
"Olga Diamanti",
"Ioannis Mitliagkas",
"Leonidas Guibas"
] |
[
"Diversity",
"Representation Learning"
] | 2017-07-08T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1917
|
http://proceedings.mlr.press/v80/achlioptas18a/achlioptas18a.pdf
|
learning-representations-and-generative-2
| null |
[] |
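As a quick illustration of the recipe this abstract reports as best (a Gaussian mixture fitted in the fixed latent space of a trained autoencoder), here is a minimal sketch. The `encoder` and `decoder` callables are hypothetical stand-ins for the trained AE halves, not the authors' code.

```python
# Hedged sketch: fit a GMM on AE latent codes, then sample and decode.
import numpy as np
from sklearn.mixture import GaussianMixture

def sample_shapes_via_latent_gmm(encoder, decoder, train_clouds,
                                 n_components=32, n_samples=10):
    latents = np.stack([encoder(pc) for pc in train_clouds])  # (N, latent_dim)
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(latents)
    z, _ = gmm.sample(n_samples)       # draw latent codes from the fitted GMM
    return [decoder(zi) for zi in z]   # decode each code into a point cloud
```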
https://paperswithcode.com/paper/iso-standard-domain-independent-dialogue-act
|
1806.04327
| null | null |
ISO-Standard Domain-Independent Dialogue Act Tagging for Conversational Agents
|
Dialogue Act (DA) tagging is crucial for spoken language understanding
systems, as it provides a general representation of speakers' intents, not
bound to a particular dialogue system. Unfortunately, publicly available data
sets with DA annotation are all based on different annotation schemes and thus
incompatible with each other. Moreover, their schemes often do not cover all
aspects necessary for open-domain human-machine interaction. In this paper, we
propose a methodology to map several publicly available corpora to a subset of
the ISO standard, in order to create a large task-independent training corpus
for DA classification. We show the feasibility of using this corpus to train a
domain-independent DA tagger by testing it on out-of-domain conversational data,
and argue for the importance of training on multiple corpora to achieve robustness
across different DA categories.
|
Dialogue Act (DA) tagging is crucial for spoken language understanding systems, as it provides a general representation of speakers' intents, not bound to a particular dialogue system.
|
http://arxiv.org/abs/1806.04327v1
|
http://arxiv.org/pdf/1806.04327v1.pdf
|
COLING 2018 8
|
[
"Stefano Mezza",
"Alessandra Cervone",
"Giuliano Tortoreto",
"Evgeny A. Stepanov",
"Giuseppe Riccardi"
] |
[
"General Classification",
"Spoken Language Understanding"
] | 2018-06-12T00:00:00 |
https://aclanthology.org/C18-1300
|
https://aclanthology.org/C18-1300.pdf
|
iso-standard-domain-independent-dialogue-act-2
| null |
[] |
https://paperswithcode.com/paper/differentiable-compositional-kernel-learning
|
1806.04326
| null | null |
Differentiable Compositional Kernel Learning for Gaussian Processes
|
The generalization properties of Gaussian processes depend heavily on the
choice of kernel, and this choice remains a dark art. We present the Neural
Kernel Network (NKN), a flexible family of kernels represented by a neural
network. The NKN architecture is based on the composition rules for kernels, so
that each unit of the network corresponds to a valid kernel. It can compactly
approximate compositional kernel structures such as those used by the Automatic
Statistician (Lloyd et al., 2014), but because the architecture is
differentiable, it is end-to-end trainable with gradient-based optimization. We
show that the NKN is universal for the class of stationary kernels. Empirically,
we demonstrate the pattern discovery and extrapolation abilities of the NKN on several
tasks that depend crucially on identifying the underlying structure, including
time series and texture extrapolation, as well as Bayesian optimization.
|
The NKN architecture is based on the composition rules for kernels, so that each unit of the network corresponds to a valid kernel.
|
http://arxiv.org/abs/1806.04326v3
|
http://arxiv.org/pdf/1806.04326v3.pdf
|
ICML 2018 7
|
[
"Shengyang Sun",
"Guodong Zhang",
"Chaoqi Wang",
"Wenyuan Zeng",
"Jiaman Li",
"Roger Grosse"
] |
[
"Bayesian Optimization",
"Gaussian Processes",
"Time Series",
"Time Series Analysis",
"valid"
] | 2018-06-12T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2378
|
http://proceedings.mlr.press/v80/sun18e/sun18e.pdf
|
differentiable-compositional-kernel-learning-1
| null |
[] |
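A small illustration of the kernel-composition closure rules the NKN abstract builds on: nonnegative sums and elementwise products of valid kernels are themselves valid kernels, so every unit of the network stays a kernel. The RBF/periodic primitives and weights below are arbitrary examples, not the paper's architecture.

```python
# Illustrative only: composing valid kernels by sums and products.
import numpy as np

def rbf(x, y, ell=1.0):
    return np.exp(-0.5 * np.subtract.outer(x, y) ** 2 / ell ** 2)

def periodic(x, y, period=1.0):
    return np.exp(-2.0 * np.sin(np.pi * np.subtract.outer(x, y) / period) ** 2)

def nkn_like_unit(x, y, w1=0.7, w2=0.3):
    k_sum = w1 * rbf(x, y) + w2 * periodic(x, y)  # nonnegative sum: valid kernel
    k_prod = rbf(x, y) * periodic(x, y)           # elementwise product: valid
    return k_sum * k_prod                         # composing again stays valid

K = nkn_like_unit(np.linspace(0, 1, 5), np.linspace(0, 1, 5))  # 5x5 PSD Gram
```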
https://paperswithcode.com/paper/augmenting-stream-constraint-programming-with
|
1806.04325
| null | null |
Augmenting Stream Constraint Programming with Eventuality Conditions
|
Stream constraint programming is a recent addition to the family of
constraint programming frameworks, where variable domains are sets of infinite
streams over finite alphabets. Previous works showed promising results for its
applicability to real-world planning and control problems. In this paper,
motivated by the modelling of planning applications, we improve the
expressiveness of the framework by introducing 1) the "until" constraint, a new
construct that is adapted from Linear Temporal Logic and 2) the @ operator on
streams, a syntactic sugar for which we provide a more efficient solving
algorithm over simple desugaring. For both constructs, we propose corresponding
novel solving algorithms and prove their correctness. We present competitive
experimental results on the Missionaries and Cannibals logic puzzle and a
standard path planning application on the grid, by comparing with Apt and
Brand's method for verifying eventuality conditions using a CP approach.
| null |
http://arxiv.org/abs/1806.04325v2
|
http://arxiv.org/pdf/1806.04325v2.pdf
| null |
[
"Jasper C. H. Lee",
"Jimmy H. M. Lee",
"Allen Z. Zhong"
] |
[] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/end-to-end-learning-of-energy-constrained
|
1806.04321
| null |
BylBr3C9K7
|
Energy-Constrained Compression for Deep Neural Networks via Weighted Sparse Projection and Layer Input Masking
|
Deep Neural Networks (DNNs) are increasingly deployed in highly energy-constrained environments such as autonomous drones and wearable devices while at the same time having to operate in real time. Therefore, reducing the energy consumption has become a major design consideration in DNN training. This paper proposes the first end-to-end DNN training framework that provides quantitative energy consumption guarantees via weighted sparse projection and input masking. The key idea is to formulate the DNN training as an optimization problem in which the energy budget imposes a previously unconsidered optimization constraint. We integrate the quantitative DNN energy estimation into the DNN training process to assist the constrained optimization. We prove that an approximate algorithm can be used to efficiently solve the optimization problem. Compared to the best prior energy-saving methods, our framework trains DNNs that provide higher accuracies under the same or lower energy budgets. Code is publicly available.
|
Deep Neural Networks (DNNs) are increasingly deployed in highly energy-constrained environments such as autonomous drones and wearable devices while at the same time having to operate in real time.
|
https://arxiv.org/abs/1806.04321v3
|
https://arxiv.org/pdf/1806.04321v3.pdf
|
ICLR 2019 5
|
[
"Haichuan Yang",
"Yuhao Zhu",
"Ji Liu"
] |
[] | 2018-06-12T00:00:00 |
https://openreview.net/forum?id=BylBr3C9K7
|
https://openreview.net/pdf?id=BylBr3C9K7
|
energy-constrained-compression-for-deep
| null |
[] |
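One generic way to realize the "weighted sparse projection" step named in the title is magnitude-based projection onto an L0 ball. The sketch below is the plain, unweighted variant for illustration only; the paper weights the projection by per-layer energy costs, which is not modeled here.

```python
# Hedged sketch: keep the k largest-magnitude weights, zero the rest.
import numpy as np

def project_to_l0_ball(w, k):
    out = np.zeros_like(w)
    idx = np.argpartition(np.abs(w), -k)[-k:]  # indices of k largest magnitudes
    out[idx] = w[idx]
    return out

print(project_to_l0_ball(np.array([0.1, -2.0, 0.5, 3.0, -0.05]), k=2))
# -> [ 0. -2.  0.  3.  0.]
```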
https://paperswithcode.com/paper/support-vector-machine-application-for
|
1806.05054
| null | null |
Support Vector Machine Application for Multiphase Flow Pattern Prediction
|
In this paper, a data analytical approach featuring support vector machines
(SVM) is employed to train a predictive model over an experimental dataset,
which consists of the most relevant studies for two-phase flow pattern
prediction. The database for this study consists of flow patterns or flow
regimes in gas-liquid two-phase flow. The term flow pattern refers to the
geometrical configuration of the gas and liquid phases in the pipe. When gas
and liquid flow simultaneously in a pipe, the two phases can distribute
themselves in a variety of flow configurations. Gas-liquid two-phase flow
occurs ubiquitously in various major industrial fields: petroleum, chemical,
nuclear, and geothermal industries. The flow configurations differ from each
other in the spatial distribution of the interface, resulting in different flow
characteristics. Experimental results obtained by applying the presented
methodology to different combinations of flow patterns demonstrate that the
proposed approach is a state-of-the-art alternative, achieving 97% correct
classification. The results suggest machine learning could be used as an
effective tool for automatic detection and classification of gas-liquid flow
patterns.
| null |
http://arxiv.org/abs/1806.05054v1
|
http://arxiv.org/pdf/1806.05054v1.pdf
| null |
[
"Pablo Guillen-Rondon",
"Melvin D. Robinson",
"Carlos Torres",
"Eduardo Pereya"
] |
[
"General Classification",
"Prediction"
] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/embedding-text-in-hyperbolic-spaces
|
1806.04313
| null | null |
Embedding Text in Hyperbolic Spaces
|
Natural language text exhibits hierarchical structure in a variety of
respects. Ideally, we could incorporate our prior knowledge of this
hierarchical structure into unsupervised learning algorithms that work on text
data. Recent work by Nickel & Kiela (2017) proposed using hyperbolic instead of
Euclidean embedding spaces to represent hierarchical data and demonstrated
encouraging results when embedding graphs. In this work, we extend their method
with a re-parameterization technique that allows us to learn hyperbolic
embeddings of arbitrarily parameterized objects. We apply this framework to
learn word and sentence embeddings in hyperbolic space in an unsupervised
manner from text corpora. The resulting embeddings seem to encode certain
intuitive notions of hierarchy, such as word-context frequency and phrase
constituency. However, the implicit continuous hierarchy in the learned
hyperbolic space makes interrogating the model's learned hierarchies more
difficult than for models that learn explicit edges between items. The learned
hyperbolic embeddings show improvements over Euclidean embeddings in some --
but not all -- downstream tasks, suggesting that hierarchical organization is
more useful for some tasks than others.
| null |
http://arxiv.org/abs/1806.04313v1
|
http://arxiv.org/pdf/1806.04313v1.pdf
|
WS 2018 6
|
[
"Bhuwan Dhingra",
"Christopher J. Shallue",
"Mohammad Norouzi",
"Andrew M. Dai",
"George E. Dahl"
] |
[
"Sentence",
"Sentence Embeddings"
] | 2018-06-12T00:00:00 |
https://aclanthology.org/W18-1708
|
https://aclanthology.org/W18-1708.pdf
|
embedding-text-in-hyperbolic-spaces-1
| null |
[] |
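A minimal sketch of the re-parameterization idea from the abstract: learn an unconstrained vector and map it into the open Poincaré ball, so gradient steps can never leave the manifold. The tanh squashing below is one simple choice, not necessarily the exact function used in the paper.

```python
# Hedged sketch: unconstrained parameters -> point strictly inside unit ball.
import numpy as np

def to_poincare_ball(u, eps=1e-5):
    norm = np.linalg.norm(u)
    radius = np.tanh(norm) * (1 - eps)  # stays in [0, 1)
    return radius * u / (norm + eps)

def poincare_distance(x, y):
    num = 2 * np.sum((x - y) ** 2)
    den = (1 - np.sum(x ** 2)) * (1 - np.sum(y ** 2))
    return np.arccosh(1 + num / den)

p = to_poincare_ball(np.array([3.0, -4.0]))  # norm 5 squashed below 1
q = to_poincare_ball(np.array([0.1, 0.2]))
d = poincare_distance(p, q)                  # hyperbolic distance in the ball
```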
https://paperswithcode.com/paper/pixels-voxels-and-views-a-study-of-shape
|
1804.06032
| null | null |
Pixels, voxels, and views: A study of shape representations for single view 3D object shape prediction
|
The goal of this paper is to compare surface-based and volumetric 3D object
shape representations, as well as viewer-centered and object-centered reference
frames for single-view 3D shape prediction. We propose a new algorithm for
predicting depth maps from multiple viewpoints, with a single depth or RGB
image as input. By modifying the network and the way models are evaluated, we
can directly compare the merits of voxels vs. surfaces and viewer-centered vs.
object-centered for familiar vs. unfamiliar objects, as predicted from RGB or
depth images. Among our findings, we show that surface-based methods outperform
voxel representations for objects from novel classes and produce higher
resolution outputs. We also find that using viewer-centered coordinates is
advantageous for novel objects, while object-centered representations are
better for more familiar objects. Interestingly, the coordinate frame
significantly affects the shape representation learned, with object-centered
placing more importance on implicitly recognizing the object category and
viewer-centered producing shape representations with less dependence on
category recognition.
| null |
http://arxiv.org/abs/1804.06032v2
|
http://arxiv.org/pdf/1804.06032v2.pdf
|
CVPR 2018 6
|
[
"Daeyun Shin",
"Charless C. Fowlkes",
"Derek Hoiem"
] |
[
"Object"
] | 2018-04-17T00:00:00 |
http://openaccess.thecvf.com/content_cvpr_2018/html/Shin_Pixels_Voxels_and_CVPR_2018_paper.html
|
http://openaccess.thecvf.com/content_cvpr_2018/papers/Shin_Pixels_Voxels_and_CVPR_2018_paper.pdf
|
pixels-voxels-and-views-a-study-of-shape-1
| null |
[] |
https://paperswithcode.com/paper/efficient-end-to-end-learning-for-quantizable
|
1805.05809
| null | null |
Efficient end-to-end learning for quantizable representations
|
Embedding representation learning via neural networks is at the core of
modern similarity-based search. While much effort has been put into
developing algorithms for learning binary Hamming-code representations for
search efficiency, this approach still requires a linear scan of the entire dataset for
each query and trades off search accuracy through binarization. To this
end, we consider the problem of directly learning a quantizable embedding
representation and the sparse binary hash code end-to-end, which can be used to
construct an efficient hash table that not only provides a significant
reduction in the number of data points searched but also achieves state-of-the-art
search accuracy, outperforming previous state-of-the-art deep metric learning methods.
We also show that finding the optimal sparse binary hash code in a mini-batch
can be computed exactly in polynomial time by solving a minimum cost flow
problem. Our results on Cifar-100 and on ImageNet datasets show the state of
the art search accuracy in precision@k and NMI metrics while providing up to
98X and 478X search speedup respectively over exhaustive linear search. The
source code is available at
https://github.com/maestrojeong/Deep-Hash-Table-ICML18
|
To this end, we consider the problem of directly learning a quantizable embedding representation and the sparse binary hash code end-to-end, which can be used to construct an efficient hash table that not only provides a significant reduction in the number of data points searched but also achieves state-of-the-art search accuracy, outperforming previous state-of-the-art deep metric learning methods.
|
http://arxiv.org/abs/1805.05809v3
|
http://arxiv.org/pdf/1805.05809v3.pdf
|
ICML 2018 7
|
[
"Yeonwoo Jeong",
"Hyun Oh Song"
] |
[
"Binarization",
"Metric Learning",
"Representation Learning"
] | 2018-05-15T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2123
|
http://proceedings.mlr.press/v80/jeong18a/jeong18a.pdf
|
efficient-end-to-end-learning-for-quantizable-1
| null |
[] |
https://paperswithcode.com/paper/mission-ultra-large-scale-feature-selection
|
1806.04310
| null | null |
MISSION: Ultra Large-Scale Feature Selection using Count-Sketches
|
Feature selection is an important challenge in machine learning. It plays a
crucial role in the explainability of machine-driven decisions that are rapidly
permeating throughout modern society. Unfortunately, the explosion in the size
and dimensionality of real-world datasets poses a severe challenge to standard
feature selection algorithms. Today, it is not uncommon for datasets to have
billions of dimensions. At such scale, even storing the feature vector is
impossible, causing most existing feature selection methods to fail.
Workarounds like feature hashing, a standard approach to large-scale machine
learning, help with computational feasibility, but at the cost of losing
the interpretability of features. In this paper, we present MISSION, a novel
framework for ultra large-scale feature selection that performs stochastic
gradient descent while maintaining an efficient representation of the features
in memory using a Count-Sketch data structure. MISSION retains the simplicity
of feature hashing without sacrificing the interpretability of the features
while using only O(log^2(p)) working memory. We demonstrate that MISSION
accurately and efficiently performs feature selection on real-world,
large-scale datasets with billions of dimensions.
|
We demonstrate that MISSION accurately and efficiently performs feature selection on real-world, large-scale datasets with billions of dimensions.
|
http://arxiv.org/abs/1806.04310v1
|
http://arxiv.org/pdf/1806.04310v1.pdf
| null |
[
"Amirali Aghazadeh",
"Ryan Spring",
"Daniel Lejeune",
"Gautam Dasarathy",
"Anshumali Shrivastava",
"Richard G. Baraniuk"
] |
[
"BIG-bench Machine Learning",
"feature selection"
] | 2018-06-12T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Please enter a description about the method here",
"full_name": "Interpretability",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Models** are methods that build representations of images for downstream tasks such as classification and object detection. The most popular subcategory are convolutional neural networks. Below you can find a continuously updated list of image models.",
"name": "Image Models",
"parent": null
},
"name": "Interpretability",
"source_title": "CAM: Causal additive models, high-dimensional order search and penalized regression",
"source_url": "http://arxiv.org/abs/1310.1533v2"
}
] |
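For background on the data structure named in the abstract, here is a toy Count-Sketch with signed counters and a median read-out. The multiplicative hashing is a deliberate simplification for illustration and is not the MISSION implementation.

```python
# Toy Count-Sketch: gradients for feature `idx` are folded into a small table;
# a weight is estimated back as the median of its signed counters.
import numpy as np

class CountSketch:
    def __init__(self, depth=5, width=2 ** 10, seed=0):
        rng = np.random.default_rng(seed)
        self.table = np.zeros((depth, width))
        self.salts = rng.integers(1, 2 ** 31, size=depth)
        self.width = width

    def _hash(self, idx):
        h = (idx * self.salts) % self.width                  # bucket per row
        s = 2 * ((idx * self.salts // self.width) % 2) - 1   # +/-1 sign per row
        return h, s

    def update(self, idx, grad):
        h, s = self._hash(idx)
        self.table[np.arange(len(h)), h] += s * grad

    def query(self, idx):
        h, s = self._hash(idx)
        return np.median(s * self.table[np.arange(len(h)), h])
```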
https://paperswithcode.com/paper/differentially-private-matrix-completion
|
1712.09765
| null | null |
Differentially Private Matrix Completion Revisited
|
We provide the first provably joint differentially private algorithm with
formal utility guarantees for the problem of user-level privacy-preserving
collaborative filtering. Our algorithm is based on the Frank-Wolfe method, and
it consistently estimates the underlying preference matrix as long as the
number of users $m$ is $\omega(n^{5/4})$, where $n$ is the number of items, and
each user provides her preference for at least $\sqrt{n}$ randomly selected
items. Along the way, we provide an optimal differentially private algorithm
for singular vector computation, based on the celebrated Oja's method, that
provides significant savings in terms of space and time while operating on
sparse matrices. We also empirically evaluate our algorithm on a suite of
datasets, and show that it consistently outperforms the state-of-the-art
private algorithms.
| null |
http://arxiv.org/abs/1712.09765v2
|
http://arxiv.org/pdf/1712.09765v2.pdf
|
ICML 2018 7
|
[
"Prateek Jain",
"Om Thakkar",
"Abhradeep Thakurta"
] |
[
"Collaborative Filtering",
"Matrix Completion",
"Privacy Preserving"
] | 2017-12-28T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2400
|
http://proceedings.mlr.press/v80/jain18b/jain18b.pdf
|
differentially-private-matrix-completion-1
| null |
[] |
https://paperswithcode.com/paper/diverse-online-feature-selection
|
1806.04308
| null | null |
Diverse Online Feature Selection
|
Online feature selection has been an active research area in recent years. We
propose a novel diverse online feature selection method based on Determinantal
Point Processes (DPP). Our model aims to provide diverse features which can be
composed in either a supervised or unsupervised framework. The framework aims
to promote diversity based on the kernel produced on a feature level, through
at most three stages: feature sampling, local criteria and global criteria for
feature selection. In the feature sampling stage, we sample the incoming stream of
features using a conditional DPP. The local criteria are used to assess and select
streamed features (i.e., only as they arrive): we use unsupervised scale-invariant
methods to remove redundant features and, optionally, supervised
methods to introduce label information to assess relevant features. Lastly, the
global criteria use regularization methods to select a globally optimal subset
of features. This three-stage procedure continues until there are no more
features arriving or some predefined stopping condition is met. We demonstrate,
based on the experiments conducted, that this approach yields better compactness
and is comparable to, and in some instances outperforms, other state-of-the-art online
feature selection methods.
|
The framework aims to promote diversity based on the kernel produced on a feature level, through at most three stages: feature sampling, local criteria and global criteria for feature selection.
|
http://arxiv.org/abs/1806.04308v3
|
http://arxiv.org/pdf/1806.04308v3.pdf
| null |
[
"Chapman Siu",
"Richard Yi Da Xu"
] |
[
"Diversity",
"feature selection",
"Point Processes"
] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
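The abstract's diversity objective is DPP-based. As a rough illustration of picking a diverse feature subset under a PSD similarity kernel, here is a greedy log-determinant heuristic; it is a MAP-style approximation, not the paper's conditional DPP sampler or its local/global criteria.

```python
# Greedy determinant maximization over a PSD kernel L (illustrative only).
import numpy as np

def greedy_diverse_subset(L, k):
    chosen = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(L.shape[0]):
            if i in chosen:
                continue
            idx = chosen + [i]
            gain = np.linalg.slogdet(L[np.ix_(idx, idx)])[1]  # log det "volume"
            if gain > best_gain:
                best, best_gain = i, gain
        chosen.append(best)
    return chosen
```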
https://paperswithcode.com/paper/deep-blur-mapping-exploiting-high-level
|
1612.01227
| null | null |
Deep Blur Mapping: Exploiting High-Level Semantics by Deep Neural Networks
|
The human visual system excels at detecting local blur of visual images, but
the underlying mechanism is not well understood. Traditional views of blur such
as reduction in energy at high frequencies and loss of phase coherence at
localized features have fundamental limitations. For example, they cannot well
discriminate flat regions from blurred ones. Here we propose that high-level
semantic information is critical in successfully identifying local blur.
Therefore, we resort to deep neural networks that are proficient at learning
high-level features and propose the first end-to-end local blur mapping
algorithm based on a fully convolutional network. By analyzing various
architectures with different depths and design philosophies, we empirically
show that high-level features of deeper layers play a more important role than
low-level features of shallower layers in resolving challenging ambiguities for
this task. We test the proposed method on a standard blur detection benchmark
and demonstrate that it significantly advances the state-of-the-art (ODS
F-score of 0.853). Furthermore, we explore the use of the generated blur maps
in three applications, including blur region segmentation, blur degree
estimation, and blur magnification.
| null |
http://arxiv.org/abs/1612.01227v2
|
http://arxiv.org/pdf/1612.01227v2.pdf
| null |
[
"Kede Ma",
"Huan Fu",
"Tongliang Liu",
"Zhou Wang",
"DaCheng Tao"
] |
[
"Vocal Bursts Intensity Prediction"
] | 2016-12-05T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/model-free-information-extraction-in-enriched
|
1804.05170
| null | null |
Model-Free Information Extraction in Enriched Nonlinear Phase-Space
|
Detecting anomalies and discovering driving signals is an essential component
of scientific research and industrial practice. Often the underlying mechanism
is highly complex, involving hidden evolving nonlinear dynamics and noise
contamination. When representative physical models and large labeled data sets
are unavailable, as is the case with most real-world applications,
model-dependent Bayesian approaches would yield misleading results, and most
supervised learning machines would also fail to reliably resolve the
intricately evolving systems. Here, we propose an unsupervised machine-learning
approach that operates in a well-constructed function space, whereby the
evolving nonlinear dynamics are captured through a linear functional
representation determined by the Koopman operator. This breakthrough leverages
the time-feature embedding and the ensuing reconstruction of a phase-space
representation of the dynamics, thereby permitting the reliable identification
of critical global signatures from the whole trajectory. This dramatically
improves over commonly used static local features, which are vulnerable to
unknown transitions or noise. Thanks to its data-driven nature, our method
excludes any prior models and training corpus. We benchmark the astonishing
accuracy of our method on three diverse and challenging problems in: biology,
medicine, and engineering. In all cases, it outperforms existing
state-of-the-art methods. As a new unsupervised information processing
paradigm, it is suitable for ubiquitous nonlinear dynamical systems or
end-users with little expertise, which permits an unbiased excavation of
underlying working principles or intrinsic correlations submerged in unlabeled
data flows.
| null |
http://arxiv.org/abs/1804.05170v2
|
http://arxiv.org/pdf/1804.05170v2.pdf
| null |
[
"Bin Li",
"Yueheng Lan",
"Weisi Guo",
"Chenglin Zhao"
] |
[] | 2018-04-14T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/learning-a-discriminative-filter-bank-within
|
1611.09932
| null | null |
Learning a Discriminative Filter Bank within a CNN for Fine-grained Recognition
|
Compared to earlier multistage frameworks using CNN features, recent
end-to-end deep approaches for fine-grained recognition essentially enhance the
mid-level learning capability of CNNs. Previous approaches achieve this by
introducing an auxiliary network to infuse localization information into the
main classification network, or a sophisticated feature encoding method to
capture higher order feature statistics. We show that mid-level representation
learning can be enhanced within the CNN framework, by learning a bank of
convolutional filters that capture class-specific discriminative patches
without extra part or bounding box annotations. Such a filter bank is well
structured, properly initialized and discriminatively learned through a novel
asymmetric multi-stream architecture with convolutional filter supervision and
a non-random layer initialization. Experimental results show that our approach
achieves state-of-the-art results on three publicly available fine-grained recognition
datasets (CUB-200-2011, Stanford Cars and FGVC-Aircraft). Ablation studies and
visualizations are provided to understand our approach.
|
Compared to earlier multistage frameworks using CNN features, recent end-to-end deep approaches for fine-grained recognition essentially enhance the mid-level learning capability of CNNs.
|
http://arxiv.org/abs/1611.09932v3
|
http://arxiv.org/pdf/1611.09932v3.pdf
|
CVPR 2018 6
|
[
"Yaming Wang",
"Vlad I. Morariu",
"Larry S. Davis"
] |
[
"Representation Learning"
] | 2016-11-29T00:00:00 |
http://openaccess.thecvf.com/content_cvpr_2018/html/Wang_Learning_a_Discriminative_CVPR_2018_paper.html
|
http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Learning_a_Discriminative_CVPR_2018_paper.pdf
|
learning-a-discriminative-filter-bank-within-1
| null |
[] |
https://paperswithcode.com/paper/findings-of-the-second-workshop-on-neural
|
1806.02940
| null | null |
Findings of the Second Workshop on Neural Machine Translation and Generation
|
This document describes the findings of the Second Workshop on Neural Machine
Translation and Generation, held in concert with the annual conference of the
Association for Computational Linguistics (ACL 2018). First, we summarize the
research trends of papers presented in the proceedings, and note that there is
particular interest in linguistic structure, domain adaptation, data
augmentation, handling inadequate resources, and analysis of models. Second, we
describe the results of the workshop's shared task on efficient neural machine
translation, where participants were tasked with creating MT systems that are
both accurate and efficient.
| null |
http://arxiv.org/abs/1806.02940v3
|
http://arxiv.org/pdf/1806.02940v3.pdf
|
WS 2018 7
|
[
"Alexandra Birch",
"Andrew Finch",
"Minh-Thang Luong",
"Graham Neubig",
"Yusuke Oda"
] |
[
"Data Augmentation",
"Domain Adaptation",
"Machine Translation",
"Translation"
] | 2018-06-08T00:00:00 |
https://aclanthology.org/W18-2701
|
https://aclanthology.org/W18-2701.pdf
|
findings-of-the-second-workshop-on-neural-1
| null |
[] |
https://paperswithcode.com/paper/object-detection-and-tracking-benchmark-in
|
1806.03853
| null | null |
Object detection and tracking benchmark in industry based on improved correlation filter
|
Real-time object detection and tracking have been shown to be the basis of
intelligent production in Industry 4.0 applications. It is a challenging
task because of the variously distorted data found in complex industrial settings. The
correlation filter (CF) has been used to trade off low-cost computation against
high performance. However, the traditional CF training strategy cannot achieve
satisfactory performance on such varied industrial data, because simple
sampling (bagging) during the training process will not find exact solutions in
a data space with large diversity. In this paper, we propose
Dijkstra-distance-based correlation filters (DBCF), which establish a new
learning framework that embeds distribution-related constraints into
multi-channel correlation filters (MCCF). DBCF is able to handle the huge
variations in industrial data by improving those constraints based
on the shortest path among all solutions. To evaluate DBCF, we build a new
dataset as a benchmark for Industry 4.0 applications. Extensive experiments
demonstrate that DBCF achieves high performance and exceeds
state-of-the-art methods. The dataset and source code can be found at
https://github.com/bczhangbczhang
| null |
http://arxiv.org/abs/1806.03853v2
|
http://arxiv.org/pdf/1806.03853v2.pdf
| null |
[
"Shangzhen Luan",
"Yan Li",
"Xiaodi Wang",
"Baochang Zhang"
] |
[
"Diversity",
"object-detection",
"Object Detection",
"Real-Time Object Detection"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
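For context on the "low-cost computation" of correlation filters mentioned in the abstract, here is the standard MOSSE-style closed-form filter in the Fourier domain. This is the common baseline construction, not the paper's DBCF.

```python
# Standard single-channel correlation filter (MOSSE-style), for background.
import numpy as np

def mosse_filter(patch, target, lam=1e-2):
    F = np.fft.fft2(patch)    # training image patch
    G = np.fft.fft2(target)   # desired (e.g. Gaussian-shaped) response
    return (G * np.conj(F)) / (F * np.conj(F) + lam)  # closed-form solution

def respond(H, new_patch):
    # Peak of the response map locates the target in the new patch.
    return np.real(np.fft.ifft2(np.fft.fft2(new_patch) * H))
```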
https://paperswithcode.com/paper/pseudo-task-augmentation-from-deep-multitask
|
1803.04062
| null | null |
Pseudo-task Augmentation: From Deep Multitask Learning to Intratask Sharing---and Back
|
Deep multitask learning boosts performance by sharing learned structure
across related tasks. This paper adapts ideas from deep multitask learning to
the setting where only a single task is available. The method is formalized as
pseudo-task augmentation, in which models are trained with multiple decoders
for each task. Pseudo-tasks simulate the effect of training towards
closely-related tasks drawn from the same universe. In a suite of experiments,
pseudo-task augmentation is shown to improve performance on single-task
learning problems. When combined with multitask learning, further improvements
are achieved, including state-of-the-art performance on the CelebA dataset,
showing that pseudo-task augmentation and multitask learning have complementary
value. All in all, pseudo-task augmentation is a broadly applicable and
efficient way to boost performance in deep learning systems.
| null |
http://arxiv.org/abs/1803.04062v2
|
http://arxiv.org/pdf/1803.04062v2.pdf
|
ICML 2018
|
[
"Elliot Meyerson",
"Risto Miikkulainen"
] |
[] | 2018-03-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/challenges-of-language-technologies-for-the
|
1806.04291
| null | null |
Challenges of language technologies for the indigenous languages of the Americas
|
Indigenous languages of the American continent are highly diverse. However,
they have received little attention from the technological perspective. In this
paper, we review the research, the digital resources and the available NLP
systems that focus on these languages. We present the main challenges and
research questions that arise when distant languages and low-resource scenarios
are faced. We would like to encourage NLP research in linguistically rich and
diverse areas like the Americas.
|
Indigenous languages of the American continent are highly diverse.
|
http://arxiv.org/abs/1806.04291v1
|
http://arxiv.org/pdf/1806.04291v1.pdf
|
COLING 2018 8
|
[
"Manuel Mager",
"Ximena Gutierrez-Vasques",
"Gerardo Sierra",
"Ivan Meza"
] |
[] | 2018-06-12T00:00:00 |
https://aclanthology.org/C18-1006
|
https://aclanthology.org/C18-1006.pdf
|
challenges-of-language-technologies-for-the-2
| null |
[] |
https://paperswithcode.com/paper/dpatch-an-adversarial-patch-attack-on-object
|
1806.02299
| null | null |
DPatch: An Adversarial Patch Attack on Object Detectors
|
Object detectors have emerged as an indispensable module in modern computer
vision systems. In this work, we propose DPatch -- a black-box
adversarial-patch-based attack towards mainstream object detectors (i.e. Faster
R-CNN and YOLO). Unlike the original adversarial patch, which only manipulates
the image-level classifier, our DPatch simultaneously attacks the bounding box
regression and object classification so as to disable their predictions.
Compared to prior works, DPatch has several appealing properties: (1) DPatch
can perform both untargeted and targeted effective attacks, degrading the mAP
of Faster R-CNN and YOLO from 75.10% and 65.7% down to below 1%, respectively.
(2) DPatch is small in size and its attacking effect is location-independent,
making it very practical to implement real-world attacks. (3) DPatch
demonstrates great transferability among different detectors as well as
training datasets. For example, DPatch that is trained on Faster R-CNN can
effectively attack YOLO, and vice versa. Extensive evaluations imply that
DPatch can perform effective attacks under black-box setup, i.e., even without
the knowledge of the attacked network's architectures and parameters.
Successful realization of DPatch also illustrates the intrinsic vulnerability
of the modern detector architectures to such patch-based adversarial attacks.
|
Successful realization of DPatch also illustrates the intrinsic vulnerability of the modern detector architectures to such patch-based adversarial attacks.
|
http://arxiv.org/abs/1806.02299v4
|
http://arxiv.org/pdf/1806.02299v4.pdf
| null |
[
"Xin Liu",
"Huanrui Yang",
"Ziwei Liu",
"Linghao Song",
"Hai Li",
"Yiran Chen"
] |
[
"Object"
] | 2018-06-05T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "A **Region Proposal Network**, or **RPN**, is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals. RPN and algorithms like [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) can be merged into a single network by sharing their convolutional features - using the recently popular terminology of neural networks with attention mechanisms, the RPN component tells the unified network where to look.\r\n\r\nRPNs are designed to efficiently predict region proposals with a wide range of scales and aspect ratios. RPNs use anchor boxes that serve as references at multiple scales and aspect ratios. The scheme can be thought of as a pyramid of regression references, which avoids enumerating images or filters of multiple scales or aspect ratios.",
"full_name": "Region Proposal Network",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Region Proposal",
"parent": null
},
"name": "RPN",
"source_title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks",
"source_url": "http://arxiv.org/abs/1506.01497v3"
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/5e9ebe8dadc0ea2841a46cfcd82a93b4ce0d4519/torchvision/ops/roi_pool.py#L10",
"description": "**Region of Interest Pooling**, or **RoIPool**, is an operation for extracting a small feature map (e.g., $7×7$) from each RoI in detection and segmentation based tasks. Features are extracted from each candidate box, and thereafter in models like [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn), are then classified and bounding box regression performed.\r\n\r\nThe actual scaling to, e.g., $7×7$, occurs by dividing the region proposal into equally sized sections, finding the largest value in each section, and then copying these max values to the output buffer. In essence, **RoIPool** is [max pooling](https://paperswithcode.com/method/max-pooling) on a discrete grid based on a box.\r\n\r\nImage Source: [Joyce Xu](https://towardsdatascience.com/deep-learning-for-object-detection-a-comprehensive-review-73930816d8d9)",
"full_name": "RoIPool",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**RoI Feature Extractors** are used to extract regions of interest features for tasks such as object detection. Below you can find a continuously updating list of RoI Feature Extractors.",
"name": "RoI Feature Extractors",
"parent": null
},
"name": "RoIPool",
"source_title": "Rich feature hierarchies for accurate object detection and semantic segmentation",
"source_url": "http://arxiv.org/abs/1311.2524v5"
},
{
"code_snippet_url": "https://github.com/chenyuntc/simple-faster-rcnn-pytorch/blob/367db367834efd8a2bc58ee0023b2b628a0e474d/model/faster_rcnn.py#L22",
"description": "**Faster R-CNN** is an object detection model that improves on [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) by utilising a region proposal network ([RPN](https://paperswithcode.com/method/rpn)) with the CNN model. The RPN shares full-image convolutional features with the detection network, enabling nearly cost-free region proposals. It is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) for detection. RPN and Fast [R-CNN](https://paperswithcode.com/method/r-cnn) are merged into a single network by sharing their convolutional features: the RPN component tells the unified network where to look.\r\n\r\nAs a whole, Faster R-CNN consists of two modules. The first module is a deep fully convolutional network that proposes regions, and the second module is the Fast R-CNN detector that uses the proposed regions.",
"full_name": "Faster R-CNN",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Object Detection Models** are architectures used to perform the task of object detection. Below you can find a continuously updating list of object detection models.",
"name": "Object Detection Models",
"parent": null
},
"name": "Faster R-CNN",
"source_title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks",
"source_url": "http://arxiv.org/abs/1506.01497v3"
}
] |
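The Softmax method entry above quotes the formula $P(y=j \mid x) = \frac{e^{x^{T}w_{j}}}{\sum^{K}_{k=1} e^{x^{T}w_{k}}}$; for reference, a numerically stable version of that output function:

```python
# Softmax with the standard max-subtraction trick for numerical stability.
import numpy as np

def softmax(logits):
    z = logits - np.max(logits)  # shifting logits leaves the result unchanged
    e = np.exp(z)
    return e / e.sum()

print(softmax(np.array([2.0, 1.0, 0.1])))  # probabilities summing to 1.0
```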
https://paperswithcode.com/paper/twin-regularization-for-online-speech
|
1804.05374
| null | null |
Twin Regularization for online speech recognition
|
Online speech recognition is crucial for developing natural human-machine
interfaces. This modality, however, is significantly more challenging than
off-line ASR, since real-time/low-latency constraints inevitably hinder the use
of future information, which is known to be very helpful for performing robust
predictions. A popular solution to mitigate this issue consists of feeding
neural acoustic models with context windows that gather some future frames.
This introduces a latency which depends on the number of employed look-ahead
features. This paper explores a different approach, based on estimating the
future rather than waiting for it. Our technique encourages the hidden
representations of a unidirectional recurrent network to embed some useful
information about the future. Inspired by a recently proposed technique called
Twin Networks, we add a regularization term that forces forward hidden states
to be as close as possible to cotemporal backward ones, computed by a "twin"
neural network running backwards in time. The experiments, conducted on a
number of datasets, recurrent architectures, input features, and acoustic
conditions, have shown the effectiveness of this approach. One important
advantage is that our method does not introduce any additional computation at
test time if compared to standard unidirectional recurrent networks.
|
Online speech recognition is crucial for developing natural human-machine interfaces.
|
http://arxiv.org/abs/1804.05374v2
|
http://arxiv.org/pdf/1804.05374v2.pdf
| null |
[
"Mirco Ravanelli",
"Dmitriy Serdyuk",
"Yoshua Bengio"
] |
[
"speech-recognition",
"Speech Recognition"
] | 2018-04-15T00:00:00 | null | null | null | null |
[] |
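A hedged sketch of the twin-regularization term the abstract describes: an L2 penalty pulling the forward network's hidden states toward the cotemporal states of a twin network run backwards in time. Tensor shapes and the weighting scalar are illustrative, not the paper's exact setup.

```python
# Illustrative twin-regularization penalty added to the main ASR loss.
import torch

def twin_penalty(fwd_states, bwd_states, weight=0.1):
    # fwd_states, bwd_states: (time, batch, hidden); bwd comes from a twin
    # network processing the utterance in reverse, aligned per time step.
    return weight * ((fwd_states - bwd_states) ** 2).mean()

loss_reg = twin_penalty(torch.randn(50, 8, 128), torch.randn(50, 8, 128))
```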
https://paperswithcode.com/paper/iparaphrasing-extracting-visually-grounded
|
1806.04284
| null | null |
iParaphrasing: Extracting Visually Grounded Paraphrases via an Image
|
A paraphrase is a restatement of the meaning of a text in other words.
Paraphrases have been studied to enhance the performance of many natural
language processing tasks. In this paper, we propose a novel task iParaphrasing
to extract visually grounded paraphrases (VGPs), which are different phrasal
expressions describing the same visual concept in an image. These extracted
VGPs have the potential to improve language and image multimodal tasks such as
visual question answering and image captioning. How to model the similarity
between VGPs is the key to iParaphrasing. We apply various existing methods as
well as propose a novel neural network-based method with image attention, and
report the results of the first attempt toward iParaphrasing.
|
These extracted VGPs have the potential to improve language and image multimodal tasks such as visual question answering and image captioning.
|
http://arxiv.org/abs/1806.04284v1
|
http://arxiv.org/pdf/1806.04284v1.pdf
|
COLING 2018 8
|
[
"Chenhui Chu",
"Mayu Otani",
"Yuta Nakashima"
] |
[
"Image Captioning",
"Question Answering",
"Visual Question Answering",
"Visual Question Answering (VQA)"
] | 2018-06-12T00:00:00 |
https://aclanthology.org/C18-1295
|
https://aclanthology.org/C18-1295.pdf
|
iparaphrasing-extracting-visually-grounded-1
| null |
[] |
https://paperswithcode.com/paper/distance-free-modeling-of-multi-predicate
|
1806.03869
| null | null |
Distance-Free Modeling of Multi-Predicate Interactions in End-to-End Japanese Predicate-Argument Structure Analysis
|
Capturing interactions among multiple predicate-argument structures (PASs) is
a crucial issue in the task of analyzing PAS in Japanese. In this paper, we
propose new Japanese PAS analysis models that integrate the label prediction
information of arguments in multiple PASs by extending the input and last
layers of a standard deep bidirectional recurrent neural network (bi-RNN)
model. In these models, using the mechanisms of pooling and attention, we aim
to directly capture the potential interactions among multiple PASs, without
being disturbed by the word order and distance. Our experiments show that the
proposed models improve the prediction accuracy specifically for cases where
the predicate and argument are in an indirect dependency relation and achieve a
new state of the art in the overall $F_1$ on a standard benchmark corpus.
| null |
http://arxiv.org/abs/1806.03869v2
|
http://arxiv.org/pdf/1806.03869v2.pdf
|
COLING 2018 8
|
[
"Yuichiroh Matsubayashi",
"Kentaro Inui"
] |
[] | 2018-06-11T00:00:00 |
https://aclanthology.org/C18-1009
|
https://aclanthology.org/C18-1009.pdf
|
distance-free-modeling-of-multi-predicate-2
| null |
[] |
https://paperswithcode.com/paper/complete-analysis-of-a-random-forest-model
|
1805.02587
| null | null |
Sharp Analysis of a Simple Model for Random Forests
|
Random forests have become an important tool for improving accuracy in regression and classification problems since their inception by Leo Breiman in 2001. In this paper, we revisit a historically important random forest model originally proposed by Breiman in 2004 and later studied by Gérard Biau in 2012, where a feature is selected at random and the split occurs at the midpoint of the node along the chosen feature. If the regression function is Lipschitz and depends only on a small subset of $ S $ out of $ d $ features, we show that, given access to $ n $ observations and properly tuned split probabilities, the mean-squared prediction error is $ O((n(\log n)^{(S-1)/2})^{-\frac{1}{S\log2+1}}) $. This positively answers an outstanding question of Biau about whether the rate of convergence for this random forest model could be improved. Furthermore, by a refined analysis of the approximation and estimation errors for linear models, we show that this rate cannot be improved in general. Finally, we generalize our analysis and improve extant prediction error bounds for another random forest model in which each tree is constructed from subsampled data and the splits are performed at the empirical median along a chosen feature.
| null |
https://arxiv.org/abs/1805.02587v7
|
https://arxiv.org/pdf/1805.02587v7.pdf
| null |
[
"Jason M. Klusowski"
] |
[
"regression"
] | 2018-05-07T00:00:00 | null | null | null | null |
[] |
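The abstract above states a mean-squared prediction error of $ O((n(\log n)^{(S-1)/2})^{-\frac{1}{S\log2+1}}) $. A small sketch that simply evaluates this rate for a few sparsity levels $S$, showing that the exponent depends on $S$ rather than the ambient dimension $d$ (the constant in the $O(\cdot)$ is taken as 1 for illustration):

```python
# Numerical illustration of the quoted prediction-error rate,
# O((n (log n)^((S-1)/2))^(-1/(S*log2 + 1))), for a few sparsity levels S.
import math

def rf_rate(n, S):
    effective_n = n * math.log(n) ** ((S - 1) / 2)
    return effective_n ** (-1.0 / (S * math.log(2) + 1))

for S in (1, 2, 5):
    print(S, [round(rf_rate(n, S), 4) for n in (10**3, 10**6, 10**9)])
# The decay slows as S grows, but never involves d: the model adapts to
# the sparsity of the regression function.
```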
https://paperswithcode.com/paper/the-nes-music-database-a-multi-instrumental
|
1806.04278
| null | null |
The NES Music Database: A multi-instrumental dataset with expressive performance attributes
|
Existing research on music generation focuses on composition, but often
ignores the expressive performance characteristics required for plausible
renditions of resultant pieces. In this paper, we introduce the Nintendo
Entertainment System Music Database (NES-MDB), a large corpus allowing for
separate examination of the tasks of composition and performance. NES-MDB
contains thousands of multi-instrumental songs composed for playback by the
compositionally-constrained NES audio synthesizer. For each song, the dataset
contains a musical score for four instrument voices as well as expressive
attributes for the dynamics and timbre of each voice. Unlike datasets comprised
of General MIDI files, NES-MDB includes all of the information needed to render
exact acoustic performances of the original compositions. Alongside the
dataset, we provide a tool that renders generated compositions as NES-style
audio by emulating the device's audio processor. Additionally, we establish
baselines for the tasks of composition, which consists of learning the
semantics of composing for the NES synthesizer, and performance, which involves
finding a mapping between a composition and realistic expressive attributes.
|
Existing research on music generation focuses on composition, but often ignores the expressive performance characteristics required for plausible renditions of resultant pieces.
|
http://arxiv.org/abs/1806.04278v1
|
http://arxiv.org/pdf/1806.04278v1.pdf
| null |
[
"Chris Donahue",
"Huanru Henry Mao",
"Julian McAuley"
] |
[
"Music Generation"
] | 2018-06-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/generalized-zero-shot-learning-via
|
1712.03878
| null | null |
Generalized Zero-Shot Learning via Synthesized Examples
|
We present a generative framework for generalized zero-shot learning where
the training and test classes are not necessarily disjoint. Built upon a
variational autoencoder based architecture, consisting of a probabilistic
encoder and a probabilistic conditional decoder, our model can generate novel
exemplars from seen/unseen classes, given their respective class attributes.
These exemplars can subsequently be used to train any off-the-shelf
classification model. One of the key aspects of our encoder-decoder
architecture is a feedback-driven mechanism in which a discriminator (a
multivariate regressor) learns to map the generated exemplars to the
corresponding class attribute vectors, leading to an improved generator. Our
model's ability to generate and leverage examples from unseen classes to train
the classification model naturally helps to mitigate the bias towards
predicting seen classes in generalized zero-shot learning settings. Through a
comprehensive set of experiments, we show that our model outperforms several
state-of-the-art methods, on several benchmark datasets, for both standard as
well as generalized zero-shot learning.
| null |
http://arxiv.org/abs/1712.03878v5
|
http://arxiv.org/pdf/1712.03878v5.pdf
|
CVPR 2018 6
|
[
"Vinay Kumar Verma",
"Gundeep Arora",
"Ashish Mishra",
"Piyush Rai"
] |
[
"Attribute",
"Decoder",
"General Classification",
"Generalized Zero-Shot Learning",
"Zero-Shot Learning"
] | 2017-12-11T00:00:00 |
http://openaccess.thecvf.com/content_cvpr_2018/html/Verma_Generalized_Zero-Shot_Learning_CVPR_2018_paper.html
|
http://openaccess.thecvf.com/content_cvpr_2018/papers/Verma_Generalized_Zero-Shot_Learning_CVPR_2018_paper.pdf
|
generalized-zero-shot-learning-via-1
| null |
[
{
"code_snippet_url": "",
"description": "In today’s digital age, Solana has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Solana transaction not confirmed, your Solana wallet not showing balance, or you're trying to recover a lost Solana wallet, knowing where to get help is essential. That’s why the Solana customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Solana Customer Support Number +1-833-534-1729\r\nSolana operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Solana Transaction Not Confirmed\r\nOne of the most common concerns is when a Solana transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Solana Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Solana wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Solana Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Solana wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Solana Deposit Not Received\r\nIf someone has sent you Solana but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Solana deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Solana Transaction Stuck or Pending\r\nSometimes your Solana transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Solana Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Solana wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Solana Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Solana tech.\r\n\r\n24/7 Availability: Solana doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Solana Support and Wallet Issues\r\nQ1: Can Solana support help me recover stolen BTC?\r\nA: While Solana transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Solana transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Solana’s official number (Solana is decentralized), it connects you to trained professionals experienced in resolving all major Solana issues.\r\n\r\nFinal Thoughts\r\nSolana is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Solana transaction not confirmed, your Solana wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Solana customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Solana Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Solana Customer Service Number +1-833-534-1729",
"source_title": "Reducing the Dimensionality of Data with Neural Networks",
"source_url": "https://science.sciencemag.org/content/313/5786/504"
}
] |
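The abstract above describes sampling exemplars for unseen classes from a probabilistic conditional decoder, then training any off-the-shelf classifier on them. A minimal sketch of that generate-then-classify step, assuming a trained `decoder(z, attr)` callable; the decoder name, the toy stand-in, and all sizes are hypothetical:

```python
# Sketch: synthesize pseudo-examples for (possibly unseen) classes from a
# conditional decoder, given per-class attribute vectors.
import torch

def synthesize_exemplars(decoder, class_attrs, n_per_class=200, z_dim=64):
    """Sample n_per_class latent codes per class and decode them to features."""
    feats, labels = [], []
    for label, attr in class_attrs.items():           # attr: attribute vector
        z = torch.randn(n_per_class, z_dim)           # samples from the prior
        a = attr.unsqueeze(0).expand(n_per_class, -1) # repeat the attributes
        feats.append(decoder(z, a))
        labels += [label] * n_per_class
    return torch.cat(feats), torch.tensor(labels)

decoder = lambda z, a: torch.cat([z, a], dim=1)       # toy stand-in decoder
attrs = {0: torch.rand(16), 1: torch.rand(16)}
X, y = synthesize_exemplars(decoder, attrs)
print(X.shape, y.shape)  # torch.Size([400, 80]) torch.Size([400])
# The synthesized (feature, label) pairs can then train any classifier,
# covering unseen classes and reducing the bias toward seen classes.
```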
https://paperswithcode.com/paper/learning-multilingual-topics-from
|
1806.04270
| null | null |
Learning Multilingual Topics from Incomparable Corpus
|
Multilingual topic models enable crosslingual tasks by extracting consistent
topics from multilingual corpora. Most models require parallel or comparable
training corpora, which limits their ability to generalize. In this paper, we
first demystify the knowledge transfer mechanism behind multilingual topic
models by defining an alternative but equivalent formulation. Based on this
analysis, we then relax the assumption of training data required by most
existing models, creating a model that only requires a dictionary for training.
Experiments show that our new method effectively learns coherent multilingual
topics from partially and fully incomparable corpora with limited amounts of
dictionary resources.
| null |
http://arxiv.org/abs/1806.04270v1
|
http://arxiv.org/pdf/1806.04270v1.pdf
| null |
[
"Shudong Hao",
"Michael J. Paul"
] |
[
"Topic Models",
"Transfer Learning"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/accurate-and-robust-neural-networks-for
|
1806.04265
| null | null |
Accurate and Robust Neural Networks for Security Related Applications Exampled by Face Morphing Attacks
|
Artificial neural networks tend to learn only what they need for a task. A
manipulation of the training data can counter this phenomenon. In this paper,
we study the effect of different alterations of the training data, which limit
the amount and position of information that is available for the decision
making. We analyze the accuracy and robustness against semantic and black box
attacks on the networks that were trained on different training data
modifications for the particular example of morphing attacks. A morphing attack
is an attack on a biometric facial recognition system where the system is
fooled to match two different individuals with the same synthetic face image.
Such a synthetic image can be created by aligning and blending images of the
two individuals that should be matched with this image.
| null |
http://arxiv.org/abs/1806.04265v1
|
http://arxiv.org/pdf/1806.04265v1.pdf
| null |
[
"Clemens Seibold",
"Wojciech Samek",
"Anna Hilsmann",
"Peter Eisert"
] |
[
"Decision Making",
"Position"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/smoothed-action-value-functions-for-learning
|
1803.02348
| null | null |
Smoothed Action Value Functions for Learning Gaussian Policies
|
State-action value functions (i.e., Q-values) are ubiquitous in reinforcement
learning (RL), giving rise to popular algorithms such as SARSA and Q-learning.
We propose a new notion of action value defined by a Gaussian smoothed version
of the expected Q-value. We show that such smoothed Q-values still satisfy a
Bellman equation, making them learnable from experience sampled from an
environment. Moreover, the gradients of expected reward with respect to the
mean and covariance of a parameterized Gaussian policy can be recovered from
the gradient and Hessian of the smoothed Q-value function. Based on these
relationships, we develop new algorithms for training a Gaussian policy
directly from a learned smoothed Q-value approximator. The approach is
additionally amenable to proximal optimization by augmenting the objective with
a penalty on KL-divergence from a previous policy. We find that the ability to
learn both a mean and covariance during training leads to significantly
improved results on standard continuous control benchmarks.
| null |
http://arxiv.org/abs/1803.02348v3
|
http://arxiv.org/pdf/1803.02348v3.pdf
|
ICML 2018 7
|
[
"Ofir Nachum",
"Mohammad Norouzi",
"George Tucker",
"Dale Schuurmans"
] |
[
"continuous-control",
"Continuous Control",
"Q-Learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-03-06T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2086
|
http://proceedings.mlr.press/v80/nachum18a/nachum18a.pdf
|
smoothed-action-value-functions-for-learning-1
| null |
[
{
"code_snippet_url": null,
"description": "**Sarsa** is an on-policy TD control algorithm:\r\n\r\n$$Q\\left(S\\_{t}, A\\_{t}\\right) \\leftarrow Q\\left(S\\_{t}, A\\_{t}\\right) + \\alpha\\left[R_{t+1} + \\gamma{Q}\\left(S\\_{t+1}, A\\_{t+1}\\right) - Q\\left(S\\_{t}, A\\_{t}\\right)\\right] $$\r\n\r\nThis update is done after every transition from a nonterminal state $S\\_{t}$. if $S\\_{t+1}$ is terminal, then $Q\\left(S\\_{t+1}, A\\_{t+1}\\right)$ is defined as zero.\r\n\r\nTo design an on-policy control algorithm using Sarsa, we estimate $q\\_{\\pi}$ for a behaviour policy $\\pi$ and then change $\\pi$ towards greediness with respect to $q\\_{\\pi}$.\r\n\r\nSource: Sutton and Barto, Reinforcement Learning, 2nd Edition",
"full_name": "Sarsa",
"introduced_year": 1994,
"main_collection": {
"area": "Reinforcement Learning",
"description": "",
"name": "On-Policy TD Control",
"parent": null
},
"name": "Sarsa",
"source_title": null,
"source_url": null
}
] |
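The Sarsa entry above states the tabular update rule explicitly; the following is a direct transcription in Python. The default values of `alpha` and `gamma` are illustrative, not from the source; the terminal-state convention matches the entry ($Q(S_{t+1}, A_{t+1}) = 0$ when $S_{t+1}$ is terminal).

```python
# One on-policy TD(0) step:
# Q(S_t, A_t) <- Q(S_t, A_t) + alpha * [R_{t+1} + gamma * Q(S_{t+1}, A_{t+1})
#                                       - Q(S_t, A_t)]
from collections import defaultdict

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99,
                 terminal=False):
    """Update Q in place; Q(s', a') is defined as 0 at a terminal s'."""
    target = r + (0.0 if terminal else gamma * Q[(s_next, a_next)])
    Q[(s, a)] += alpha * (target - Q[(s, a)])
    return Q

Q = defaultdict(float)
sarsa_update(Q, s=0, a=1, r=1.0, s_next=2, a_next=0)
print(Q[(0, 1)])  # 0.1 after a single update from Q = 0
```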
https://paperswithcode.com/paper/group-normalization
|
1803.08494
| null | null |
Group Normalization
|
Batch Normalization (BN) is a milestone technique in the development of deep learning, enabling various networks to train. However, normalizing along the batch dimension introduces problems: BN's error increases rapidly when the batch size becomes smaller, caused by inaccurate batch statistics estimation. This limits BN's usage for training larger models and transferring features to computer vision tasks including detection, segmentation, and video, which require small batches constrained by memory consumption. In this paper, we present Group Normalization (GN) as a simple alternative to BN. GN divides the channels into groups and computes within each group the mean and variance for normalization. GN's computation is independent of batch sizes, and its accuracy is stable in a wide range of batch sizes. On ResNet-50 trained in ImageNet, GN has 10.6% lower error than its BN counterpart when using a batch size of 2; when using typical batch sizes, GN is comparably good with BN and outperforms other normalization variants. Moreover, GN can be naturally transferred from pre-training to fine-tuning. GN can effectively replace the powerful BN in a variety of tasks, and can be easily implemented by a few lines of code in modern libraries.
 |
Batch Normalization (BN) is a milestone technique in the development of deep learning, enabling various networks to train.
|
http://arxiv.org/abs/1803.08494v3
|
http://arxiv.org/pdf/1803.08494v3.pdf
|
ECCV 2018 9
|
[
"Yuxin Wu",
"Kaiming He"
] |
[
"Object",
"object-detection",
"Object Detection",
"Video Classification"
] | 2018-03-22T00:00:00 |
http://openaccess.thecvf.com/content_ECCV_2018/html/Yuxin_Wu_Group_Normalization_ECCV_2018_paper.html
|
http://openaccess.thecvf.com/content_ECCV_2018/papers/Yuxin_Wu_Group_Normalization_ECCV_2018_paper.pdf
|
group-normalization-1
| null |
[
{
"code_snippet_url": null,
"description": "A **Region Proposal Network**, or **RPN**, is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals. RPN and algorithms like [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) can be merged into a single network by sharing their convolutional features - using the recently popular terminology of neural networks with attention mechanisms, the RPN component tells the unified network where to look.\r\n\r\nRPNs are designed to efficiently predict region proposals with a wide range of scales and aspect ratios. RPNs use anchor boxes that serve as references at multiple scales and aspect ratios. The scheme can be thought of as a pyramid of regression references, which avoids enumerating images or filters of multiple scales or aspect ratios.",
"full_name": "Region Proposal Network",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Region Proposal",
"parent": null
},
"name": "RPN",
"source_title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks",
"source_url": "http://arxiv.org/abs/1506.01497v3"
},
{
"code_snippet_url": "https://github.com/clcarwin/focal_loss_pytorch/blob/e11e75bad957aecf641db6998a1016204722c1bb/focalloss.py#L6",
"description": "A **Focal Loss** function addresses class imbalance during training in tasks like object detection. Focal loss applies a modulating term to the cross entropy loss in order to focus learning on hard misclassified examples. It is a dynamically scaled cross entropy loss, where the scaling factor decays to zero as confidence in the correct class increases. Intuitively, this scaling factor can automatically down-weight the contribution of easy examples during training and rapidly focus the model on hard examples. \r\n\r\nFormally, the Focal Loss adds a factor $(1 - p\\_{t})^\\gamma$ to the standard cross entropy criterion. Setting $\\gamma>0$ reduces the relative loss for well-classified examples ($p\\_{t}>.5$), putting more focus on hard, misclassified examples. Here there is tunable *focusing* parameter $\\gamma \\ge 0$. \r\n\r\n$$ {\\text{FL}(p\\_{t}) = - (1 - p\\_{t})^\\gamma \\log\\left(p\\_{t}\\right)} $$",
"full_name": "Focal Loss",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Loss Functions** are used to frame the problem to be optimized within deep learning. Below you will find a continuously updating list of (specialized) loss functions for neutral networks.",
"name": "Loss Functions",
"parent": null
},
"name": "Focal Loss",
"source_title": "Focal Loss for Dense Object Detection",
"source_url": "http://arxiv.org/abs/1708.02002v2"
},
{
"code_snippet_url": "https://github.com/facebookresearch/Detectron/blob/8170b25b425967f8f1c7d715bea3c5b8d9536cd8/detectron/modeling/FPN.py#L117",
"description": "A **Feature Pyramid Network**, or **FPN**, is a feature extractor that takes a single-scale image of an arbitrary size as input, and outputs proportionally sized feature maps at multiple levels, in a fully convolutional fashion. This process is independent of the backbone convolutional architectures. It therefore acts as a generic solution for building feature pyramids inside deep convolutional networks to be used in tasks like object detection.\r\n\r\nThe construction of the pyramid involves a bottom-up pathway and a top-down pathway.\r\n\r\nThe bottom-up pathway is the feedforward computation of the backbone ConvNet, which computes a feature hierarchy consisting of feature maps at several scales with a scaling step of 2. For the feature\r\npyramid, one pyramid level is defined for each stage. The output of the last layer of each stage is used as a reference set of feature maps. For [ResNets](https://paperswithcode.com/method/resnet) we use the feature activations output by each stage’s last [residual block](https://paperswithcode.com/method/residual-block). \r\n\r\nThe top-down pathway hallucinates higher resolution features by upsampling spatially coarser, but semantically stronger, feature maps from higher pyramid levels. These features are then enhanced with features from the bottom-up pathway via lateral connections. Each lateral connection merges feature maps of the same spatial size from the bottom-up pathway and the top-down pathway. The bottom-up feature map is of lower-level semantics, but its activations are more accurately localized as it was subsampled fewer times.",
"full_name": "Feature Pyramid Network",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Feature Extractors** for object detection are modules used to construct features that can be used for detecting objects. They address issues such as the need to detect multiple-sized objects in an image (and the need to have representations that are suitable for the different scales).",
"name": "Feature Extractors",
"parent": null
},
"name": "FPN",
"source_title": "Feature Pyramid Networks for Object Detection",
"source_url": "http://arxiv.org/abs/1612.03144v2"
},
{
"code_snippet_url": "",
"description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)",
"full_name": "Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Average Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/facebookresearch/Detectron/blob/8170b25b425967f8f1c7d715bea3c5b8d9536cd8/detectron/modeling/retinanet_heads.py",
"description": "**RetinaNet** is a one-stage object detection model that utilizes a [focal loss](https://paperswithcode.com/method/focal-loss) function to address class imbalance during training. Focal loss applies a modulating term to the cross entropy loss in order to focus learning on hard negative examples. RetinaNet is a single, unified network composed of a *backbone* network and two task-specific *subnetworks*. The backbone is responsible for computing a convolutional feature map over an entire input image and is an off-the-shelf convolutional network. The first subnet performs convolutional object classification on the backbone's output; the second subnet performs convolutional bounding box regression. The two subnetworks feature a simple design that the authors propose specifically for one-stage, dense detection. \r\n\r\nWe can see the motivation for focal loss by comparing with two-stage object detectors. Here class imbalance is addressed by a two-stage cascade and sampling heuristics. The proposal stage (e.g., [Selective Search](https://paperswithcode.com/method/selective-search), [EdgeBoxes](https://paperswithcode.com/method/edgeboxes), [DeepMask](https://paperswithcode.com/method/deepmask), [RPN](https://paperswithcode.com/method/rpn)) rapidly narrows down the number of candidate object locations to a small number (e.g., 1-2k), filtering out most background samples. In the second classification stage, sampling heuristics, such as a fixed foreground-to-background ratio, or online hard example mining ([OHEM](https://paperswithcode.com/method/ohem)), are performed to maintain a\r\nmanageable balance between foreground and background.\r\n\r\nIn contrast, a one-stage detector must process a much larger set of candidate object locations regularly sampled across an image. To tackle this, RetinaNet uses a focal loss function, a dynamically scaled cross entropy loss, where the scaling factor decays to zero as confidence in the correct class increases. Intuitively, this scaling factor can automatically down-weight the contribution of easy examples during training and rapidly focus the model on hard examples. \r\n\r\nFormally, the Focal Loss adds a factor $(1 - p\\_{t})^\\gamma$ to the standard cross entropy criterion. Setting $\\gamma>0$ reduces the relative loss for well-classified examples ($p\\_{t}>.5$), putting more focus on hard, misclassified examples. Here there is tunable *focusing* parameter $\\gamma \\ge 0$. \r\n\r\n$$ {\\text{FL}(p\\_{t}) = - (1 - p\\_{t})^\\gamma \\log\\left(p\\_{t}\\right)} $$",
"full_name": "RetinaNet",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Object Detection Models** are architectures used to perform the task of object detection. Below you can find a continuously updating list of object detection models.",
"name": "Object Detection Models",
"parent": null
},
"name": "RetinaNet",
"source_title": "Focal Loss for Dense Object Detection",
"source_url": "http://arxiv.org/abs/1708.02002v2"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116",
"description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.",
"full_name": "Batch Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Batch Normalization",
"source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"source_url": "http://arxiv.org/abs/1502.03167v3"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L75",
"description": "A **Bottleneck Residual Block** is a variant of the [residual block](https://paperswithcode.com/method/residual-block) that utilises 1x1 convolutions to create a bottleneck. The use of a bottleneck reduces the number of parameters and matrix multiplications. The idea is to make residual blocks as thin as possible to increase depth and have less parameters. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture, and are used as part of deeper ResNets such as ResNet-50 and ResNet-101.",
"full_name": "Bottleneck Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Bottleneck Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/baa592b215804927e28638f6a7f3318cbc411d49/torchvision/models/resnet.py#L157",
"description": "**Global Average Pooling** is a pooling operation designed to replace fully connected layers in classical CNNs. The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers on top of the feature maps, we take the average of each feature map, and the resulting vector is fed directly into the [softmax](https://paperswithcode.com/method/softmax) layer. \r\n\r\nOne advantage of global [average pooling](https://paperswithcode.com/method/average-pooling) over the fully connected layers is that it is more native to the [convolution](https://paperswithcode.com/method/convolution) structure by enforcing correspondences between feature maps and categories. Thus the feature maps can be easily interpreted as categories confidence maps. Another advantage is that there is no parameter to optimize in the global average pooling thus overfitting is avoided at this layer. Furthermore, global average pooling sums out the spatial information, thus it is more robust to spatial translations of the input.",
"full_name": "Global Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Global Average Pooling",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L35",
"description": "**Residual Blocks** are skip-connection blocks that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture.\r\n \r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$. The $\\mathcal{F}({x})$ acts like a residual, hence the name 'residual block'.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers. Having skip connections allows the network to more easily learn identity-like mappings.\r\n\r\nNote that in practice, [Bottleneck Residual Blocks](https://paperswithcode.com/method/bottleneck-residual-block) are used for deeper ResNets, such as ResNet-50 and ResNet-101, as these bottleneck blocks are less computationally intensive.",
"full_name": "Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/0adb5843766092fba584791af76383125fd0d01c/torch/nn/init.py#L389",
"description": "**Kaiming Initialization**, or **He Initialization**, is an initialization method for neural networks that takes into account the non-linearity of activation functions, such as [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nA proper initialization method should avoid reducing or magnifying the magnitudes of input signals exponentially. Using a derivation they work out that the condition to stop this happening is:\r\n\r\n$$\\frac{1}{2}n\\_{l}\\text{Var}\\left[w\\_{l}\\right] = 1 $$\r\n\r\nThis implies an initialization scheme of:\r\n\r\n$$ w\\_{l} \\sim \\mathcal{N}\\left(0, 2/n\\_{l}\\right)$$\r\n\r\nThat is, a zero-centered Gaussian with standard deviation of $\\sqrt{2/{n}\\_{l}}$ (variance shown in equation above). Biases are initialized at $0$.",
"full_name": "Kaiming Initialization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Initialization** methods are used to initialize the weights in a neural network. Below can you find a continuously updating list of initialization methods.",
"name": "Initialization",
"parent": null
},
"name": "Kaiming Initialization",
"source_title": "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification",
"source_url": "http://arxiv.org/abs/1502.01852v1"
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "In today’s digital age, Bitcoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Bitcoin transaction not confirmed, your Bitcoin wallet not showing balance, or you're trying to recover a lost Bitcoin wallet, knowing where to get help is essential. That’s why the Bitcoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Bitcoin Customer Support Number +1-833-534-1729\r\nBitcoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Bitcoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Bitcoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Bitcoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Bitcoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Bitcoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Bitcoin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Bitcoin Deposit Not Received\r\nIf someone has sent you Bitcoin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Bitcoin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Bitcoin Transaction Stuck or Pending\r\nSometimes your Bitcoin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Bitcoin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Bitcoin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Bitcoin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Bitcoin tech.\r\n\r\n24/7 Availability: Bitcoin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Bitcoin Support and Wallet Issues\r\nQ1: Can Bitcoin support help me recover stolen BTC?\r\nA: While Bitcoin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Bitcoin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Bitcoin’s official number (Bitcoin is decentralized), it connects you to trained professionals experienced in resolving all major Bitcoin issues.\r\n\r\nFinal Thoughts\r\nBitcoin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Bitcoin transaction not confirmed, your Bitcoin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Bitcoin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Bitcoin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "If you have questions or want to make special travel arrangements, you can make them online or call ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. For hearing or speech impaired assistance dial 711 to be connected through the National Relay Service.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "Bitcoin Customer Service Number +1-833-534-1729",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/facebookresearch/detectron2/blob/bb9f5d8e613358519c9865609ab3fe7b6571f2ba/detectron2/layers/roi_align.py#L51",
"description": "**Region of Interest Align**, or **RoIAlign**, is an operation for extracting a small feature map from each RoI in detection and segmentation based tasks. It removes the harsh quantization of [RoI Pool](https://paperswithcode.com/method/roi-pooling), properly *aligning* the extracted features with the input. To avoid any quantization of the RoI boundaries or bins (using $x/16$ instead of $[x/16]$), RoIAlign uses bilinear interpolation to compute the exact values of the input features at four regularly sampled locations in each RoI bin, and the result is then aggregated (using max or average).",
"full_name": "RoIAlign",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**RoI Feature Extractors** are used to extract regions of interest features for tasks such as object detection. Below you can find a continuously updating list of RoI Feature Extractors.",
"name": "RoI Feature Extractors",
"parent": null
},
"name": "RoIAlign",
"source_title": "Mask R-CNN",
"source_url": "http://arxiv.org/abs/1703.06870v3"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/1c5c289b6218eb1026dcb5fd9738231401cfccea/torch/nn/modules/normalization.py#L177",
"description": "**Group Normalization** is a normalization layer that divides channels into groups and normalizes the features within each group. GN does not exploit the batch dimension, and its computation is independent of batch sizes. In the case where the group size is 1, it is equivalent to [Instance Normalization](https://paperswithcode.com/method/instance-normalization).\r\n\r\nAs motivation for the method, many classical features like SIFT and HOG had *group-wise* features and involved *group-wise normalization*. For example, a HOG vector is the outcome of several spatial cells where each cell is represented by a normalized orientation histogram.\r\n\r\nFormally, Group Normalization is defined as:\r\n\r\n$$ \\mu\\_{i} = \\frac{1}{m}\\sum\\_{k\\in\\mathcal{S}\\_{i}}x\\_{k} $$\r\n\r\n$$ \\sigma^{2}\\_{i} = \\frac{1}{m}\\sum\\_{k\\in\\mathcal{S}\\_{i}}\\left(x\\_{k}-\\mu\\_{i}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{i}}{\\sqrt{\\sigma^{2}\\_{i}+\\epsilon}} $$\r\n\r\nHere $x$ is the feature computed by a layer, and $i$ is an index. Formally, a Group Norm layer computes $\\mu$ and $\\sigma$ in a set $\\mathcal{S}\\_{i}$ defined as: $\\mathcal{S}\\_{i} = ${$k \\mid k\\_{N} = i\\_{N} ,\\lfloor\\frac{k\\_{C}}{C/G}\\rfloor = \\lfloor\\frac{I\\_{C}}{C/G}\\rfloor $}.\r\n\r\nHere $G$ is the number of groups, which is a pre-defined hyper-parameter ($G = 32$ by default). $C/G$ is the number of channels per group. $\\lfloor$ is the floor operation, and the final term means that the indexes $i$ and $k$ are in the same group of channels, assuming each group of channels are stored in a sequential order along the $C$ axis.",
"full_name": "Group Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Group Normalization",
"source_title": "Group Normalization",
"source_url": "http://arxiv.org/abs/1803.08494v3"
},
{
"code_snippet_url": "https://github.com/facebookresearch/detectron2/blob/601d7666faaf7eb0ba64c9f9ce5811b13861fe12/detectron2/modeling/roi_heads/mask_head.py#L154",
"description": "**Mask R-CNN** extends [Faster R-CNN](http://paperswithcode.com/method/faster-r-cnn) to solve instance segmentation tasks. It achieves this by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition. In principle, Mask R-CNN is an intuitive extension of Faster [R-CNN](https://paperswithcode.com/method/r-cnn), but constructing the mask branch properly is critical for good results. \r\n\r\nMost importantly, Faster R-CNN was not designed for pixel-to-pixel alignment between network inputs and outputs. This is evident in how [RoIPool](http://paperswithcode.com/method/roi-pooling), the *de facto* core operation for attending to instances, performs coarse spatial quantization for feature extraction. To fix the misalignment, Mask R-CNN utilises a simple, quantization-free layer, called [RoIAlign](http://paperswithcode.com/method/roi-align), that faithfully preserves exact spatial locations. \r\n\r\nSecondly, Mask R-CNN *decouples* mask and class prediction: it predicts a binary mask for each class independently, without competition among classes, and relies on the network's RoI classification branch to predict the category. In contrast, an [FCN](http://paperswithcode.com/method/fcn) usually perform per-pixel multi-class categorization, which couples segmentation and classification.",
"full_name": "Mask R-CNN",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Instance Segmentation** models are models that perform the task of [Instance Segmentation](https://paperswithcode.com/task/instance-segmentation).",
"name": "Instance Segmentation Models",
"parent": null
},
"name": "Mask R-CNN",
"source_title": "Mask R-CNN",
"source_url": "http://arxiv.org/abs/1703.06870v3"
}
] |
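The Group Normalization entry above spells out the per-group mean and variance computation, and the abstract notes GN "can be easily implemented by a few lines of code". A minimal sketch in that spirit, assuming NCHW layout; the learnable scale and shift ($\gamma$, $\beta$) are omitted for brevity:

```python
# Few-line Group Normalization following the per-group formulas above.
import torch

def group_norm(x, num_groups=32, eps=1e-5):
    n, c, h, w = x.shape
    x = x.view(n, num_groups, c // num_groups, h, w)
    mean = x.mean(dim=(2, 3, 4), keepdim=True)             # per sample, per group
    var = ((x - mean) ** 2).mean(dim=(2, 3, 4), keepdim=True)
    x = (x - mean) / torch.sqrt(var + eps)
    return x.view(n, c, h, w)

x = torch.randn(2, 64, 8, 8)
print(group_norm(x).shape)  # torch.Size([2, 64, 8, 8])
# Note: nothing here touches the batch dimension, so the computation is
# independent of batch size, unlike Batch Normalization.
```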
https://paperswithcode.com/paper/linear-convergence-of-gradient-and-proximal
|
1608.04636
| null | null |
Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
|
In 1963, Polyak proposed a simple condition that is sufficient to show a global linear convergence rate for gradient descent. This condition is a special case of the Łojasiewicz inequality proposed in the same year, and it does not require strong convexity (or even convexity). In this work, we show that this much-older Polyak-Łojasiewicz (PL) inequality is actually weaker than the main conditions that have been explored to show linear convergence rates without strong convexity over the last 25 years. We also use the PL inequality to give new analyses of randomized and greedy coordinate descent methods, sign-based gradient descent methods, and stochastic gradient methods in the classic setting (with decreasing or constant step-sizes) as well as the variance-reduced setting. We further propose a generalization that applies to proximal-gradient methods for non-smooth optimization, leading to simple proofs of linear convergence of these methods. Along the way, we give simple convergence results for a wide variety of problems in machine learning: least squares, logistic regression, boosting, resilient backpropagation, L1-regularization, support vector machines, stochastic dual coordinate ascent, and stochastic variance-reduced gradient methods.
| null |
https://arxiv.org/abs/1608.04636v4
|
https://arxiv.org/pdf/1608.04636v4.pdf
| null |
[
"Hamed Karimi",
"Julie Nutini",
"Mark Schmidt"
] |
[] | 2016-08-16T00:00:00 | null | null | null | null |
[] |
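A minimal numerical sketch of the linear rate the abstract above refers to, on a deliberately rank-deficient least-squares problem, so the objective satisfies the PL inequality without being strongly convex. Problem sizes, tolerances, and step counts here are illustrative, not taken from the paper.

```python
import numpy as np

# f(x) = 0.5 * ||Ax - b||^2 satisfies the Polyak-Lojasiewicz inequality
# 0.5 * ||grad f(x)||^2 >= mu * (f(x) - f*), with mu the smallest nonzero
# eigenvalue of A^T A, even when A is rank-deficient (f not strongly convex).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20)) @ rng.standard_normal((20, 30))  # rank <= 20 < 30
b = rng.standard_normal(50)

def f(x):
    return 0.5 * np.sum((A @ x - b) ** 2)

def grad(x):
    return A.T @ (A @ x - b)

eigs = np.linalg.eigvalsh(A.T @ A)
L = eigs[-1]                        # smoothness constant
mu = eigs[eigs > 1e-10][0]          # PL constant: smallest nonzero eigenvalue

f_star = f(np.linalg.lstsq(A, b, rcond=None)[0])
x = np.zeros(30)
gap0 = f(x) - f_star
for k in range(200):
    x = x - (1.0 / L) * grad(x)

# PL theory predicts the linear rate f(x_k) - f* <= (1 - mu/L)^k * (f(x_0) - f*).
print(f(x) - f_star, "<=", (1 - mu / L) ** 200 * gap0)
```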
https://paperswithcode.com/paper/lets-do-it-again-a-first-computational
|
1806.04262
| null | null |
Let's do it "again": A First Computational Approach to Detecting Adverbial Presupposition Triggers
|
We introduce the task of predicting adverbial presupposition triggers such as
also and again. Solving such a task requires detecting recurring or similar
events in the discourse context, and has applications in natural language
generation tasks such as summarization and dialogue systems. We create two new
datasets for the task, derived from the Penn Treebank and the Annotated English
Gigaword corpora, as well as a novel attention mechanism tailored to this task.
Our attention mechanism augments a baseline recurrent neural network without
the need for additional trainable parameters, minimizing the added
computational cost of our mechanism. We demonstrate that our model
statistically outperforms a number of baselines, including an LSTM-based
language model.
| null |
http://arxiv.org/abs/1806.04262v1
|
http://arxiv.org/pdf/1806.04262v1.pdf
| null |
[
"Andre Cianflone",
"Yulan Feng",
"Jad Kabbara",
"Jackie Chi Kit Cheung"
] |
[
"Language Modeling",
"Language Modelling",
"Text Generation"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/improving-whole-slide-segmentation-through
|
1806.04259
| null | null |
Improving Whole Slide Segmentation Through Visual Context - A Systematic Study
|
While challenging, the dense segmentation of histology images is a necessary
first step to assess changes in tissue architecture and cellular morphology.
Although specific convolutional neural network architectures have been applied
with great success to the problem, few effectively incorporate visual context
information from multiple scales. With this paper, we present a systematic
comparison of different architectures to assess how including multi-scale
information affects segmentation performance. A publicly available breast
cancer dataset and a locally collected prostate cancer dataset are utilised for
this study. The results support our hypothesis that visual context and scale
play a crucial role in histology image classification problems.
|
While challenging, the dense segmentation of histology images is a necessary first step to assess changes in tissue architecture and cellular morphology.
|
http://arxiv.org/abs/1806.04259v1
|
http://arxiv.org/pdf/1806.04259v1.pdf
| null |
[
"Korsuk Sirinukunwattana",
"Nasullah Khalid Alham",
"Clare Verrill",
"Jens Rittscher"
] |
[
"General Classification",
"image-classification",
"Image Classification",
"Segmentation"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/can-machine-learning-identify-interesting
|
1805.07431
| null | null |
Can machine learning identify interesting mathematics? An exploration using empirically observed laws
|
We explore the possibility of using machine learning to identify interesting
mathematical structures by using certain quantities that serve as fingerprints.
In particular, we extract features from integer sequences using two empirical
laws: Benford's law and Taylor's law, and experiment with various classifiers to
identify whether a sequence is, for example, nice, important, multiplicative,
easy to compute or related to primes or palindromes.
| null |
http://arxiv.org/abs/1805.07431v3
|
http://arxiv.org/pdf/1805.07431v3.pdf
| null |
[
"Chai Wah Wu"
] |
[
"BIG-bench Machine Learning"
] | 2018-05-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/multi-step-reinforcement-learning-a-unifying
|
1703.01327
| null | null |
Multi-step Reinforcement Learning: A Unifying Algorithm
|
Unifying seemingly disparate algorithmic ideas to produce better performing
algorithms has been a longstanding goal in reinforcement learning. As a primary
example, TD($\lambda$) elegantly unifies one-step TD prediction with Monte
Carlo methods through the use of eligibility traces and the trace-decay
parameter $\lambda$. Currently, there are a multitude of algorithms that can be
used to perform TD control, including Sarsa, $Q$-learning, and Expected Sarsa.
These methods are often studied in the one-step case, but they can be extended
across multiple time steps to achieve better performance. Each of these
algorithms is seemingly distinct, and no one dominates the others for all
problems. In this paper, we study a new multi-step action-value algorithm
called $Q(\sigma)$ which unifies and generalizes these existing algorithms,
while subsuming them as special cases. A new parameter, $\sigma$, is introduced
to allow the degree of sampling performed by the algorithm at each step during
its backup to be continuously varied, with Sarsa existing at one extreme (full
sampling), and Expected Sarsa existing at the other (pure expectation).
$Q(\sigma)$ is generally applicable to both on- and off-policy learning, but in
this work we focus on experiments in the on-policy case. Our results show that
an intermediate value of $\sigma$, which results in a mixture of the existing
algorithms, performs better than either extreme. The mixture can also be varied
dynamically, which can result in even greater performance.
| null |
http://arxiv.org/abs/1703.01327v2
|
http://arxiv.org/pdf/1703.01327v2.pdf
| null |
[
"Kristopher De Asis",
"J. Fernando Hernandez-Garcia",
"G. Zacharias Holland",
"Richard S. Sutton"
] |
[
"Q-Learning",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2017-03-03T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Expected Sarsa** is like [Q-learning](https://paperswithcode.com/method/q-learning) but instead of taking the maximum over next state-action pairs, we use the expected value, taking into account how likely each action is under the current policy.\r\n\r\n$$Q\\left(S\\_{t}, A\\_{t}\\right) \\leftarrow Q\\left(S\\_{t}, A\\_{t}\\right) + \\alpha\\left[R_{t+1} + \\gamma\\sum\\_{a}\\pi\\left(a\\mid{S\\_{t+1}}\\right)Q\\left(S\\_{t+1}, a\\right) - Q\\left(S\\_{t}, A\\_{t}\\right)\\right] $$\r\n\r\nExcept for this change to the update rule, the algorithm otherwise follows the scheme of Q-learning. It is more computationally expensive than [Sarsa](https://paperswithcode.com/method/sarsa) but it eliminates the variance due to the random selection of $A\\_{t+1}$.\r\n\r\nSource: Sutton and Barto, Reinforcement Learning, 2nd Edition",
"full_name": "Expected Sarsa",
"introduced_year": 2000,
"main_collection": {
"area": "Reinforcement Learning",
"description": "",
"name": "On-Policy TD Control",
"parent": null
},
"name": "Expected Sarsa",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Sarsa** is an on-policy TD control algorithm:\r\n\r\n$$Q\\left(S\\_{t}, A\\_{t}\\right) \\leftarrow Q\\left(S\\_{t}, A\\_{t}\\right) + \\alpha\\left[R_{t+1} + \\gamma{Q}\\left(S\\_{t+1}, A\\_{t+1}\\right) - Q\\left(S\\_{t}, A\\_{t}\\right)\\right] $$\r\n\r\nThis update is done after every transition from a nonterminal state $S\\_{t}$. if $S\\_{t+1}$ is terminal, then $Q\\left(S\\_{t+1}, A\\_{t+1}\\right)$ is defined as zero.\r\n\r\nTo design an on-policy control algorithm using Sarsa, we estimate $q\\_{\\pi}$ for a behaviour policy $\\pi$ and then change $\\pi$ towards greediness with respect to $q\\_{\\pi}$.\r\n\r\nSource: Sutton and Barto, Reinforcement Learning, 2nd Edition",
"full_name": "Sarsa",
"introduced_year": 1994,
"main_collection": {
"area": "Reinforcement Learning",
"description": "",
"name": "On-Policy TD Control",
"parent": null
},
"name": "Sarsa",
"source_title": null,
"source_url": null
}
] |
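Following the Sarsa and Expected Sarsa entries above, a minimal sketch of the one-step Q(σ) backup described in the abstract: σ = 1 recovers the Sarsa sample target and σ = 0 the Expected Sarsa expectation. All names (`Q`, `pi`, `alpha`, `gamma`) are illustrative, not from a specific codebase.

```python
import numpy as np

# One-step Q(sigma) update: blend the sampled next action value (Sarsa)
# with the expectation over the policy (Expected Sarsa), weighted by sigma.
def q_sigma_update(Q, pi, s, a, r, s_next, a_next, sigma, alpha=0.1, gamma=0.99):
    sample_term = Q[s_next, a_next]              # Sarsa-style sample
    expect_term = np.dot(pi[s_next], Q[s_next])  # Expected-Sarsa expectation
    target = r + gamma * (sigma * sample_term + (1 - sigma) * expect_term)
    Q[s, a] += alpha * (target - Q[s, a])
    return Q
```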
https://paperswithcode.com/paper/complexity-theory-for-discrete-black-box
|
1801.02037
| null | null |
Complexity Theory for Discrete Black-Box Optimization Heuristics
|
A predominant topic in the theory of evolutionary algorithms and, more
generally, theory of randomized black-box optimization techniques is running
time analysis. Running time analysis aims at understanding the performance of a
given heuristic on a given problem by bounding the number of function
evaluations that are needed by the heuristic to identify a solution of a
desired quality. As in general algorithms theory, this running time perspective
is most useful when it is complemented by a meaningful complexity theory that
studies the limits of algorithmic solutions.
In the context of discrete black-box optimization, several black-box
complexity models have been developed to analyze the best possible performance
that a black-box optimization algorithm can achieve on a given problem. The
models differ in the classes of algorithms to which these lower bounds apply.
This way, black-box complexity contributes to a better understanding of how
certain algorithmic choices (such as the amount of memory used by a heuristic,
its selective pressure, or properties of the strategies that it uses to create
new solution candidates) influences performance.
In this chapter we review the different black-box complexity models that have
been proposed in the literature, survey the bounds that have been obtained for
these models, and discuss how the interplay of running time analysis and
black-box complexity can inspire new algorithmic solutions to well-researched
problems in evolutionary computation. We also discuss in this chapter several
interesting open questions for future work.
| null |
http://arxiv.org/abs/1801.02037v2
|
http://arxiv.org/pdf/1801.02037v2.pdf
| null |
[
"Carola Doerr"
] |
[
"Evolutionary Algorithms"
] | 2018-01-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/revisiting-adversarial-risk
|
1806.02924
| null | null |
Revisiting Adversarial Risk
|
Recent works on adversarial perturbations show that there is an inherent
trade-off between standard test accuracy and adversarial accuracy.
Specifically, they show that no classifier can simultaneously be robust to
adversarial perturbations and achieve high standard test accuracy. However,
this is contrary to the standard notion that on tasks such as image
classification, humans are robust classifiers with low error rate. In this
work, we show that the main reason behind this confusion is the inexact
definition of adversarial perturbation that is used in the literature. To fix
this issue, we propose a slight, yet important modification to the existing
definition of adversarial perturbation. Based on the modified definition, we
show that there is no trade-off between adversarial and standard accuracies;
there exist classifiers that are robust and achieve high standard accuracy. We
further study several properties of this new definition of adversarial risk and
its relation to the existing definition.
| null |
http://arxiv.org/abs/1806.02924v5
|
http://arxiv.org/pdf/1806.02924v5.pdf
| null |
[
"Arun Sai Suggala",
"Adarsh Prasad",
"Vaishnavh Nagarajan",
"Pradeep Ravikumar"
] |
[
"image-classification",
"Image Classification"
] | 2018-06-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/theory-of-parameter-control-for-discrete
|
1804.05650
| null | null |
Theory of Parameter Control for Discrete Black-Box Optimization: Provable Performance Gains Through Dynamic Parameter Choices
|
Parameter control aims at realizing performance gains through a dynamic choice of the parameters which determine the behavior of the underlying optimization algorithm. In the context of evolutionary algorithms this research line has for a long time been dominated by empirical approaches. With the significant advances in running time analysis achieved in the last ten years, the parameter control question has become accessible to theoretical investigations. A number of running time results for a broad range of different parameter control mechanisms have been obtained in recent years. This book chapter surveys these works, and puts them into context, by proposing an updated classification scheme for parameter control.
| null |
https://arxiv.org/abs/1804.05650v3
|
https://arxiv.org/pdf/1804.05650v3.pdf
| null |
[
"Benjamin Doerr",
"Carola Doerr"
] |
[
"Evolutionary Algorithms",
"General Classification"
] | 2018-04-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/learning-to-speed-up-structured-output
|
1806.04245
| null | null |
Learning to Speed Up Structured Output Prediction
|
Predicting structured outputs can be computationally onerous due to the
combinatorially large output spaces. In this paper, we focus on reducing the
prediction time of a trained black-box structured classifier without losing
accuracy. To do so, we train a speedup classifier that learns to mimic a
black-box classifier under the learning-to-search approach. As the structured
classifier predicts more examples, the speedup classifier will operate as a
learned heuristic to guide search to favorable regions of the output space. We
present a mistake bound for the speedup classifier and identify inference
situations where it can independently make correct judgments without input
features. We evaluate our method on the task of entity and relation extraction
and show that the speedup classifier outperforms even greedy search in terms of
speed without loss of accuracy.
| null |
http://arxiv.org/abs/1806.04245v1
|
http://arxiv.org/pdf/1806.04245v1.pdf
|
ICML 2018 7
|
[
"Xingyuan Pan",
"Vivek Srikumar"
] |
[
"Prediction",
"Relation Extraction"
] | 2018-06-11T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2489
|
http://proceedings.mlr.press/v80/pan18b/pan18b.pdf
|
learning-to-speed-up-structured-output-1
| null |
[] |
https://paperswithcode.com/paper/the-potential-of-the-return-distribution-for
|
1806.04242
| null | null |
The Potential of the Return Distribution for Exploration in RL
|
This paper studies the potential of the return distribution for exploration
in deterministic reinforcement learning (RL) environments. We study network
losses and propagation mechanisms for Gaussian, Categorical and Gaussian
mixture distributions. Combined with exploration policies that leverage this
return distribution, we solve, for example, a randomized Chain task of length
100, which has not been reported before when learning with neural networks.
|
This paper studies the potential of the return distribution for exploration in deterministic reinforcement learning (RL) environments.
|
http://arxiv.org/abs/1806.04242v2
|
http://arxiv.org/pdf/1806.04242v2.pdf
| null |
[
"Thomas M. Moerland",
"Joost Broekens",
"Catholijn M. Jonker"
] |
[
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/multi-turn-dialogue-response-generation-in-an
|
1805.11752
| null |
SJxzPsAqFQ
|
Multi-turn Dialogue Response Generation in an Adversarial Learning Framework
|
We propose an adversarial learning approach for generating multi-turn dialogue responses. Our proposed framework, hredGAN, is based on conditional generative adversarial networks (GANs). The GAN's generator is a modified hierarchical recurrent encoder-decoder network (HRED) and the discriminator is a word-level bidirectional RNN that shares context and word embeddings with the generator. During inference, noise samples conditioned on the dialogue history are used to perturb the generator's latent space to generate several possible responses. The final response is the one ranked best by the discriminator. The hredGAN shows improved performance over existing methods: (1) it generalizes better than networks trained using only the log-likelihood criterion, and (2) it generates longer, more informative and more diverse responses with high utterance and topic relevance even with limited training data. This improvement is demonstrated on the Movie triples and Ubuntu dialogue datasets using both automatic and human evaluations.
| null |
https://arxiv.org/abs/1805.11752v5
|
https://arxiv.org/pdf/1805.11752v5.pdf
|
WS 2019 8
|
[
"Oluwatobi Olabiyi",
"Alan Salimov",
"Anish Khazane",
"Erik T. Mueller"
] |
[
"Decoder",
"Response Generation",
"Word Embeddings"
] | 2018-05-30T00:00:00 |
https://aclanthology.org/W19-4114
|
https://aclanthology.org/W19-4114.pdf
|
multi-turn-dialogue-response-generation-in-an-2
| null |
[] |
https://paperswithcode.com/paper/lecture-notes-on-fair-division
|
1806.04234
| null | null |
Lecture Notes on Fair Division
|
Fair division is the problem of dividing one or several goods amongst two or
more agents in a way that satisfies a suitable fairness criterion. These Notes
provide a succinct introduction to the field. We cover three main topics.
First, we need to define what is to be understood by a "fair" allocation of
goods to individuals. We present an overview of the most important fairness
criteria (as well as the closely related criteria for economic efficiency)
developed in the literature, together with a short discussion of their
axiomatic foundations. Second, we give an introduction to cake-cutting
procedures as an example of methods for fairly dividing a single divisible
resource amongst a group of individuals. Third, we discuss the combinatorial
optimisation problem of fairly allocating a set of indivisible goods to a group
of agents, covering both centralised algorithms (similar to auctions) and a
distributed approach based on negotiation.
While the classical literature on fair division has largely developed within
Economics, these Notes are specifically written for readers with a background
in Computer Science or similar, and who may be (or may wish to be) engaged in
research in Artificial Intelligence, Multiagent Systems, or Computational
Social Choice. References for further reading, as well as a small number of
exercises, are included.
Notes prepared for a tutorial at the 11th European Agent Systems Summer
School (EASSS-2009), Torino, Italy, 31 August and 1 September 2009. Updated for
a tutorial at the COST-ADT Doctoral School on Computational Social Choice,
Estoril, Portugal, 9--14 April 2010.
| null |
http://arxiv.org/abs/1806.04234v1
|
http://arxiv.org/pdf/1806.04234v1.pdf
| null |
[
"Ulle Endriss"
] |
[
"Fairness"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/physical-representation-based-predicate
|
1806.04226
| null | null |
Physical Representation-based Predicate Optimization for a Visual Analytics Database
|
Querying the content of images, video, and other non-textual data sources
requires expensive content extraction methods. Modern extraction techniques are
based on deep convolutional neural networks (CNNs) and can classify objects
within images with astounding accuracy. Unfortunately, these methods are slow:
processing a single image can take about 10 milliseconds on modern GPU-based
hardware. As massive video libraries become ubiquitous, running a content-based
query over millions of video frames is prohibitive.
One promising approach to reduce the runtime cost of queries of visual
content is to use a hierarchical model, such as a cascade, where simple cases
are handled by an inexpensive classifier. Prior work has sought to design
cascades that optimize the computational cost of inference by, for example,
using smaller CNNs. However, we observe that there are critical factors besides
the inference time that dramatically impact the overall query time. Notably, by
treating the physical representation of the input image as part of our query
optimization---that is, by including image transforms, such as resolution
scaling or color-depth reduction, within the cascade---we can optimize data
handling costs and enable drastically more efficient classifier cascades.
In this paper, we propose Tahoma, which generates and evaluates many
potential classifier cascades that jointly optimize the CNN architecture and
input data representation. Our experiments on a subset of ImageNet show that
Tahoma's input transformations speed up cascades by up to 35 times. We also
find up to a 98x speedup over the ResNet50 classifier with no loss in accuracy,
and a 280x speedup if some accuracy is sacrificed.
| null |
http://arxiv.org/abs/1806.04226v3
|
http://arxiv.org/pdf/1806.04226v3.pdf
| null |
[
"Michael R. Anderson",
"Michael Cafarella",
"German Ros",
"Thomas F. Wenisch"
] |
[
"GPU"
] | 2018-06-11T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
}
] |
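A toy sketch of the cascade idea described in the Tahoma abstract, with the physical representation of the input (here, simple downsampling) treated as part of the cascade. `cheap`, `expensive`, and `downsample` are hypothetical callables for illustration, not Tahoma's API, and the single threshold stands in for the paper's jointly optimized multi-stage cascades.

```python
import numpy as np

# Toy two-stage cascade: a cheap classifier on a reduced-resolution input
# answers confident cases; only uncertain inputs fall through to the
# expensive full-resolution model.
def cascade_predict(cheap, expensive, downsample, image, threshold=0.9):
    probs = cheap(downsample(image))         # fast path on the transformed input
    if np.max(probs) >= threshold:
        return int(np.argmax(probs))
    return int(np.argmax(expensive(image)))  # accurate fallback on the full image
```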
https://paperswithcode.com/paper/neuronet-fast-and-robust-reproduction-of
|
1806.04224
| null | null |
NeuroNet: Fast and Robust Reproduction of Multiple Brain Image Segmentation Pipelines
|
NeuroNet is a deep convolutional neural network mimicking multiple popular
and state-of-the-art brain segmentation tools including FSL, SPM, and MALPEM.
The network is trained on 5,000 T1-weighted brain MRI scans from the UK Biobank
Imaging Study that have been automatically segmented into brain tissue and
cortical and sub-cortical structures using the standard neuroimaging pipelines.
Training a single model from these complementary and partially overlapping
label maps yields a new powerful "all-in-one", multi-output segmentation tool.
The processing time for a single subject is reduced by an order of magnitude
compared to running each individual software package. We demonstrate very good
reproducibility of the original outputs while increasing robustness to
variations in the input data. We believe NeuroNet could be an important tool in
large-scale population imaging studies and serve as a new standard in
neuroscience by reducing the risk of introducing bias when choosing a specific
software package.
|
NeuroNet is a deep convolutional neural network mimicking multiple popular and state-of-the-art brain segmentation tools including FSL, SPM, and MALPEM.
|
http://arxiv.org/abs/1806.04224v1
|
http://arxiv.org/pdf/1806.04224v1.pdf
| null |
[
"Martin Rajchl",
"Nick Pawlowski",
"Daniel Rueckert",
"Paul M. Matthews",
"Ben Glocker"
] |
[
"Brain Image Segmentation",
"Brain Segmentation",
"Image Segmentation",
"Segmentation",
"Semantic Segmentation"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/collaborative-human-ai-chai-evidence-based
|
1805.12234
| null | null |
Collaborative Human-AI (CHAI): Evidence-Based Interpretable Melanoma Classification in Dermoscopic Images
|
Automated dermoscopic image analysis has witnessed rapid growth in diagnostic
performance. Yet adoption faces resistance, in part, because no evidence is
provided to support decisions. In this work, an approach for evidence-based
classification is presented. A feature embedding is learned with CNNs,
triplet-loss, and global average pooling, and used to classify via kNN search.
Evidence is provided as both the discovered neighbors, as well as localized
image regions most relevant to measuring distance between query and neighbors.
To ensure that results are relevant in terms of both label accuracy and human
visual similarity for any skill level, a novel hierarchical triplet logic is
implemented to jointly learn an embedding according to disease labels and
non-expert similarity. Results are improved over baselines trained on disease
labels alone, as well as standard multiclass loss. Quantitative relevance of
results, according to non-expert similarity, as well as localized image
regions, are also significantly improved.
|
Quantitative relevance of results, according to non-expert similarity, as well as localized image regions, are also significantly improved.
|
http://arxiv.org/abs/1805.12234v3
|
http://arxiv.org/pdf/1805.12234v3.pdf
| null |
[
"Noel C. F. Codella",
"Chung-Ching Lin",
"Allan Halpern",
"Michael Hind",
"Rogerio Feris",
"John R. Smith"
] |
[
"Diagnostic",
"General Classification",
"Triplet"
] | 2018-05-30T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/multi-agent-path-finding-with-deadlines
|
1806.04216
| null | null |
Multi-Agent Path Finding with Deadlines
|
We formalize Multi-Agent Path Finding with Deadlines (MAPF-DL). The objective
is to maximize the number of agents that can reach their given goal vertices
from their given start vertices within the deadline, without colliding with
each other. We first show that MAPF-DL is NP-hard to solve optimally. We then
present two classes of optimal algorithms, one based on a reduction of MAPF-DL
to a flow problem and a subsequent compact integer linear programming
formulation of the resulting reduced abstracted multi-commodity flow network
and the other one based on novel combinatorial search algorithms. Our empirical
results demonstrate that these MAPF-DL solvers scale well and each one
dominates the others in different scenarios.
| null |
http://arxiv.org/abs/1806.04216v1
|
http://arxiv.org/pdf/1806.04216v1.pdf
| null |
[
"Hang Ma",
"Glenn Wagner",
"Ariel Felner",
"Jiaoyang Li",
"T. K. Satish Kumar",
"Sven Koenig"
] |
[
"Multi-Agent Path Finding"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/universality-of-the-stochastic-block-model
|
1806.04214
| null | null |
Universality of the stochastic block model
|
Mesoscopic pattern extraction (MPE) is the problem of finding a partition of
the nodes of a complex network that maximizes some objective function. Many
well-known network inference problems fall in this category, including, for
instance, community detection, core-periphery identification, and imperfect
graph coloring. In this paper, we show that the most popular algorithms
designed to solve MPE problems can in fact be understood as special cases of
the maximum likelihood formulation of the stochastic block model (SBM), or one
of its direct generalizations. These equivalence relations show that the SBM is
nearly universal with respect to MPE problems.
| null |
http://arxiv.org/abs/1806.04214v2
|
http://arxiv.org/pdf/1806.04214v2.pdf
| null |
[
"Jean-Gabriel Young",
"Guillaume St-Onge",
"Patrick Desrosiers",
"Louis J. Dubé"
] |
[
"Community Detection",
"model",
"Stochastic Block Model"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/matching-with-text-data-an-experimental
|
1801.00644
| null | null |
Matching with Text Data: An Experimental Evaluation of Methods for Matching Documents and of Measuring Match Quality
|
Matching for causal inference is a well-studied problem, but standard methods
fail when the units to match are text documents: the high-dimensional and rich
nature of the data renders exact matching infeasible, causes propensity scores
to produce incomparable matches, and makes assessing match quality difficult.
In this paper, we characterize a framework for matching text documents that
decomposes existing methods into: (1) the choice of text representation, and
(2) the choice of distance metric. We investigate how different choices within
this framework affect both the quantity and quality of matches identified
through a systematic multifactor evaluation experiment using human subjects.
Altogether we evaluate over 100 unique text matching methods along with 5
comparison methods taken from the literature. Our experimental results identify
methods that generate matches with higher subjective match quality than current
state-of-the-art techniques. We enhance the precision of these results by
developing a predictive model to estimate the match quality of pairs of text
documents as a function of our various distance scores. This model, which we
find successfully mimics human judgment, also allows for approximate and
unsupervised evaluation of new procedures. We then employ the identified best
method to illustrate the utility of text matching in two applications. First,
we engage with a substantive debate in the study of media bias by using text
matching to control for topic selection when comparing news articles from
thirteen news sources. We then show how conditioning on text data leads to more
precise causal inferences in an observational study examining the effects of a
medical intervention.
|
We enhance the precision of these results by developing a predictive model to estimate the match quality of pairs of text documents as a function of our various distance scores.
|
http://arxiv.org/abs/1801.00644v7
|
http://arxiv.org/pdf/1801.00644v7.pdf
| null |
[
"Reagan Mozer",
"Luke Miratrix",
"Aaron Russell Kaufman",
"L. Jason Anastasopoulos"
] |
[
"Articles",
"Causal Inference",
"Text Matching"
] | 2018-01-02T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "Causal inference is the process of drawing a conclusion about a causal connection based on the conditions of the occurrence of an effect. The main difference between causal inference and inference of association is that the former analyzes the response of the effect variable when the cause is changed.",
"full_name": "Causal inference",
"introduced_year": 2000,
"main_collection": null,
"name": "Causal inference",
"source_title": null,
"source_url": null
}
] |
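A minimal sketch of the (representation, distance) decomposition described in the abstract above, using TF-IDF and cosine distance as one concrete choice within the framework. The greedy nearest-neighbour pairing is an illustrative stand-in for the matching procedures the paper actually evaluates.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances

# One point in the framework: representation = TF-IDF, distance = cosine.
# Each treated document is paired with its nearest control document.
def match_documents(treated_docs, control_docs):
    vec = TfidfVectorizer().fit(treated_docs + control_docs)
    dist = cosine_distances(vec.transform(treated_docs),
                            vec.transform(control_docs))
    return dist.argmin(axis=1)  # index of the closest control per treated doc
```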
https://paperswithcode.com/paper/how-curiosity-can-be-modeled-for-a-clickbait
|
1806.04212
| null | null |
How Curiosity can be modeled for a Clickbait Detector
|
The impact of continually evolving digital technologies and the proliferation
of communications and content has now been widely acknowledged to be central to
understanding our world. What is less acknowledged is that this is based on the
successful arousing of curiosity both at the collective and individual levels.
Advertisers, communication professionals and news editors are in constant
competition to capture the attention of a digital population that is perennially
shifty and distracted. This paper tries to understand how curiosity works in the
digital world by attempting the first work on quantifying human
curiosity, drawing on various theories from the humanities and social
sciences. Curious communication pushes people to spot, read and click the
message from their social feed or any other form of online presentation. Our
approach focuses on measuring the strength of the stimulus to generate reader
curiosity by using unsupervised and supervised machine learning algorithms, but
is also informed by philosophical, psychological, neural and cognitive studies
on this topic. Manually annotated news headlines - clickbaits - have been
selected for the study, which are known to have drawn huge reader response. A
binary classifier was developed based on human curiosity (unlike the work done
so far using words and other linguistic features). Our classifier shows an
accuracy of 97%. This work is part of research in computational humanities
on digital politics, quantifying the emotions of curiosity and outrage on
digital media.
| null |
http://arxiv.org/abs/1806.04212v1
|
http://arxiv.org/pdf/1806.04212v1.pdf
| null |
[
"Lasya Venneti",
"Aniket Alam"
] |
[] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/temporal-difference-variational-auto-encoder
|
1806.03107
| null |
S1x4ghC9tQ
|
Temporal Difference Variational Auto-Encoder
|
To act and plan in complex environments, we posit that agents should have a
mental simulator of the world with three characteristics: (a) it should build
an abstract state representing the condition of the world; (b) it should form a
belief which represents uncertainty on the world; (c) it should go beyond
simple step-by-step simulation, and exhibit temporal abstraction. Motivated by
the absence of a model satisfying all these requirements, we propose TD-VAE, a
generative sequence model that learns representations containing explicit
beliefs about states several steps into the future, and that can be rolled out
directly without single-step transitions. TD-VAE is trained on pairs of
temporally separated time points, using an analogue of temporal difference
learning used in reinforcement learning.
|
To act and plan in complex environments, we posit that agents should have a mental simulator of the world with three characteristics: (a) it should build an abstract state representing the condition of the world; (b) it should form a belief which represents uncertainty on the world; (c) it should go beyond simple step-by-step simulation, and exhibit temporal abstraction.
|
http://arxiv.org/abs/1806.03107v3
|
http://arxiv.org/pdf/1806.03107v3.pdf
|
ICLR 2019 5
|
[
"Karol Gregor",
"George Papamakarios",
"Frederic Besse",
"Lars Buesing",
"Theophane Weber"
] |
[
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-06-08T00:00:00 |
https://openreview.net/forum?id=S1x4ghC9tQ
|
https://openreview.net/pdf?id=S1x4ghC9tQ
|
temporal-difference-variational-auto-encoder-1
| null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**TD-VAE**, or **Temporal Difference VAE**, is a generative sequence model that learns representations containing explicit beliefs about states several steps into the future, and that can be rolled out directly without single-step transitions. TD-VAE is trained on pairs of temporally separated time points, using an analogue of [temporal difference learning](https://paperswithcode.com/method/td-lambda) used in reinforcement learning.",
"full_name": "TD-VAE",
"introduced_year": 2000,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Generative Sequence Models",
"parent": null
},
"name": "TD-VAE",
"source_title": "Temporal Difference Variational Auto-Encoder",
"source_url": "http://arxiv.org/abs/1806.03107v3"
}
] |
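For reference, the sigmoid and tanh activations listed in the method entries above, written out directly in NumPy; these follow exactly the formulas given in those descriptions.

```python
import numpy as np

# The two activations from the method entries above, as plain functions.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))  # == np.tanh(x)
```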
https://paperswithcode.com/paper/in-ictu-oculi-exposing-ai-generated-fake-face
|
1806.02877
| null | null |
In Ictu Oculi: Exposing AI Generated Fake Face Videos by Detecting Eye Blinking
|
The new developments in deep generative networks have significantly improved
the quality and efficiency of generating realistic-looking fake face
videos. In this work, we describe a new method to expose fake face videos
generated with neural networks. Our method is based on detection of eye
blinking in the videos, which is a physiological signal that is not well
presented in synthesized fake videos. Our method is tested over benchmarks
of eye-blinking detection datasets and also shows promising performance on
detecting videos generated with DeepFake.
|
The new developments in deep generative networks have significantly improved the quality and efficiency of generating realistic-looking fake face videos.
|
http://arxiv.org/abs/1806.02877v2
|
http://arxiv.org/pdf/1806.02877v2.pdf
| null |
[
"Yuezun Li",
"Ming-Ching Chang",
"Siwei Lyu"
] |
[
"Face Swapping"
] | 2018-06-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/swarming-for-faster-convergence-in-stochastic
|
1806.04207
| null | null |
Swarming for Faster Convergence in Stochastic Optimization
|
We study a distributed framework for stochastic optimization which is
inspired by models of collective motion found in nature (e.g., swarming) with
mild communication requirements. Specifically, we analyze a scheme in which
each one of $N > 1$ independent threads, implements in a distributed and
unsynchronized fashion, a stochastic gradient-descent algorithm which is
perturbed by a swarming potential. Assuming the overhead caused by
synchronization is not negligible, we show the swarming-based approach exhibits
better performance than a centralized algorithm (based upon the average of $N$
observations) in terms of (real-time) convergence speed. We also derive an
error bound that is monotone decreasing in network size and connectivity. We
characterize the scheme's finite-time performances for both convex and
non-convex objective functions.
| null |
http://arxiv.org/abs/1806.04207v2
|
http://arxiv.org/pdf/1806.04207v2.pdf
| null |
[
"Shi Pu",
"Alfredo Garcia"
] |
[
"Stochastic Optimization"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
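A toy sketch of the scheme the abstract above describes: N threads run stochastic gradient descent, each perturbed by an attraction toward the other iterates (the swarming potential). The serial loop stands in for genuinely unsynchronized threads, `stoch_grad` is a hypothetical noisy-gradient oracle, and all constants are illustrative.

```python
import numpy as np

# Toy swarming SGD: at each tick a random thread takes a stochastic gradient
# step plus a pull toward the swarm's mean iterate.
def swarm_sgd(stoch_grad, x0, n_threads=8, steps=2000, lr=0.01, attract=0.1):
    X = np.tile(np.asarray(x0, dtype=float), (n_threads, 1))
    for _ in range(steps):
        i = np.random.randint(n_threads)           # unsynchronized updates
        pull = attract * (X.mean(axis=0) - X[i])   # swarming potential
        X[i] += lr * (pull - stoch_grad(X[i]))
    return X.mean(axis=0)
```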
https://paperswithcode.com/paper/a-note-about-local-explanation-methods-for
|
1806.04205
| null | null |
A Note about: Local Explanation Methods for Deep Neural Networks lack Sensitivity to Parameter Values
|
Local explanation methods, also known as attribution methods, attribute a
deep network's prediction to its input (cf. Baehrens et al. (2010)). We respond
to the claim from Adebayo et al. (2018) that local explanation methods lack
sensitivity, i.e., DNNs with randomly-initialized weights produce explanations
that are both visually and quantitatively similar to those produced by DNNs
with learned weights.
Further investigation reveals that their findings are due to two choices in
their analysis: (a) ignoring the signs of the attributions; and (b) for
integrated gradients (IG), including pixels in their analysis that have zero
attributions by choice of the baseline (an auxiliary input relative to which
the attributions are computed). When both factors are accounted for, IG
attributions for a random network and the actual network are uncorrelated. Our
investigation also sheds light on how these issues affect visualizations,
although we note that more work is needed to understand how viewers interpret
the difference between the random and the actual attributions.
| null |
http://arxiv.org/abs/1806.04205v1
|
http://arxiv.org/pdf/1806.04205v1.pdf
| null |
[
"Mukund Sundararajan",
"Ankur Taly"
] |
[
"Attribute",
"Sensitivity"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/calibrating-noise-to-variance-in-adaptive
|
1712.07196
| null | null |
Calibrating Noise to Variance in Adaptive Data Analysis
|
Datasets are often used multiple times and each successive analysis may
depend on the outcome of previous analyses. Standard techniques for ensuring
generalization and statistical validity do not account for this adaptive
dependence. A recent line of work studies the challenges that arise from such
adaptive data reuse by considering the problem of answering a sequence of
"queries" about the data distribution where each query may depend arbitrarily
on answers to previous queries.
The strongest results obtained for this problem rely on differential privacy
-- a strong notion of algorithmic stability with the important property that it
"composes" well when data is reused. However the notion is rather strict, as it
requires stability under replacement of an arbitrary data element. The simplest
algorithm is to add Gaussian (or Laplace) noise to distort the empirical
answers. However, analysing this technique using differential privacy yields
suboptimal accuracy guarantees when the queries have low variance. Here we
propose a relaxed notion of stability that also composes adaptively. We
demonstrate that a simple and natural algorithm based on adding noise scaled to
the standard deviation of the query provides our notion of stability. This
implies an algorithm that can answer statistical queries about the dataset with
substantially improved accuracy guarantees for low-variance queries. The only
previous approach that provides such accuracy guarantees is based on a more
involved differentially private median-of-means algorithm and its analysis
exploits stronger "group" stability of the algorithm.
| null |
http://arxiv.org/abs/1712.07196v2
|
http://arxiv.org/pdf/1712.07196v2.pdf
| null |
[
"Vitaly Feldman",
"Thomas Steinke"
] |
[] | 2017-12-19T00:00:00 | null | null | null | null |
[] |
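A minimal sketch of the mechanism the abstract above describes: answer a mean query with Gaussian noise scaled to the query's empirical standard deviation rather than its worst-case range. The scale constant is illustrative and this snippet carries none of the paper's formal adaptive-stability guarantees.

```python
import numpy as np

# Noise calibrated to variance: low-variance queries get proportionally
# less distortion than with worst-case (range-scaled) Gaussian/Laplace noise.
def noisy_mean(values, scale=0.5):
    values = np.asarray(values, dtype=float)
    n = len(values)
    sd = values.std(ddof=1)
    return values.mean() + np.random.normal(0.0, scale * sd / np.sqrt(n))
```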
https://paperswithcode.com/paper/degree-based-classification-of-harmful-speech
|
1806.04197
| null | null |
Degree based Classification of Harmful Speech using Twitter Data
|
Harmful speech takes various forms and has been plaguing social media in
different ways. To crack down on the different degrees of hate speech and
abusive behavior within it, the classification needs to be based on complex
ramifications that must be defined and held accountable for, beyond labels
such as racist, sexist, or targeting a particular group or community. This paper
primarily describes how we created an ontological classification of harmful
speech based on the degree of hateful intent, and used it to annotate Twitter data
accordingly. The key contribution of this paper is the new dataset of tweets we
created based on ontological classes and degrees of harmful speech found in the
text. We also propose a supervised classification system for recognizing these
harmful speech classes in text.
| null |
http://arxiv.org/abs/1806.04197v1
|
http://arxiv.org/pdf/1806.04197v1.pdf
|
COLING 2018 8
|
[
"Sanjana Sharma",
"Saksham Agrawal",
"Manish Shrivastava"
] |
[
"Classification",
"General Classification"
] | 2018-06-11T00:00:00 |
https://aclanthology.org/W18-4413
|
https://aclanthology.org/W18-4413.pdf
|
degree-based-classification-of-harmful-speech-1
| null |
[] |
https://paperswithcode.com/paper/enhancing-human-color-vision-by-breaking
|
1703.04392
| null | null |
Enhancing human color vision by breaking binocular redundancy
|
To see color, the human visual system combines the response of three types of
cone cells in the retina--a compressive process that discards a significant
amount of spectral information. Here, we present an approach to enhance human
color vision by breaking its inherent binocular redundancy, providing different
spectral content to each eye. We fabricated a set of optical filters that
"splits" the response of the short-wavelength cone between the two eyes in
individuals with typical trichromatic vision, simulating the presence of
approximately four distinct cone types ("tetrachromacy"). Such an increase in
the number of effective cone types can reduce the prevalence of metamers--pairs
of distinct spectra that resolve to the same tristimulus values. This technique
may result in an enhancement of spectral perception, with applications ranging
from camouflage detection and anti-counterfeiting to new types of artwork and
data visualization.
| null |
http://arxiv.org/abs/1703.04392v3
|
http://arxiv.org/pdf/1703.04392v3.pdf
| null |
[
"Bradley S. Gundlach",
"Michel Frising",
"Alireza Shahsafi",
"Gregory Vershbow",
"Chenghao Wan",
"Jad Salman",
"Bas Rokers",
"Laurent Lessard",
"Mikhail A. Kats"
] |
[
"Data Visualization"
] | 2017-03-02T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/navigating-with-graph-representations-for
|
1806.04189
| null | null |
Navigating with Graph Representations for Fast and Scalable Decoding of Neural Language Models
|
Neural language models (NLMs) have recently gained a renewed interest by
achieving state-of-the-art performance across many natural language processing
(NLP) tasks. However, NLMs are very computationally demanding largely due to
the computational cost of the softmax layer over a large vocabulary. We observe
that, in decoding for many NLP tasks, only the probabilities of the top-K
hypotheses need to be calculated precisely, and K is often much smaller than
the vocabulary size. This paper proposes a novel softmax layer approximation
algorithm, called Fast Graph Decoder (FGD), which quickly identifies, for a
given context, a set of K words that are most likely to occur according to an
NLM. We demonstrate that FGD reduces the decoding time by an order of magnitude
while attaining close to the full softmax baseline accuracy on neural machine
translation and language modeling tasks. We also prove the theoretical
guarantee on the softmax approximation quality.
| null |
http://arxiv.org/abs/1806.04189v1
|
http://arxiv.org/pdf/1806.04189v1.pdf
|
NeurIPS 2018 12
|
[
"Minjia Zhang",
"Xiaodong Liu",
"Wenhan Wang",
"Jianfeng Gao",
"Yuxiong He"
] |
[
"Decoder",
"Language Modeling",
"Language Modelling",
"Machine Translation",
"Translation"
] | 2018-06-11T00:00:00 |
http://papers.nips.cc/paper/7868-navigating-with-graph-representations-for-fast-and-scalable-decoding-of-neural-language-models
|
http://papers.nips.cc/paper/7868-navigating-with-graph-representations-for-fast-and-scalable-decoding-of-neural-language-models.pdf
|
navigating-with-graph-representations-for-1
| null |
[
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
}
] |
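For contrast with FGD, a sketch of the exact full-softmax top-K computation that the paper aims to avoid: every word in the vocabulary is scored before the top K are kept. The names here are illustrative, not the paper's interface.

```python
import numpy as np

# Exact top-K over a full softmax: the |V| dot products in the first line
# are the decoding bottleneck that FGD's graph-based search approximates away.
def top_k_softmax(hidden, W, b, k):
    logits = W @ hidden + b                  # one dot product per vocab word
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    top = np.argpartition(-probs, k)[:k]     # unordered top-K indices
    top = top[np.argsort(-probs[top])]       # sort the K survivors
    return top, probs[top]
```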
https://paperswithcode.com/paper/a-corpus-with-multi-level-annotations-of
|
1806.04185
| null | null |
A Corpus with Multi-Level Annotations of Patients, Interventions and Outcomes to Support Language Processing for Medical Literature
|
We present a corpus of 5,000 richly annotated abstracts of medical articles
describing clinical randomized controlled trials. Annotations include
demarcations of text spans that describe the Patient population enrolled, the
Interventions studied and to what they were Compared, and the Outcomes measured
(the `PICO' elements). These spans are further annotated at a more granular
level, e.g., individual interventions within them are marked and mapped onto a
structured medical vocabulary. We acquired annotations from a diverse set of
workers with varying levels of expertise and cost. We describe our data
collection process and the corpus itself in detail. We then outline a set of
challenging NLP tasks that would aid searching of the medical literature and
the practice of evidence-based medicine.
|
We present a corpus of 5,000 richly annotated abstracts of medical articles describing clinical randomized controlled trials.
|
http://arxiv.org/abs/1806.04185v1
|
http://arxiv.org/pdf/1806.04185v1.pdf
|
ACL 2018 7
|
[
"Benjamin Nye",
"Junyi Jessy Li",
"Roma Patel",
"Yinfei Yang",
"Iain J. Marshall",
"Ani Nenkova",
"Byron C. Wallace"
] |
[
"Articles",
"Participant Intervention Comparison Outcome Extraction",
"PICO"
] | 2018-06-11T00:00:00 |
https://aclanthology.org/P18-1019
|
https://aclanthology.org/P18-1019.pdf
|
a-corpus-with-multi-level-annotations-of-1
| null |
[] |
https://paperswithcode.com/paper/synthetic-depth-of-field-with-a-single-camera
|
1806.04171
| null | null |
Synthetic Depth-of-Field with a Single-Camera Mobile Phone
|
Shallow depth-of-field is commonly used by photographers to isolate a subject
from a distracting background. However, standard cell phone cameras cannot
produce such images optically, as their short focal lengths and small apertures
capture nearly all-in-focus images. We present a system to computationally
synthesize shallow depth-of-field images with a single mobile camera and a
single button press. If the image is of a person, we use a person segmentation
network to separate the person and their accessories from the background. If
available, we also use dense dual-pixel auto-focus hardware, effectively a
2-sample light field with an approximately 1 millimeter baseline, to compute a
dense depth map. These two signals are combined and used to render a defocused
image. Our system can process a 5.4 megapixel image in 4 seconds on a mobile
phone, is fully automatic, and is robust enough to be used by non-experts. The
modular nature of our system allows it to degrade naturally in the absence of a
dual-pixel sensor or a human subject.
|
Shallow depth-of-field is commonly used by photographers to isolate a subject from a distracting background.
|
http://arxiv.org/abs/1806.04171v1
|
http://arxiv.org/pdf/1806.04171v1.pdf
| null |
[
"Neal Wadhwa",
"Rahul Garg",
"David E. Jacobs",
"Bryan E. Feldman",
"Nori Kanazawa",
"Robert Carroll",
"Yair Movshovitz-Attias",
"Jonathan T. Barron",
"Yael Pritch",
"Marc Levoy"
] |
[] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/defense-against-the-dark-arts-an-overview-of
|
1806.04169
| null | null |
Defense Against the Dark Arts: An overview of adversarial example security research and future research directions
|
This article presents a summary of a keynote lecture at the Deep Learning
Security workshop at IEEE Security and Privacy 2018. This lecture summarizes
the state of the art in defenses against adversarial examples and provides
recommendations for future research directions on this topic.
| null |
http://arxiv.org/abs/1806.04169v1
|
http://arxiv.org/pdf/1806.04169v1.pdf
| null |
[
"Ian Goodfellow"
] |
[
"Deep Learning"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/straight-to-the-tree-constituency-parsing
|
1806.04168
| null | null |
Straight to the Tree: Constituency Parsing with Neural Syntactic Distance
|
In this work, we propose a novel constituency parsing scheme. The model
predicts a vector of real-valued scalars, named syntactic distances, for each
split position in the input sentence. The syntactic distances specify the order
in which the split points will be selected, recursively partitioning the input,
in a top-down fashion. Compared to traditional shift-reduce parsing schemes,
our approach is free from the potential problem of compounding errors, while
being faster and easier to parallelize. Our model achieves competitive
performance amongst single model, discriminative parsers in the PTB dataset and
outperforms previous models in the CTB dataset.
|
In this work, we propose a novel constituency parsing scheme.
|
http://arxiv.org/abs/1806.04168v1
|
http://arxiv.org/pdf/1806.04168v1.pdf
|
ACL 2018 7
|
[
"Yikang Shen",
"Zhouhan Lin",
"Athul Paul Jacob",
"Alessandro Sordoni",
"Aaron Courville",
"Yoshua Bengio"
] |
[
"Constituency Parsing",
"Position",
"Sentence"
] | 2018-06-11T00:00:00 |
https://aclanthology.org/P18-1108
|
https://aclanthology.org/P18-1108.pdf
|
straight-to-the-tree-constituency-parsing-1
| null |
[] |
https://paperswithcode.com/paper/learning-an-approximate-model-predictive
|
1806.04167
| null | null |
Learning an Approximate Model Predictive Controller with Guarantees
|
A supervised learning framework is proposed to approximate a model predictive
controller (MPC) with reduced computational complexity and guarantees on
stability and constraint satisfaction. The framework can be used for a wide
class of nonlinear systems. Any standard supervised learning technique (e.g.
neural networks) can be employed to approximate the MPC from samples. In order
to obtain closed-loop guarantees for the learned MPC, a robust MPC design is
combined with statistical learning bounds. The MPC design ensures robustness to
inaccurate inputs within given bounds, and Hoeffding's Inequality is used to
validate that the learned MPC satisfies these bounds with high confidence. The
result is a closed-loop statistical guarantee on stability and constraint
satisfaction for the learned MPC. The proposed learning-based MPC framework is
illustrated on a nonlinear benchmark problem, for which we learn a neural
network controller with guarantees.
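For reference, a standard one-sided form of Hoeffding's Inequality suggests how such a statistical validation step can be sized; the paper's exact criterion may differ, and the symbols below are ours.

```latex
% One-sided Hoeffding bound (notation ours). Let X_1,\dots,X_n be i.i.d.
% indicators that a sampled input satisfies the robustness bound, with
% unknown probability p and empirical mean \hat{p}_n. Then
\[
  \Pr\left( p \le \hat{p}_n - \varepsilon \right) \le e^{-2 n \varepsilon^2},
\]
% so certifying p \ge \hat{p}_n - \varepsilon with confidence 1 - \delta
% requires
\[
  n \ \ge\ \frac{\ln(1/\delta)}{2\varepsilon^2},
  \qquad \text{e.g. } \varepsilon = \delta = 0.01 \;\Rightarrow\; n \ge 23{,}026.
\]
```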
| null |
http://arxiv.org/abs/1806.04167v1
|
http://arxiv.org/pdf/1806.04167v1.pdf
| null |
[
"Michael Hertneck",
"Johannes Köhler",
"Sebastian Trimpe",
"Frank Allgöwer"
] |
[
"model"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/gesture-based-bootstrapping-for-egocentric
|
1612.02889
| null | null |
Gesture-based Bootstrapping for Egocentric Hand Segmentation
|
Accurately identifying hands in images is a key sub-task for human activity
understanding with wearable first-person point-of-view cameras. Traditional
hand segmentation approaches rely on a large corpus of manually labeled data to
generate robust hand detectors. However, these approaches still face challenges
as the appearance of the hand varies greatly across users, tasks, environments
or illumination conditions. A key observation in the case of many wearable
applications and interfaces is that it is only necessary to accurately detect

the user's hands in a specific situational context. Based on this observation,
we introduce an interactive approach to learn a person-specific hand
segmentation model that does not require any manually labeled training data.
Our approach proceeds in two steps, an interactive bootstrapping step for
identifying moving hand regions, followed by learning a personalized user
specific hand appearance model. Concretely, our approach uses two convolutional
neural networks: (1) a gesture network that uses pre-defined motion information
to detect the hand region; and (2) an appearance network that learns a person
specific model of the hand region based on the output of the gesture network.
During training, to make the appearance network robust to errors in the gesture
network, the loss function of the former network incorporates the confidence of
the gesture network while learning. Experiments demonstrate the robustness of
our approach with an F1 score over 0.8 on all challenging datasets across a
wide range of illumination and hand appearance variations, improving over a
baseline approach by over 10%.
| null |
http://arxiv.org/abs/1612.02889v2
|
http://arxiv.org/pdf/1612.02889v2.pdf
| null |
[
"Yubo Zhang",
"Vishnu Naresh Boddeti",
"Kris M. Kitani"
] |
[
"Hand Segmentation"
] | 2016-12-09T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/learning-to-decompose-and-disentangle
|
1806.04166
| null | null |
Learning to Decompose and Disentangle Representations for Video Prediction
|
Our goal is to predict future video frames given a sequence of input frames.
Despite large amounts of video data, this remains a challenging task because of
the high-dimensionality of video frames. We address this challenge by proposing
the Decompositional Disentangled Predictive Auto-Encoder (DDPAE), a framework
that combines structured probabilistic models and deep networks to
automatically (i) decompose the high-dimensional video that we aim to predict
into components, and (ii) disentangle each component to have low-dimensional
temporal dynamics that are easier to predict. Crucially, with an appropriately
specified generative model of video frames, our DDPAE is able to learn both the
latent decomposition and disentanglement without explicit supervision. For the
Moving MNIST dataset, we show that DDPAE is able to recover the underlying
components (individual digits) and disentanglement (appearance and location) as
we would intuitively do. We further demonstrate that DDPAE can be applied to
the Bouncing Balls dataset involving complex interactions between multiple
objects to predict the video frame directly from the pixels and recover
physical states without explicit supervision.
|
Our goal is to predict future video frames given a sequence of input frames.
|
http://arxiv.org/abs/1806.04166v2
|
http://arxiv.org/pdf/1806.04166v2.pdf
|
NeurIPS 2018 12
|
[
"Jun-Ting Hsieh",
"Bingbin Liu",
"De-An Huang",
"Li Fei-Fei",
"Juan Carlos Niebles"
] |
[
"Disentanglement",
"Predict Future Video Frames",
"Video Prediction"
] | 2018-06-11T00:00:00 |
http://papers.nips.cc/paper/7333-learning-to-decompose-and-disentangle-representations-for-video-prediction
|
http://papers.nips.cc/paper/7333-learning-to-decompose-and-disentangle-representations-for-video-prediction.pdf
|
learning-to-decompose-and-disentangle-1
| null |
[] |
https://paperswithcode.com/paper/roto-translation-covariant-convolutional
|
1804.03393
| null | null |
Roto-Translation Covariant Convolutional Networks for Medical Image Analysis
|
We propose a framework for rotation and translation covariant deep learning
using $SE(2)$ group convolutions. The group product of the special Euclidean
motion group $SE(2)$ describes how a concatenation of two roto-translations
results in a net roto-translation. We encode this geometric structure into
convolutional neural networks (CNNs) via $SE(2)$ group convolutional layers,
which fit into the standard 2D CNN framework, and which allow the network to
deal generically with rotated input samples without the need for data augmentation.
We introduce three layers: a lifting layer which lifts a 2D (vector valued)
image to an $SE(2)$-image, i.e., 3D (vector valued) data whose domain is
$SE(2)$; a group convolution layer from and to an $SE(2)$-image; and a
projection layer from an $SE(2)$-image to a 2D image. The lifting and group
convolution layers are $SE(2)$ covariant (the output roto-translates with the
input). The final projection layer, a maximum intensity projection over
rotations, makes the full CNN rotation invariant.
We show with three different problems in histopathology, retinal imaging, and
electron microscopy that with the proposed group CNNs, state-of-the-art
performance can be achieved, without the need for data augmentation by rotation
and with increased performance compared to standard CNNs that do rely on
augmentation.
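As a rough illustration of the lifting layer described above, the sketch below (our own simplification, not the authors' implementation) correlates an image with rotated copies of a kernel to produce an SE(2)-image with an explicit orientation axis.

```python
# A rough sketch of an SE(2) lifting layer (our simplification, not the
# authors' implementation): correlate the image with rotated copies of a
# kernel, producing a 3D "SE(2)-image" with an explicit orientation axis.
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import correlate2d

def se2_lifting(image, kernel, n_rotations=8):
    out = []
    for k in range(n_rotations):
        theta = 360.0 * k / n_rotations
        rk = rotate(kernel, theta, reshape=False, order=1)  # rotated kernel
        out.append(correlate2d(image, rk, mode="same"))
    return np.stack(out)  # shape: (n_rotations, H, W)

# Rotating the input image (approximately) rolls the orientation axis of
# the output -- the covariance property described in the abstract.
```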
|
We propose a framework for rotation and translation covariant deep learning using $SE(2)$ group convolutions.
|
http://arxiv.org/abs/1804.03393v3
|
http://arxiv.org/pdf/1804.03393v3.pdf
| null |
[
"Erik J. Bekkers",
"Maxime W. Lafarge",
"Mitko Veta",
"Koen AJ Eppenhof",
"Josien PW Pluim",
"Remco Duits"
] |
[
"Data Augmentation",
"Medical Image Analysis",
"Translation"
] | 2018-04-10T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/finding-syntax-in-human-encephalography-with
|
1806.04127
| null | null |
Finding Syntax in Human Encephalography with Beam Search
|
Recurrent neural network grammars (RNNGs) are generative models of
(tree,string) pairs that rely on neural networks to evaluate derivational
choices. Parsing with them using beam search yields a variety of incremental
complexity metrics such as word surprisal and parser action count. When used as
regressors against human electrophysiological responses to naturalistic text,
they derive two amplitude effects: an early peak and a P600-like later peak. By
contrast, a non-syntactic neural language model yields no reliable effects.
Model comparisons attribute the early peak to syntactic composition within the
RNNG. This pattern of results recommends the RNNG+beam search combination as a
mechanistic model of the syntactic processing that occurs during normal human
language comprehension.
| null |
http://arxiv.org/abs/1806.04127v1
|
http://arxiv.org/pdf/1806.04127v1.pdf
|
ACL 2018 7
|
[
"John Hale",
"Chris Dyer",
"Adhiguna Kuncoro",
"Jonathan R. Brennan"
] |
[
"Attribute",
"Language Modeling",
"Language Modelling"
] | 2018-06-11T00:00:00 |
https://aclanthology.org/P18-1254
|
https://aclanthology.org/P18-1254.pdf
|
finding-syntax-in-human-encephalography-with-1
| null |
[] |
https://paperswithcode.com/paper/evaluating-robustness-of-neural-networks-with
|
1711.07356
| null |
HyGIdiRqtm
|
Evaluating Robustness of Neural Networks with Mixed Integer Programming
|
Neural networks have demonstrated considerable success on a wide variety of
real-world problems. However, networks trained only to optimize for training
accuracy can often be fooled by adversarial examples - slightly perturbed
inputs that are misclassified with high confidence. Verification of networks
enables us to gauge their vulnerability to such adversarial examples. We
formulate verification of piecewise-linear neural networks as a mixed integer
program. On a representative task of finding minimum adversarial distortions,
our verifier is two to three orders of magnitude quicker than the
state-of-the-art. We achieve this computational speedup via tight formulations
for non-linearities, as well as a novel presolve algorithm that makes full use
of all information available. The computational speedup allows us to verify
properties on convolutional networks with an order of magnitude more ReLUs than
networks previously verified by any complete verifier. In particular, we
determine for the first time the exact adversarial accuracy of an MNIST
classifier to perturbations with bounded $l_\infty$ norm $\epsilon=0.1$: for
this classifier, we find an adversarial example for 4.38% of samples, and a
certificate of robustness (to perturbations with bounded norm) for the
remainder. Across all robust training procedures and network architectures
considered, we are able to certify more samples than the state-of-the-art and
find more adversarial examples than a strong first-order attack.
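For context, piecewise-linear verification of this kind typically encodes each ReLU with a binary variable and pre-activation bounds; the following is the standard encoding (notation ours), whose tightness depends on the bounds a presolve step computes.

```latex
% Standard MIP encoding of one ReLU y = max(x, 0) with pre-activation
% bounds l \le x \le u, l < 0 < u (notation ours):
\[
\begin{aligned}
  & y \ge x, \qquad y \ge 0,\\
  & y \le x - l\,(1 - z), \qquad y \le u\,z,\\
  & z \in \{0, 1\}.
\end{aligned}
\]
% z = 1 forces y = x (active unit); z = 0 forces y = 0 (inactive unit).
% Tighter bounds (l, u) give tighter relaxations, which is where the
% presolve algorithm mentioned above pays off.
```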
|
The computational speedup allows us to verify properties on convolutional networks with an order of magnitude more ReLUs than networks previously verified by any complete verifier.
|
http://arxiv.org/abs/1711.07356v3
|
http://arxiv.org/pdf/1711.07356v3.pdf
|
ICLR 2019 5
|
[
"Vincent Tjeng",
"Kai Xiao",
"Russ Tedrake"
] |
[] | 2017-11-20T00:00:00 |
https://openreview.net/forum?id=HyGIdiRqtm
|
https://openreview.net/pdf?id=HyGIdiRqtm
|
evaluating-robustness-of-neural-networks-with-1
| null |
[] |
https://paperswithcode.com/paper/constructing-datasets-for-multi-hop-reading
|
1710.06481
| null | null |
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
|
Most Reading Comprehension methods limit themselves to queries which can be
answered using a single sentence, paragraph, or document. Enabling models to
combine disjoint pieces of textual evidence would extend the scope of machine
comprehension methods, but currently there exist no resources to train and test
this capability. We propose a novel task to encourage the development of models
for text understanding across multiple documents and to investigate the limits
of existing methods. In our task, a model learns to seek and combine evidence -
effectively performing multi-hop (alias multi-step) inference. We devise a
methodology to produce datasets for this task, given a collection of
query-answer pairs and thematically linked documents. Two datasets from
different domains are induced, and we identify potential pitfalls and devise
circumvention strategies. We evaluate two previously proposed competitive
models and find that one can integrate information across documents. However,
both models struggle to select relevant information, as providing documents
guaranteed to be relevant greatly improves their performance. While the models
outperform several strong baselines, their best accuracy reaches 42.9% compared
to human performance at 74.0% - leaving ample room for improvement.
| null |
http://arxiv.org/abs/1710.06481v2
|
http://arxiv.org/pdf/1710.06481v2.pdf
|
TACL 2018 1
|
[
"Johannes Welbl",
"Pontus Stenetorp",
"Sebastian Riedel"
] |
[
"Multi-Hop Reading Comprehension",
"Reading Comprehension",
"Sentence"
] | 2017-10-17T00:00:00 |
https://aclanthology.org/Q18-1021
|
https://aclanthology.org/Q18-1021.pdf
|
constructing-datasets-for-multi-hop-reading-1
| null |
[] |
https://paperswithcode.com/paper/deep-convolutional-neural-networks-for-brain
|
1712.03747
| null | null |
Deep convolutional neural networks for brain image analysis on magnetic resonance imaging: a review
|
In recent years, deep convolutional neural networks (CNNs) have shown
record-shattering performance in a variety of computer vision problems, such as
visual object recognition, detection and segmentation. These methods have also
been utilised in medical image analysis domain for lesion segmentation,
anatomical segmentation and classification. We present an extensive literature
review of CNN techniques applied in brain magnetic resonance imaging (MRI)
analysis, focusing on the architectures, pre-processing, data-preparation and
post-processing strategies available in these works. The aim of this study is
three-fold. Our primary goal is to report how different CNN architectures have
evolved, discuss state-of-the-art strategies, condense their results obtained
using public datasets and examine their pros and cons. Second, this paper is
intended to be a detailed reference of the research activity in deep CNN for
brain MRI analysis. Finally, we present a perspective on the future of CNNs in
which we hint at some of the research directions for subsequent years.
| null |
http://arxiv.org/abs/1712.03747v3
|
http://arxiv.org/pdf/1712.03747v3.pdf
| null |
[
"Jose Bernal",
"Kaisar Kushibar",
"Daniel S. Asfaw",
"Sergi Valverde",
"Arnau Oliver",
"Robert Martí",
"Xavier Lladó"
] |
[
"Lesion Segmentation",
"Medical Image Analysis",
"Object Recognition",
"Segmentation"
] | 2017-12-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/atomo-communication-efficient-learning-via
|
1806.04090
| null | null |
ATOMO: Communication-efficient Learning via Atomic Sparsification
|
Distributed model training suffers from communication overheads due to
frequent gradient updates transmitted between compute nodes. To mitigate these
overheads, several studies propose the use of sparsified stochastic gradients.
We argue that these are facets of a general sparsification method that can
operate on any possible atomic decomposition. Notable examples include
element-wise, singular value, and Fourier decompositions. We present ATOMO, a
general framework for atomic sparsification of stochastic gradients. Given a
gradient, an atomic decomposition, and a sparsity budget, ATOMO gives a random
unbiased sparsification of the atoms minimizing variance. We show that recent
methods such as QSGD and TernGrad are special cases of ATOMO and that
sparsifying the singular value decomposition of neural network gradients,
rather than their coordinates, can lead to significantly faster distributed
training.
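As a hedged illustration of atomic sparsification, the sketch below implements an unbiased element-wise variant: each atom is kept with probability roughly proportional to its magnitude and rescaled by the inverse probability. ATOMO itself solves for variance-optimal probabilities and also handles SVD and Fourier atoms; the function and names below are ours.

```python
# A hedged sketch of unbiased atomic sparsification for the element-wise
# decomposition (names ours; ATOMO solves for the variance-optimal
# probabilities exactly and covers other atomic decompositions).
import numpy as np

def sparsify_unbiased(coords, budget):
    """Keep atom i with probability p_i = min(1, budget*|c_i|/sum|c|) and
    rescale kept atoms by 1/p_i, so that E[output] == coords (unbiased)."""
    p = np.minimum(1.0, budget * np.abs(coords) / np.abs(coords).sum())
    keep = np.random.rand(coords.size) < p
    out = np.zeros_like(coords)
    out[keep] = coords[keep] / p[keep]
    return out  # at most ~`budget` nonzeros in expectation

grad = np.array([0.5, -0.1, 2.0, 0.02])
print(sparsify_unbiased(grad, budget=2.0))  # sparse, unbiased surrogate of grad
```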
|
We present ATOMO, a general framework for atomic sparsification of stochastic gradients.
|
http://arxiv.org/abs/1806.04090v3
|
http://arxiv.org/pdf/1806.04090v3.pdf
|
NeurIPS 2018 12
|
[
"Hongyi Wang",
"Scott Sievert",
"Zachary Charles",
"Shengchao Liu",
"Stephen Wright",
"Dimitris Papailiopoulos"
] |
[] | 2018-06-11T00:00:00 |
http://papers.nips.cc/paper/8191-atomo-communication-efficient-learning-via-atomic-sparsification
|
http://papers.nips.cc/paper/8191-atomo-communication-efficient-learning-via-atomic-sparsification.pdf
|
atomo-communication-efficient-learning-via-1
| null |
[] |
https://paperswithcode.com/paper/the-research-of-the-real-time-detection-and
|
1806.04070
| null | null |
The Research of the Real-time Detection and Recognition of Targets in Streetscape Videos
|
This study proposes a method for the real-time detection and recognition of
targets in streetscape videos. The proposed method is based on separation
confidence computation and scale synthesis optimization. We use the proposed
method to detect and recognize targets in streetscape videos with high frame
rates and high definition. Furthermore, we experimentally demonstrate that the
accuracy and robustness of our proposed method are superior to those of
conventional methods.
| null |
http://arxiv.org/abs/1806.04070v1
|
http://arxiv.org/pdf/1806.04070v1.pdf
| null |
[
"Liu Jian-min"
] |
[] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-co-matching-model-for-multi-choice-reading
|
1806.04068
| null | null |
A Co-Matching Model for Multi-choice Reading Comprehension
|
Multi-choice reading comprehension is a challenging task, which involves the
matching between a passage and a question-answer pair. This paper proposes a
new co-matching approach to this problem, which jointly models whether a
passage can match both a question and a candidate answer. Experimental results
on the RACE dataset demonstrate that our approach achieves state-of-the-art
performance.
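A minimal sketch of the co-matching idea follows (shapes and operations are illustrative; the published model adds learned projections and a hierarchical LSTM on top): the passage is matched against the question and the candidate answer jointly, and both matching representations are concatenated.

```python
# A minimal numpy sketch of co-matching (illustrative, not the paper's model):
# attend from the passage over the question and over the candidate answer,
# compare, and concatenate both matching representations.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def match(passage, other):
    att = softmax(passage @ other.T)   # (P, O) attention weights
    aligned = att @ other              # (P, d) aligned representation
    return np.concatenate([passage - aligned, passage * aligned], axis=-1)

P, Q, A, d = 7, 5, 3, 4
passage, question, answer = (np.random.randn(n, d) for n in (P, Q, A))
co_matching = np.concatenate([match(passage, question),
                              match(passage, answer)], axis=-1)  # (P, 4*d)
```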
|
Multi-choice reading comprehension is a challenging task, which involves the matching between a passage and a question-answer pair.
|
http://arxiv.org/abs/1806.04068v1
|
http://arxiv.org/pdf/1806.04068v1.pdf
|
ACL 2018 7
|
[
"Shuohang Wang",
"Mo Yu",
"Shiyu Chang",
"Jing Jiang"
] |
[
"Reading Comprehension"
] | 2018-06-11T00:00:00 |
https://aclanthology.org/P18-2118
|
https://aclanthology.org/P18-2118.pdf
|
a-co-matching-model-for-multi-choice-reading-1
| null |
[] |
https://paperswithcode.com/paper/adaptive-mechanism-design-learning-to-promote
|
1806.04067
| null | null |
Adaptive Mechanism Design: Learning to Promote Cooperation
|
In the future, artificial learning agents are likely to become increasingly widespread in our society. They will interact with both other learning agents and humans in a variety of complex settings including social dilemmas. We consider the problem of how an external agent can promote cooperation between artificial learners by distributing additional rewards and punishments based on observing the learners' actions. We propose a rule for automatically learning how to create the right incentives by considering the players' anticipated parameter updates. Using this learning rule leads to cooperation with high social welfare in matrix games in which the agents would otherwise learn to defect with high probability. We show that the resulting cooperative outcome is stable in certain games even if the planning agent is turned off after a given number of episodes, while other games require ongoing intervention to maintain mutual cooperation. However, even in the latter case, the amount of necessary additional incentives decreases over time.
|
In the future, artificial learning agents are likely to become increasingly widespread in our society.
|
https://arxiv.org/abs/1806.04067v2
|
https://arxiv.org/pdf/1806.04067v2.pdf
| null |
[
"Tobias Baumann",
"Thore Graepel",
"John Shawe-Taylor"
] |
[] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/joint-learning-of-motion-estimation-and
|
1806.04066
| null | null |
Joint Learning of Motion Estimation and Segmentation for Cardiac MR Image Sequences
|
Cardiac motion estimation and segmentation play important roles in
quantitatively assessing cardiac function and diagnosing cardiovascular
diseases. In this paper, we propose a novel deep learning method for joint
estimation of motion and segmentation from cardiac MR image sequences. The
proposed network consists of two branches: a cardiac motion estimation branch
which is built on a novel unsupervised Siamese style recurrent spatial
transformer network, and a cardiac segmentation branch that is based on a fully
convolutional network. In particular, a joint multi-scale feature encoder is
learned by optimizing the segmentation branch and the motion estimation branch
simultaneously. This enables weakly-supervised segmentation by taking
advantage of features learned without supervision in the motion estimation
branch from a large amount of unannotated data. Experimental results using
cardiac MRI images from 220 subjects show that the joint learning of both tasks
is complementary and the proposed models outperform the competing methods
significantly in terms of accuracy and speed.
|
Cardiac motion estimation and segmentation play important roles in quantitatively assessing cardiac function and diagnosing cardiovascular diseases.
|
http://arxiv.org/abs/1806.04066v1
|
http://arxiv.org/pdf/1806.04066v1.pdf
| null |
[
"Chen Qin",
"Wenjia Bai",
"Jo Schlemper",
"Steffen E. Petersen",
"Stefan K. Piechnik",
"Stefan Neubauer",
"Daniel Rueckert"
] |
[
"Cardiac Segmentation",
"Motion Estimation",
"Segmentation",
"Weakly supervised segmentation"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/which-training-methods-for-gans-do-actually
|
1801.04406
| null | null |
Which Training Methods for GANs do actually Converge?
|
Recent work has shown local convergence of GAN training for absolutely
continuous data and generator distributions. In this paper, we show that the
requirement of absolute continuity is necessary: we describe a simple yet
prototypical counterexample showing that in the more realistic case of
distributions that are not absolutely continuous, unregularized GAN training is
not always convergent. Furthermore, we discuss regularization strategies that
were recently proposed to stabilize GAN training. Our analysis shows that GAN
training with instance noise or zero-centered gradient penalties converges. On
the other hand, we show that Wasserstein-GANs and WGAN-GP with a finite number
of discriminator updates per generator update do not always converge to the
equilibrium point. We discuss these results, leading us to a new explanation
for the stability problems of GAN training. Based on our analysis, we extend
our convergence results to more general GANs and prove local convergence for
simplified gradient penalties even if the generator and data distribution lie
on lower dimensional manifolds. We find these penalties to work well in
practice and use them to learn high-resolution generative image models for a
variety of datasets with little hyperparameter tuning.
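The zero-centered gradient penalty on real data analyzed in this paper is short to express; below is a hedged PyTorch-style sketch (our own, assuming a discriminator `D` that maps a batch of inputs to scalar logits).

```python
# A hedged PyTorch sketch of the zero-centered gradient penalty on real
# data (R1) whose convergence this paper analyzes; `D` and `gamma` are
# illustrative names, not the authors' code.
import torch

def r1_penalty(D, real, gamma=10.0):
    real = real.detach().requires_grad_(True)
    (grad,) = torch.autograd.grad(D(real).sum(), real, create_graph=True)
    # gamma/2 * E[ ||grad_x D(x)||^2 ] over the real batch.
    return (gamma / 2.0) * grad.pow(2).flatten(1).sum(1).mean()
```

Added to the usual discriminator loss at each step, this penalizes any nonzero gradient of the discriminator on real samples, which is what rules out the oscillatory behavior discussed above.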
|
In this paper, we show that the requirement of absolute continuity is necessary: we describe a simple yet prototypical counterexample showing that in the more realistic case of distributions that are not absolutely continuous, unregularized GAN training is not always convergent.
|
http://arxiv.org/abs/1801.04406v4
|
http://arxiv.org/pdf/1801.04406v4.pdf
|
ICML 2018 7
|
[
"Lars Mescheder",
"Andreas Geiger",
"Sebastian Nowozin"
] |
[] | 2018-01-13T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1900
|
http://proceedings.mlr.press/v80/mescheder18a/mescheder18a.pdf
|
which-training-methods-for-gans-do-actually-1
| null |
[
{
"code_snippet_url": "https://github.com/ChristophReich1996/Dirac-GAN/blob/decb8283d919640057c50ff5a1ba01b93ed86332/dirac_gan/loss.py#L292",
"description": "**R_INLINE_MATH_1 Regularization** is a regularization technique and gradient penalty for training [generative adversarial networks](https://paperswithcode.com/methods/category/generative-adversarial-networks). It penalizes the discriminator from deviating from the Nash Equilibrium via penalizing the gradient on real data alone: when the generator distribution produces the true data distribution and the discriminator is equal to 0 on the data manifold, the gradient penalty ensures that the discriminator cannot create a non-zero gradient orthogonal to the data manifold without suffering a loss in the [GAN](https://paperswithcode.com/method/gan) game.\r\n\r\nThis leads to the following regularization term:\r\n\r\n$$ R\\_{1}\\left(\\psi\\right) = \\frac{\\gamma}{2}E\\_{p\\_{D}\\left(x\\right)}\\left[||\\nabla{D\\_{\\psi}\\left(x\\right)}||^{2}\\right] $$",
"full_name": "R1 Regularization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "R1 Regularization",
"source_title": "Which Training Methods for GANs do actually Converge?",
"source_url": "http://arxiv.org/abs/1801.04406v4"
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "In today’s digital age, Dogecoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're trying to recover a lost Dogecoin wallet, knowing where to get help is essential. That’s why the Dogecoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Dogecoin Customer Support Number +1-833-534-1729\r\nDogecoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Dogecoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Dogecoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Dogecoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Dogecoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Dogecoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Dogecoin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Dogecoin Deposit Not Received\r\nIf someone has sent you Dogecoin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Dogecoin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Dogecoin Transaction Stuck or Pending\r\nSometimes your Dogecoin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Dogecoin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Dogecoin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Dogecoin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Dogecoin tech.\r\n\r\n24/7 Availability: Dogecoin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Dogecoin Support and Wallet Issues\r\nQ1: Can Dogecoin support help me recover stolen BTC?\r\nA: While Dogecoin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Dogecoin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Dogecoin’s official number (Dogecoin is decentralized), it connects you to trained professionals experienced in resolving all major Dogecoin issues.\r\n\r\nFinal Thoughts\r\nDogecoin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Dogecoin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Dogecoin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Dogecoin Customer Service Number +1-833-534-1729",
"source_title": "Generative Adversarial Networks",
"source_url": "https://arxiv.org/abs/1406.2661v1"
}
] |
https://paperswithcode.com/paper/efficient-model-based-deep-reinforcement
|
1802.04325
| null | null |
Efficient Model-Based Deep Reinforcement Learning with Variational State Tabulation
|
Modern reinforcement learning algorithms reach super-human performance on
many board and video games, but they are sample inefficient, i.e. they
typically require significantly more playing experience than humans to reach an
equal performance level. To improve sample efficiency, an agent may build a
model of the environment and use planning methods to update its policy. In this
article we introduce Variational State Tabulation (VaST), which maps an
environment with a high-dimensional state space (e.g. the space of visual
inputs) to an abstract tabular model. Prioritized sweeping with small backups,
a highly efficient planning method, can then be used to update state-action
values. We show how VaST can rapidly learn to maximize reward in tasks like 3D
navigation and efficiently adapt to sudden changes in rewards or transition
probabilities.
|
Modern reinforcement learning algorithms reach super-human performance on many board and video games, but they are sample inefficient, i.e., they typically require significantly more playing experience than humans to reach an equal performance level.
|
http://arxiv.org/abs/1802.04325v2
|
http://arxiv.org/pdf/1802.04325v2.pdf
|
ICML 2018 7
|
[
"Dane Corneil",
"Wulfram Gerstner",
"Johanni Brea"
] |
[
"Deep Reinforcement Learning",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-02-12T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2242
|
http://proceedings.mlr.press/v80/corneil18a/corneil18a.pdf
|
efficient-model-based-deep-reinforcement-1
| null |
[
{
"code_snippet_url": null,
"description": "**Prioritized Sweeping** is a reinforcement learning technique for model-based algorithms that prioritizes updates according to a measure of urgency, and performs these updates first. A queue is maintained of every state-action pair whose estimated value would change nontrivially if updated, prioritized by the size of the change. When the top pair in the queue is updated, the effect on each of its predecessor pairs is computed. If the effect is greater than some threshold, then the pair is inserted in the queue with the new priority.\r\n\r\nSource: Sutton and Barto, Reinforcement Learning, 2nd Edition",
"full_name": "Prioritized Sweeping",
"introduced_year": 2000,
"main_collection": {
"area": "Reinforcement Learning",
"description": "",
"name": "Efficient Planning",
"parent": null
},
"name": "Prioritized Sweeping",
"source_title": null,
"source_url": null
}
] |
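To make the prioritized-sweeping entry above concrete, here is a small self-contained toy (the names and the toy model are ours; VaST additionally uses the more refined "small backups" variant): updates are popped from a priority queue ordered by the size of the expected value change, and predecessors are re-queued when their urgency grows.

```python
# An illustrative, self-contained toy of prioritized sweeping on a
# deterministic tabular model (names and toy MDP are ours, not VaST's).
import heapq

ACTIONS = (0, 1)
GAMMA, ALPHA, THETA = 0.9, 0.5, 1e-3

# Toy model: (state, action) -> (reward, next_state).
model = {(s, a): (1.0 if s == 2 else 0.0, min(s + a, 2))
         for s in range(3) for a in ACTIONS}
# Predecessors of each state, needed to propagate value changes backwards.
preds = {t: {sa for sa, (_, t2) in model.items() if t2 == t} for t in range(3)}
Q = {sa: 0.0 for sa in model}

def priority(sa):
    r, t = model[sa]
    return abs(r + GAMMA * max(Q[(t, b)] for b in ACTIONS) - Q[sa])

pq = [(-priority(sa), sa) for sa in model if priority(sa) > THETA]
heapq.heapify(pq)
while pq:
    _, sa = heapq.heappop(pq)          # most urgent state-action pair first
    r, t = model[sa]
    Q[sa] += ALPHA * (r + GAMMA * max(Q[(t, b)] for b in ACTIONS) - Q[sa])
    for p in preds[sa[0]]:             # re-queue predecessors whose urgency grew
        if priority(p) > THETA:
            heapq.heappush(pq, (-priority(p), p))
print(Q)
```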
https://paperswithcode.com/paper/ct-realistic-lung-nodule-simulation-from-3d
|
1806.04051
| null | null |
CT-Realistic Lung Nodule Simulation from 3D Conditional Generative Adversarial Networks for Robust Lung Segmentation
|
Data availability plays a critical role for the performance of deep learning
systems. This challenge is especially acute within the medical image domain,
particularly when pathologies are involved, due to two factors: 1) limited
number of cases, and 2) large variations in location, scale, and appearance. In
this work, we investigate whether augmenting a dataset with artificially
generated lung nodules can improve the robustness of the progressive
holistically nested network (P-HNN) model for pathological lung segmentation of
CT scans. To achieve this goal, we develop a 3D generative adversarial network
(GAN) that effectively learns lung nodule property distributions in 3D space.
In order to embed the nodules within their background context, we condition the
GAN based on a volume of interest whose central part containing the nodule has
been erased. To further improve realism and blending with the background, we
propose a novel multi-mask reconstruction loss. We train our method on over
1000 nodules from the LIDC dataset. Qualitative results demonstrate the
effectiveness of our method compared to the state-of-the-art. We then use our GAN
to generate simulated training images where nodules lie on the lung border,
which are cases where the published P-HNN model struggles. Qualitative and
quantitative results demonstrate that armed with these simulated images, the
P-HNN model learns to better segment lung regions under these challenging
situations. As a result, our system provides a promising means to help overcome
the data paucity that commonly afflicts medical imaging.
| null |
http://arxiv.org/abs/1806.04051v1
|
http://arxiv.org/pdf/1806.04051v1.pdf
| null |
[
"Dakai Jin",
"Ziyue Xu",
"You-Bao Tang",
"Adam P. Harrison",
"Daniel J. Mollura"
] |
[
"Generative Adversarial Network"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/adaptive-denoising-of-signals-with-shift
|
1806.04028
| null | null |
Adaptive Denoising of Signals with Local Shift-Invariant Structure
|
We discuss the problem of adaptive discrete-time signal denoising in the situation where the signal to be recovered admits a "linear oracle" -- an unknown linear estimate that takes the form of convolution of observations with a time-invariant filter. It was shown by Juditsky and Nemirovski (2009) that when the $\ell_2$-norm of the oracle filter is small enough, such oracle can be "mimicked" by an efficiently computable adaptive estimate of the same structure with an observation-driven filter. The filter in question was obtained as a solution to the optimization problem in which the $\ell_\infty$-norm of the Discrete Fourier Transform (DFT) of the estimation residual is minimized under constraint on the $\ell_1$-norm of the filter DFT. In this paper, we discuss a new family of adaptive estimates which rely upon minimizing the $\ell_2$-norm of the estimation residual. We show that such estimators possess better statistical properties than those based on $\ell_\infty$-fit; in particular, we prove oracle inequalities for their $\ell_2$-loss and improved bounds for $\ell_2$- and pointwise losses. The oracle inequalities rely on the "approximate shift-invariance" assumption stating that the signal to be recovered is close to an (unknown) shift-invariant subspace. We also study the relationship of the approximate shift-invariance assumption with the "signal simplicity" assumption introduced in Juditsky and Nemirovski (2009) and discuss the application of the proposed approach to harmonic oscillations denoising.
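Schematically, the two estimators contrasted in this abstract can be written as follows (notation ours: $y$ the observations, $\varphi$ the filter, $F$ the DFT, $*$ convolution, $\rho$ a constraint radius); the paper's precise definitions differ in details such as one-sided filtering and normalization.

```latex
% Schematic contrast of the two estimator families (notation ours).
% Uniform-fit estimator of Juditsky--Nemirovski (2009):
\[
  \widehat{\varphi}_\infty \in \operatorname*{arg\,min}_{\varphi}
  \Big\{ \, \| F (y - y * \varphi) \|_\infty \;:\; \| F \varphi \|_1 \le \rho \, \Big\},
\]
% least-squares family studied in this paper:
\[
  \widehat{\varphi}_2 \in \operatorname*{arg\,min}_{\varphi}
  \Big\{ \, \| y - y * \varphi \|_2 \;:\; \| F \varphi \|_1 \le \rho \, \Big\}.
\]
```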
| null |
https://arxiv.org/abs/1806.04028v2
|
https://arxiv.org/pdf/1806.04028v2.pdf
| null |
[
"Zaid Harchaoui",
"Anatoli Juditsky",
"Arkadi Nemirovski",
"Dmitrii Ostrovskii"
] |
[
"Denoising"
] | 2018-06-11T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/baselines-and-a-datasheet-for-the-cerema-awp
|
1806.04016
| null | null |
Baselines and a datasheet for the Cerema AWP dataset
|
This paper presents the recently published Cerema AWP (Adverse Weather
Pedestrian) dataset for various machine learning tasks and its exports in a
machine-learning-friendly format. We explain why this dataset is
interesting (mainly because it is a highly controlled and fully annotated
image dataset) and present baseline results for various tasks. Moreover, we
decided to follow the very recent suggestion of datasheets for datasets, trying
to standardize all the available information about the dataset, with a
transparency objective.
| null |
http://arxiv.org/abs/1806.04016v1
|
http://arxiv.org/pdf/1806.04016v1.pdf
| null |
[
"Ismaïla Seck",
"Khouloud Dahmane",
"Pierre Duthon",
"Gaëlle Loosli"
] |
[
"BIG-bench Machine Learning"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/turning-your-weakness-into-a-strength
|
1802.04633
| null | null |
Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring
|
Deep Neural Networks have recently gained lots of success after enabling
several breakthroughs in notoriously challenging problems. Training these
networks is computationally expensive and requires vast amounts of training
data. Selling such pre-trained models can, therefore, be a lucrative business
model. Unfortunately, once the models are sold they can be easily copied and
redistributed. To avoid this, a tracking mechanism to identify models as the
intellectual property of a particular vendor is necessary.
In this work, we present an approach for watermarking Deep Neural Networks in
a black-box way. Our scheme works for general classification tasks and can
easily be combined with current learning algorithms. We show experimentally
that such a watermark has no noticeable impact on the primary task that the
model is designed for and evaluate the robustness of our proposal against a
multitude of practical attacks. Moreover, we provide a theoretical analysis,
relating our approach to previous work on backdooring.
|
Unfortunately, once the models are sold they can be easily copied and redistributed.
|
http://arxiv.org/abs/1802.04633v3
|
http://arxiv.org/pdf/1802.04633v3.pdf
| null |
[
"Yossi Adi",
"Carsten Baum",
"Moustapha Cisse",
"Benny Pinkas",
"Joseph Keshet"
] |
[
"General Classification"
] | 2018-02-13T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/relational-inductive-biases-deep-learning-and
|
1806.01261
| null | null |
Relational inductive biases, deep learning, and graph networks
|
Artificial intelligence (AI) has undergone a renaissance recently, making
major progress in key domains such as vision, language, control, and
decision-making. This has been due, in part, to cheap data and cheap compute
resources, which have fit the natural strengths of deep learning. However, many
defining characteristics of human intelligence, which developed under much
different pressures, remain out of reach for current approaches. In particular,
generalizing beyond one's experiences--a hallmark of human intelligence from
infancy--remains a formidable challenge for modern AI.
The following is part position paper, part review, and part unification. We
argue that combinatorial generalization must be a top priority for AI to
achieve human-like abilities, and that structured representations and
computations are key to realizing this objective. Just as biology uses nature
and nurture cooperatively, we reject the false choice between
"hand-engineering" and "end-to-end" learning, and instead advocate for an
approach which benefits from their complementary strengths. We explore how
using relational inductive biases within deep learning architectures can
facilitate learning about entities, relations, and rules for composing them. We
present a new building block for the AI toolkit with a strong relational
inductive bias--the graph network--which generalizes and extends various
approaches for neural networks that operate on graphs, and provides a
straightforward interface for manipulating structured knowledge and producing
structured behaviors. We discuss how graph networks can support relational
reasoning and combinatorial generalization, laying the foundation for more
sophisticated, interpretable, and flexible patterns of reasoning. As a
companion to this paper, we have released an open-source software library for
building graph networks, with demonstrations of how to use them in practice.
|
As a companion to this paper, we have released an open-source software library for building graph networks, with demonstrations of how to use them in practice.
|
http://arxiv.org/abs/1806.01261v3
|
http://arxiv.org/pdf/1806.01261v3.pdf
| null |
[
"Peter W. Battaglia",
"Jessica B. Hamrick",
"Victor Bapst",
"Alvaro Sanchez-Gonzalez",
"Vinicius Zambaldi",
"Mateusz Malinowski",
"Andrea Tacchetti",
"David Raposo",
"Adam Santoro",
"Ryan Faulkner",
"Caglar Gulcehre",
"Francis Song",
"Andrew Ballard",
"Justin Gilmer",
"George Dahl",
"Ashish Vaswani",
"Kelsey Allen",
"Charles Nash",
"Victoria Langston",
"Chris Dyer",
"Nicolas Heess",
"Daan Wierstra",
"Pushmeet Kohli",
"Matt Botvinick",
"Oriol Vinyals",
"Yujia Li",
"Razvan Pascanu"
] |
[
"Decision Making",
"Deep Learning",
"Inductive Bias",
"Relational Reasoning"
] | 2018-06-04T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/prosody-modifications-for-question-answering
|
1806.03957
| null | null |
Prosody Modifications for Question-Answering in Voice-Only Settings
|
Many popular form factors of digital assistants---such as Amazon Echo, Apple HomePod, or Google Home---enable the user to hold a conversation with these systems based only on the speech modality. The lack of a screen presents unique challenges. To satisfy the information need of a user, the presentation of the answer needs to be optimized for such voice-only interactions. In this paper, we propose a task of evaluating the usefulness of audio transformations (i.e., prosodic modifications) for voice-only question answering. We introduce a crowdsourcing setup where we evaluate the quality of our proposed modifications along multiple dimensions corresponding to the informativeness, naturalness, and ability of the user to identify key parts of the answer. We offer a set of prosodic modifications that highlight potentially important parts of the answer using various acoustic cues. Our experiments show that some of these prosodic modifications lead to better comprehension at the expense of only slightly degraded naturalness of the audio.
|
Many popular form factors of digital assistants---such as Amazon Echo, Apple HomePod, or Google Home---enable the user to hold a conversation with these systems based only on the speech modality.
|
https://arxiv.org/abs/1806.03957v4
|
https://arxiv.org/pdf/1806.03957v4.pdf
| null |
[
"Aleksandr Chuklin",
"Aliaksei Severyn",
"Johanne Trippas",
"Enrique Alfonseca",
"Hanna Silen",
"Damiano Spina"
] |
[
"Informativeness",
"Question Answering"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/psgan-a-generative-adversarial-network-for
|
1805.03371
| null | null |
PSGAN: A Generative Adversarial Network for Remote Sensing Image Pan-Sharpening
|
This paper addresses the problem of remote sensing image pan-sharpening from the perspective of generative adversarial learning. We propose a novel deep neural network based method named PSGAN. To the best of our knowledge, this is one of the first attempts at producing high-quality pan-sharpened images with GANs. The PSGAN consists of two components: a generative network (i.e., generator) and a discriminative network (i.e., discriminator). The generator is designed to accept panchromatic (PAN) and multispectral (MS) images as inputs and maps them to the desired high-resolution (HR) MS images, while the discriminator implements the adversarial training strategy for generating higher fidelity pan-sharpened images. In this paper, we evaluate several architectures and designs, namely two-stream input, stacking input, batch normalization layer, and attention mechanism to find the optimal solution for pan-sharpening. Extensive experiments on QuickBird, GaoFen-2, and WorldView-2 satellite images demonstrate that the proposed PSGANs not only are effective in generating high-quality HR MS images, superior to state-of-the-art methods, but also generalize well to full-scale images.
|
This paper addresses the problem of remote sensing image pan-sharpening from the perspective of generative adversarial learning.
|
https://arxiv.org/abs/1805.03371v4
|
https://arxiv.org/pdf/1805.03371v4.pdf
| null |
[
"Qingjie Liu",
"Huanyu Zhou",
"Qizhi Xu",
"Xiangyu Liu",
"Yunhong Wang"
] |
[
"Generative Adversarial Network"
] | 2018-05-09T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/the-numerics-of-gans
|
1705.10461
| null | null |
The Numerics of GANs
|
In this paper, we analyze the numerics of common algorithms for training
Generative Adversarial Networks (GANs). Using the formalism of smooth
two-player games we analyze the associated gradient vector field of GAN
training objectives. Our findings suggest that the convergence of current
algorithms suffers due to two factors: i) presence of eigenvalues of the
Jacobian of the gradient vector field with zero real-part, and ii) eigenvalues
with big imaginary part. Using these findings, we design a new algorithm that
overcomes some of these limitations and has better convergence properties.
Experimentally, we demonstrate its superiority on training common GAN
architectures and show convergence on GAN architectures that are known to be
notoriously hard to train.
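The diagnostic in this abstract, the eigenvalues of the Jacobian of the gradient vector field, is easy to reproduce on a toy problem. The sketch below uses the 1-D "Dirac-GAN" example (our choice, not taken from this paper), where the eigenvalues have zero real part, failure mode (i) above.

```python
# A tiny numpy sketch of the paper's diagnostic: inspect the eigenvalues of
# the Jacobian of the gradient vector field at an equilibrium. We use the
# 1-D Dirac-GAN toy (our choice of example), where the simultaneous
# gradient field is v(theta, psi) = (-psi, theta).
import numpy as np

J = np.array([[0.0, -1.0],   # dv/d(theta, psi) at the equilibrium (0, 0)
              [1.0,  0.0]])
print(np.linalg.eigvals(J))  # approximately [+1j, -1j]: zero real part,
                             # so simultaneous gradient updates cycle
                             # instead of converging (failure mode (i)).
```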
|
In this paper, we analyze the numerics of common algorithms for training Generative Adversarial Networks (GANs).
|
http://arxiv.org/abs/1705.10461v3
|
http://arxiv.org/pdf/1705.10461v3.pdf
|
NeurIPS 2017 12
|
[
"Lars Mescheder",
"Sebastian Nowozin",
"Andreas Geiger"
] |
[] | 2017-05-30T00:00:00 |
http://papers.nips.cc/paper/6779-the-numerics-of-gans
|
http://papers.nips.cc/paper/6779-the-numerics-of-gans.pdf
|
the-numerics-of-gans-1
| null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "In today’s digital age, Dogecoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're trying to recover a lost Dogecoin wallet, knowing where to get help is essential. That’s why the Dogecoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Dogecoin Customer Support Number +1-833-534-1729\r\nDogecoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Dogecoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Dogecoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Dogecoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Dogecoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Dogecoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost Dogecoin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Dogecoin Deposit Not Received\r\nIf someone has sent you Dogecoin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Dogecoin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Dogecoin Transaction Stuck or Pending\r\nSometimes your Dogecoin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Dogecoin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Dogecoin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Dogecoin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Dogecoin tech.\r\n\r\n24/7 Availability: Dogecoin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Dogecoin Support and Wallet Issues\r\nQ1: Can Dogecoin support help me recover stolen BTC?\r\nA: While Dogecoin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Dogecoin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Dogecoin’s official number (Dogecoin is decentralized), it connects you to trained professionals experienced in resolving all major Dogecoin issues.\r\n\r\nFinal Thoughts\r\nDogecoin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Dogecoin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Dogecoin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Dogecoin Customer Service Number +1-833-534-1729",
"source_title": "Generative Adversarial Networks",
"source_url": "https://arxiv.org/abs/1406.2661v1"
}
] |
https://paperswithcode.com/paper/a-fast-and-easy-regression-technique-for-k-nn
|
1806.03945
| null | null |
A Fast and Easy Regression Technique for k-NN Classification Without Using Negative Pairs
|
This paper proposes an inexpensive way to learn an effective dissimilarity function to be used for $k$-nearest neighbor ($k$-NN) classification. Unlike Mahalanobis metric learning methods that map both query (unlabeled) objects and labeled objects to new coordinates by a single transformation, our method learns a transformation of labeled objects to new points in the feature space whereas query objects are kept in their original coordinates. This method has several advantages over existing distance metric learning methods: (i) In experiments with large document and image datasets, it achieves $k$-NN classification accuracy better than or at least comparable to the state-of-the-art metric learning methods. (ii) The transformation can be learned efficiently by solving a standard ridge regression problem. For document and image datasets, training is often more than two orders of magnitude faster than the fastest metric learning methods tested. This speed-up is also due to the fact that the proposed method eliminates the optimization over "negative" object pairs, i.e., objects whose class labels are different. (iii) The formulation has a theoretical justification in terms of reducing hubness in data.
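A minimal sketch of this recipe (editor's illustration; the choice of class centroids as ridge-regression targets is an assumption, the paper's exact targets may differ): labeled points are moved by a learned linear map, while queries stay in their original coordinates.

```python
# Hedged sketch: one plausible instantiation of "transform only the labeled
# side, learn by ridge regression, no negative pairs".
import numpy as np

def fit_label_transform(X, y, lam=1.0):
    """Ridge-regress labeled points onto their class centroids (assumed target)."""
    classes = np.unique(y)
    centroids = {c: X[y == c].mean(axis=0) for c in classes}
    T = np.stack([centroids[c] for c in y])          # regression targets
    d = X.shape[1]
    # Closed-form ridge solution: W = (X^T X + lam I)^{-1} X^T T
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ T)

def knn_predict(queries, X, y, W, k=5):
    """Queries stay in original coordinates; labeled points are transformed."""
    Z = X @ W
    d2 = ((queries[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, :k]
    return np.array([np.bincount(v).argmax() for v in y[idx]])

# Toy usage
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10)); y = (X[:, 0] > 0).astype(int)
Q = rng.normal(size=(20, 10))
W = fit_label_transform(X, y)
print(knn_predict(Q, X, y, W))
```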
| null |
https://arxiv.org/abs/1806.03945v2
|
https://arxiv.org/pdf/1806.03945v2.pdf
| null |
[
"Yutaro Shigeto",
"Masashi Shimbo",
"Yuji Matsumoto"
] |
[
"General Classification",
"Metric Learning",
"regression"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/image-denoising-with-generalized-gaussian
|
1802.01458
| null | null |
Image denoising with generalized Gaussian mixture model patch priors
|
Patch priors have become an important component of image restoration. A
powerful approach in this category of restoration algorithms is the popular
Expected Patch Log-Likelihood (EPLL) algorithm. EPLL uses a Gaussian mixture
model (GMM) prior learned on clean image patches as a way to regularize
degraded patches. In this paper, we show that a generalized Gaussian mixture
model (GGMM) captures the underlying distribution of patches better than a GMM.
Even though a GGMM is a powerful prior to combine with EPLL, the
non-Gaussianity of its components presents major challenges when applied to the
computationally intensive process of image restoration. Specifically, each
patch has to undergo
a patch classification step and a shrinkage step. These two steps can be
efficiently solved with a GMM prior but are computationally impractical when
using a GGMM prior. In this paper, we provide approximations and computational
recipes for fast evaluation of these two steps, so that EPLL can embed a GGMM
prior on an image with more than tens of thousands of patches. Our main
contribution is to analyze the accuracy of our approximations based on thorough
theoretical analysis. Our evaluations indicate that the GGMM prior is
consistently a better fit for modeling the image patch distribution and performs
better on average on the image denoising task.
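A hedged sketch of the patch-classification step under a diagonal GGMM prior (editor's illustration; the diagonal parameterization and parameter values are ours, and the paper's fast approximations are not reproduced here):

```python
# Each patch is assigned to the mixture component maximizing its (weighted)
# generalized-Gaussian log-likelihood.
import numpy as np
from scipy.special import gammaln

def gg_logpdf(x, mu, alpha, beta):
    """Log-density of a diagonal generalized Gaussian: p ∝ exp(-(|x-mu|/alpha)^beta)."""
    norm = np.log(beta) - np.log(2 * alpha) - gammaln(1.0 / beta)
    return (norm - (np.abs(x - mu) / alpha) ** beta).sum(axis=-1)

def classify_patches(patches, weights, mus, alphas, betas):
    """Return, for each patch, the index of the most likely GGMM component."""
    scores = np.stack([np.log(w) + gg_logpdf(patches, m, a, b)
                       for w, m, a, b in zip(weights, mus, alphas, betas)], axis=1)
    return scores.argmax(axis=1)

# Toy usage: 100 flattened 8x8 patches, 3 components
rng = np.random.default_rng(0)
P = rng.normal(size=(100, 64))
K, D = 3, 64
w = np.full(K, 1 / K)
mus = rng.normal(size=(K, D)); alphas = np.ones((K, D))
betas = np.array([[0.8], [1.0], [2.0]]) * np.ones((K, D))  # beta=2 is the Gaussian case
print(classify_patches(P, w, mus, alphas, betas)[:10])
```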
|
In this paper, we show that a generalized Gaussian mixture model (GGMM) captures the underlying distribution of patches better than a GMM.
|
http://arxiv.org/abs/1802.01458v2
|
http://arxiv.org/pdf/1802.01458v2.pdf
| null |
[
"Charles-Alban Deledalle",
"Shibin Parameswaran",
"Truong Q. Nguyen"
] |
[
"Denoising",
"Image Denoising",
"Image Restoration"
] | 2018-02-05T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/second-order-asymptotically-optimal
|
1806.00739
| null | null |
Second-Order Asymptotically Optimal Statistical Classification
|
Motivated by real-world machine learning applications, we analyze
approximations to the non-asymptotic fundamental limits of statistical
classification. In the binary version of this problem, given two training
sequences generated according to two {\em unknown} distributions $P_1$ and
$P_2$, one is tasked to classify a test sequence which is known to be generated
according to either $P_1$ or $P_2$. This problem can be thought of as an
analogue of the binary hypothesis testing problem but in the present setting,
the generating distributions are unknown. Due to finite sample considerations,
we consider the second-order asymptotics (or dispersion-type) tradeoff between
type-I and type-II error probabilities for tests which ensure that (i) the
type-I error probability for {\em all} pairs of distributions decays
exponentially fast and (ii) the type-II error probability for a {\em
particular} pair of distributions is non-vanishing. We generalize our results
to classification of multiple hypotheses with the rejection option.
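For intuition, a simple plug-in classifier in this spirit (editor's illustration, not the paper's optimal test): score the test sequence under each training sequence's empirical distribution, and reject when the evidence margin is too small — mirroring the asymmetric treatment of type-I and type-II errors above.

```python
import numpy as np

def empirical(seq, alphabet_size, eps=1e-9):
    counts = np.bincount(seq, minlength=alphabet_size).astype(float)
    return (counts + eps) / (counts + eps).sum()   # light smoothing

def classify(test, train1, train2, alphabet_size, margin=0.0):
    p1 = empirical(train1, alphabet_size)
    p2 = empirical(train2, alphabet_size)
    llr = (np.log(p1[test]) - np.log(p2[test])).sum()  # plug-in log-likelihood ratio
    if llr > margin:
        return 1
    if llr < -margin:
        return 2
    return 0  # reject: evidence too weak either way

rng = np.random.default_rng(1)
P1, P2 = np.array([.7, .2, .1]), np.array([.2, .3, .5])
t1 = rng.choice(3, 500, p=P1); t2 = rng.choice(3, 500, p=P2)
y = rng.choice(3, 100, p=P1)
print(classify(y, t1, t2, alphabet_size=3, margin=5.0))  # expect 1
```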
| null |
http://arxiv.org/abs/1806.00739v3
|
http://arxiv.org/pdf/1806.00739v3.pdf
| null |
[
"Lin Zhou",
"Vincent Y. F. Tan",
"Mehul Motani"
] |
[
"Classification",
"General Classification",
"Two-sample testing",
"Vocal Bursts Type Prediction"
] | 2018-06-03T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/when-and-where-do-feed-forward-neural
|
1806.03934
| null |
HJXOfZ-AZ
|
When and where do feed-forward neural networks learn localist representations?
|
According to parallel distributed processing (PDP) theory in psychology,
neural networks (NN) learn distributed rather than interpretable localist
representations. This view has been held so strongly that few researchers have
analysed single units to determine if this assumption is correct. However,
recent results from psychology, neuroscience and computer science have shown
the occasional existence of local codes emerging in artificial and biological
neural networks. In this paper, we undertake the first systematic survey of
when local codes emerge in a feed-forward neural network, using generated input
and output data with known qualities. We find that the number of local codes
that emerge from a NN follows a well-defined distribution across the number of
hidden layer neurons, with a peak determined by the size of input data, number
of examples presented and the sparsity of input data. Using a 1-hot output code
drastically decreases the number of local codes on the hidden layer. The number
of emergent local codes increases with the percentage of dropout applied to the
hidden layer, suggesting that the localist encoding may offer a resilience to
noisy networks. This data suggests that localist coding can emerge from
feed-forward PDP networks and suggests some of the conditions that may lead to
interpretable localist representations in the cortex. The findings highlight
how local codes should not be dismissed out of hand.
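One common operationalization of a "local code", sketched below (editor's illustration; the disjoint-activation-range criterion is an assumption about the selectivity measure):

```python
import numpy as np

def count_local_codes(H, y):
    """H: (n_samples, n_units) hidden activations; y: integer class labels.

    A unit is counted as a local code if its activation range for one class
    is disjoint from its range over all other classes.
    """
    n_local = 0
    for u in range(H.shape[1]):
        act = H[:, u]
        for c in np.unique(y):
            ins, outs = act[y == c], act[y != c]
            if ins.min() > outs.max() or ins.max() < outs.min():
                n_local += 1
                break   # count each unit at most once
    return n_local

# Toy usage: fabricate activations where unit 0 is selective for class 2
rng = np.random.default_rng(0)
y = rng.integers(0, 3, 300)
H = rng.normal(size=(300, 16))
H[y == 2, 0] += 10.0
print(count_local_codes(H, y))  # expect at least 1
```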
| null |
http://arxiv.org/abs/1806.03934v1
|
http://arxiv.org/pdf/1806.03934v1.pdf
|
ICLR 2018 1
|
[
"Ella M. Gale",
"Nicolas Martin",
"Jeffrey S. Bowers"
] |
[] | 2018-06-11T00:00:00 |
https://openreview.net/forum?id=HJXOfZ-AZ
|
https://openreview.net/pdf?id=HJXOfZ-AZ
|
when-and-where-do-feed-forward-neural-1
| null |
[
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
}
] |
https://paperswithcode.com/paper/adversarial-variational-bayes-unifying
|
1701.04722
| null | null |
Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks
|
Variational Autoencoders (VAEs) are expressive latent variable models that
can be used to learn complex probability distributions from training data.
However, the quality of the resulting model crucially relies on the
expressiveness of the inference model. We introduce Adversarial Variational
Bayes (AVB), a technique for training Variational Autoencoders with arbitrarily
expressive inference models. We achieve this by introducing an auxiliary
discriminative network that allows us to rephrase the maximum-likelihood problem
as a two-player game, hence establishing a principled connection between VAEs
and Generative Adversarial Networks (GANs). We show that in the nonparametric
limit our method yields an exact maximum-likelihood assignment for the
parameters of the generative model, as well as the exact posterior distribution
over the latent variables given an observation. Contrary to competing
approaches which combine VAEs with GANs, our approach has a clear theoretical
justification, retains most advantages of standard Variational Autoencoders and
is easy to implement.
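A compact PyTorch sketch of one AVB training step (editor's illustration; architectures and hyperparameters are toy stand-ins): the discriminator T(x, z) replaces the analytic KL term of the ELBO.

```python
import torch, torch.nn as nn

D_X, D_Z, D_EPS = 784, 8, 8
enc = nn.Sequential(nn.Linear(D_X + D_EPS, 256), nn.ReLU(), nn.Linear(256, D_Z))
dec = nn.Sequential(nn.Linear(D_Z, 256), nn.ReLU(), nn.Linear(256, D_X))
T   = nn.Sequential(nn.Linear(D_X + D_Z, 256), nn.ReLU(), nn.Linear(256, 1))
opt_ae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), 1e-4)
opt_T  = torch.optim.Adam(T.parameters(), 1e-4)
bce = nn.BCEWithLogitsLoss()

def avb_step(x):
    # Implicit posterior sample: z = enc(x, eps), eps ~ N(0, I)
    z_q = enc(torch.cat([x, torch.randn(x.size(0), D_EPS)], dim=1))
    z_p = torch.randn(x.size(0), D_Z)                 # prior sample

    # 1) Discriminator: tell (x, z_q) apart from (x, z_p)
    t_q = T(torch.cat([x, z_q.detach()], dim=1))
    t_p = T(torch.cat([x, z_p], dim=1))
    loss_T = bce(t_q, torch.ones_like(t_q)) + bce(t_p, torch.zeros_like(t_p))
    opt_T.zero_grad(); loss_T.backward(); opt_T.step()

    # 2) Encoder/decoder: T(x, z_q) stands in for the KL term of the ELBO
    recon = dec(z_q)
    loss_ae = bce(recon, x) * D_X + T(torch.cat([x, z_q], dim=1)).mean()
    opt_ae.zero_grad(); loss_ae.backward(); opt_ae.step()
    return loss_T.item(), loss_ae.item()

x = torch.rand(32, D_X).round()   # toy binary "images"
print(avb_step(x))
```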
|
We show that in the nonparametric limit our method yields an exact maximum-likelihood assignment for the parameters of the generative model, as well as the exact posterior distribution over the latent variables given an observation.
|
http://arxiv.org/abs/1701.04722v4
|
http://arxiv.org/pdf/1701.04722v4.pdf
|
ICML 2017 8
|
[
"Lars Mescheder",
"Sebastian Nowozin",
"Andreas Geiger"
] |
[] | 2017-01-17T00:00:00 |
https://icml.cc/Conferences/2017/Schedule?showEvent=671
|
http://proceedings.mlr.press/v70/mescheder17a/mescheder17a.pdf
|
adversarial-variational-bayes-unifying-1
| null |
[] |
https://paperswithcode.com/paper/kblrn-end-to-end-learning-of-knowledge-base
|
1709.04676
| null | null |
KBLRN: End-to-End Learning of Knowledge Base Representations with Latent, Relational, and Numerical Features
|
We present KBLRN, a framework for end-to-end learning of knowledge base
representations from latent, relational, and numerical features. KBLRN
integrates feature types with a novel combination of neural representation
learning and probabilistic product of experts models. To the best of our
knowledge, KBLRN is the first approach that learns representations of knowledge
bases by integrating latent, relational, and numerical features. We show that
instances of KBLRN outperform existing methods on a range of knowledge base
completion tasks. We contribute novel data sets enriching commonly used
knowledge base completion benchmarks with numerical features. The data sets are
available under a permissive BSD-3 license. We also investigate the impact
numerical features have on the KB completion performance of KBLRN.
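A hedged sketch of how the three feature types might combine additively in log-space (editor's illustration; the exact expert parameterizations in KBLRN differ):

```python
import torch, torch.nn as nn

class KblrnLikeScorer(nn.Module):
    def __init__(self, n_ent, n_rel, dim, n_rel_feats, n_num_feats):
        super().__init__()
        self.ent = nn.Embedding(n_ent, dim)
        self.rel = nn.Embedding(n_rel, dim)
        self.w_rel = nn.Linear(n_rel_feats, 1)       # relational expert
        self.w_num = nn.Linear(n_num_feats, 1)       # numerical expert

    def forward(self, h, r, t, rel_feats, num_h, num_t):
        # Latent expert: a DistMult-style trilinear score
        latent = (self.ent(h) * self.rel(r) * self.ent(t)).sum(-1, keepdim=True)
        relational = self.w_rel(rel_feats)
        numerical = self.w_num(num_h - num_t)        # differences of numeric attributes
        return latent + relational + numerical       # log of a product of experts

m = KblrnLikeScorer(n_ent=1000, n_rel=20, dim=64, n_rel_feats=10, n_num_feats=5)
h = torch.tensor([1]); r = torch.tensor([3]); t = torch.tensor([42])
print(m(h, r, t, torch.randn(1, 10), torch.randn(1, 5), torch.randn(1, 5)))
```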
|
We present KBLRN, a framework for end-to-end learning of knowledge base representations from latent, relational, and numerical features.
|
http://arxiv.org/abs/1709.04676v3
|
http://arxiv.org/pdf/1709.04676v3.pdf
| null |
[
"Alberto Garcia-Duran",
"Mathias Niepert"
] |
[
"Knowledge Base Completion",
"Representation Learning"
] | 2017-09-14T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/3d-face-reconstruction-with-geometry-details
|
1702.05619
| null | null |
3D Face Reconstruction with Geometry Details from a Single Image
|
3D face reconstruction from a single image is a classical and challenging
problem, with wide applications in many areas. Inspired by recent works in face
animation from RGB-D or monocular video inputs, we develop a novel method for
reconstructing 3D faces from unconstrained 2D images, using a coarse-to-fine
optimization strategy. First, a smooth coarse 3D face is generated from an
example-based bilinear face model, by aligning the projection of 3D face
landmarks with 2D landmarks detected from the input image. Afterwards, using
local corrective deformation fields, the coarse 3D face is refined using
photometric consistency constraints, resulting in a medium face shape. Finally,
a shape-from-shading method is applied on the medium face to recover fine
geometric details. Our method outperforms state-of-the-art approaches in terms
of accuracy and detail recovery, which is demonstrated in extensive experiments
using real world models and publicly available datasets.
| null |
http://arxiv.org/abs/1702.05619v2
|
http://arxiv.org/pdf/1702.05619v2.pdf
| null |
[
"Luo Jiang",
"Juyong Zhang",
"Bailin Deng",
"Hao Li",
"Ligang Liu"
] |
[
"3D Face Reconstruction",
"Face Model",
"Face Reconstruction"
] | 2017-02-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/gear-training-a-new-way-to-implement-high
|
1806.03925
| null | null |
Gear Training: A new way to implement high-performance model-parallel training
|
The training of deep neural networks usually needs tremendous computing
resources, so many deep models are trained on large clusters instead of a
single machine or GPU. Though most current research tries to run the whole
model on all machines using asynchronous stochastic gradient descent (ASGD),
we present a new approach to train deep models in parallel: split the model
and then train its different parts separately at different speeds.
| null |
http://arxiv.org/abs/1806.03925v1
|
http://arxiv.org/pdf/1806.03925v1.pdf
| null |
[
"Hao Dong",
"Shuai Li",
"Dongchang Xu",
"Yi Ren",
"Di Zhang"
] |
[
"GPU"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/convergence-rates-for-projective-splitting
|
1806.03920
| null | null |
Convergence Rates for Projective Splitting
|
Projective splitting is a family of methods for solving inclusions involving
sums of maximal monotone operators. First introduced by Eckstein and Svaiter in
2008, these methods have enjoyed significant innovation in recent years,
becoming one of the most flexible operator splitting frameworks available.
While weak convergence of the iterates to a solution has been established,
there have been few attempts to study convergence rates of projective
splitting. The purpose of this paper is to do so under various assumptions. To
this end, there are three main contributions. First, in the context of convex
optimization, we establish an $O(1/k)$ ergodic function convergence rate.
Second, for strongly monotone inclusions, strong convergence is established as
well as an ergodic $O(1/\sqrt{k})$ convergence rate for the distance of the
iterates to the solution. Finally, for inclusions featuring strong monotonicity
and cocoercivity, linear convergence is established.
| null |
http://arxiv.org/abs/1806.03920v3
|
http://arxiv.org/pdf/1806.03920v3.pdf
| null |
[
"Patrick R. Johnstone",
"Jonathan Eckstein"
] |
[] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/learning-classifiers-with-fenchel-young
|
1805.09717
| null | null |
Learning Classifiers with Fenchel-Young Losses: Generalized Entropies, Margins, and Algorithms
|
This paper studies Fenchel-Young losses, a generic way to construct convex
loss functions from a regularization function. We analyze their properties in
depth, showing that they unify many well-known loss functions and allow us to
create useful new ones easily. Fenchel-Young losses constructed from a
generalized entropy, including the Shannon and Tsallis entropies, induce
predictive probability distributions. We formulate conditions for a generalized
entropy to yield losses with a separation margin, and probability distributions
with sparse support. Finally, we derive efficient algorithms, making
Fenchel-Young losses appealing both in theory and practice.
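A worked numerical check (editor's sketch): with Omega chosen as the negative Shannon entropy on the simplex, the Fenchel-Young loss L(theta; y) = Omega*(theta) + Omega(y) - <theta, y> for a one-hot label reduces to the familiar softmax cross-entropy.

```python
import numpy as np
from scipy.special import logsumexp

def fy_loss_shannon(theta, y_onehot):
    # Omega*(theta) = logsumexp(theta) for the negative-entropy Omega;
    # Omega(y) = sum_i y_i log y_i = 0 for one-hot y.
    return logsumexp(theta) - theta @ y_onehot

theta = np.array([2.0, -1.0, 0.5])
y = np.array([1.0, 0.0, 0.0])
print(fy_loss_shannon(theta, y))                        # Fenchel-Young form
print(-np.log(np.exp(theta)[0] / np.exp(theta).sum()))  # softmax cross-entropy: same value
```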
|
This paper studies Fenchel-Young losses, a generic way to construct convex loss functions from a regularization function.
|
http://arxiv.org/abs/1805.09717v4
|
http://arxiv.org/pdf/1805.09717v4.pdf
| null |
[
"Mathieu Blondel",
"André F. T. Martins",
"Vlad Niculae"
] |
[] | 2018-05-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/low-rank-inducing-norms-with-optimality
|
1612.03186
| null | null |
Low-Rank Inducing Norms with Optimality Interpretations
|
Optimization problems with rank constraints appear in many diverse fields
such as control, machine learning and image analysis. Since the rank constraint
is non-convex, these problems are often approximately solved via convex
relaxations. Nuclear norm regularization is the prevailing convexifying
technique for dealing with these types of problem. This paper introduces a
family of low-rank inducing norms and regularizers which includes the nuclear
norm as a special case. A posteriori guarantees on solving an underlying rank
constrained optimization problem with these convex relaxations are provided. We
evaluate the performance of the low-rank inducing norms on three matrix
completion problems. In all examples, the nuclear norm heuristic is
outperformed by convex relaxations based on other low-rank inducing norms. For
two of the problems there exist low-rank inducing norms that succeed in
recovering the partially unknown matrix, while the nuclear norm fails. These
low-rank inducing norms are shown to be representable as semi-definite
programs. Moreover, these norms have cheaply computable proximal mappings,
which makes it possible to also solve problems of large size using first-order
methods.
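For the nuclear-norm special case named above, the cheaply computable proximal mapping is the classical singular value soft-thresholding; a minimal sketch (editor's illustration):

```python
# prox_{lam ||.||_*}(Y) = U diag(max(s - lam, 0)) V^T
import numpy as np

def prox_nuclear(Y, lam):
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

rng = np.random.default_rng(0)
Y = rng.normal(size=(6, 4))
X = prox_nuclear(Y, lam=1.0)
print(np.linalg.matrix_rank(X, tol=1e-8))  # thresholding typically lowers the rank
```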
|
A posteriori guarantees on solving an underlying rank constrained optimization problem with these convex relaxations are provided.
|
http://arxiv.org/abs/1612.03186v2
|
http://arxiv.org/pdf/1612.03186v2.pdf
| null |
[
"Christian Grussler",
"Pontus Giselsson"
] |
[
"Matrix Completion"
] | 2016-12-09T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/retinal-optic-disc-segmentation-using
|
1806.03905
| null | null |
Retinal Optic Disc Segmentation using Conditional Generative Adversarial Network
|
This paper proposes a retinal image segmentation method based on a conditional
Generative Adversarial Network (cGAN) to segment the optic disc. The proposed
model consists of two successive networks: a generator and a discriminator. The
generator learns to map information from the observed input (i.e., a retinal
fundus color image) to the output (i.e., a binary mask). The discriminator then
acts as a loss function to train this mapping, comparing the ground truth and
the predicted output while observing the input image as a condition.
Experiments were performed on two publicly available datasets, DRISHTI GS1 and
RIM-ONE. The proposed model outperformed state-of-the-art methods, achieving
Jaccard and Dice coefficients of around 0.96 and 0.98, respectively. Moreover,
image segmentation is performed in less than a second on a recent GPU.
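A hedged sketch of the cGAN training loop described above (editor's illustration; the toy networks stand in for the paper's generator and discriminator):

```python
import torch, torch.nn as nn

G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))          # image -> mask logits
D = nn.Sequential(nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
opt_g = torch.optim.Adam(G.parameters(), 2e-4)
opt_d = torch.optim.Adam(D.parameters(), 2e-4)
bce = nn.BCEWithLogitsLoss()

def step(img, mask):
    fake = torch.sigmoid(G(img))
    # Discriminator: real pairs vs. generated pairs, conditioned on the image
    d_real = D(torch.cat([img, mask], 1))
    d_fake = D(torch.cat([img, fake.detach()], 1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: fool the discriminator + match the ground-truth mask
    d_fake = D(torch.cat([img, fake], 1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + bce(G(img), mask)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

img = torch.rand(2, 3, 64, 64)
mask = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(step(img, mask))
```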
| null |
http://arxiv.org/abs/1806.03905v1
|
http://arxiv.org/pdf/1806.03905v1.pdf
| null |
[
"Vivek Kumar Singh",
"Hatem Rashwan",
"Farhan Akram",
"Nidhi Pandey",
"Md. Mostaf Kamal Sarker",
"Adel Saleh",
"Saddam Abdulwahab",
"Najlaa Maaroof",
"Santiago Romani",
"Domenec Puig"
] |
[
"Generative Adversarial Network",
"GPU",
"Image Segmentation",
"Optic Disc Segmentation",
"Segmentation",
"Semantic Segmentation"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/multi-task-learning-of-daily-work-and-study
|
1806.03903
| null | null |
Multi-task learning of daily work and study round-trips from survey data
|
In this study, we present a machine learning approach to infer worker and
student mobility flows on a daily basis from static censuses. Rapid
urbanization has made the estimation of human mobility flows a critical task
for transportation and urban planners. The primary objective of this paper is
to complete individuals' census data with working and studying trips, allowing
it to be merged with other mobility data to better estimate the complete
origin-destination matrices. Worker and student mobility flows are among the
most regular weekly displacements and consequently generate road congestion
problems. Estimating their round-trips eases the decision-making processes of
local authorities. Worker and student censuses often contain home locations,
workplaces and educational institutions. We thus propose a neural network
model that learns the temporal distribution of displacements from other
mobility sources and tries to predict them on new census data. The inclusion
of multi-task learning in our neural network results in a significant error
rate control in comparison to single task learning.
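A minimal sketch of the multi-task setup (editor's illustration; feature and output dimensions are assumptions): a shared trunk encodes census features and feeds two heads, one per trip type, trained with a joint loss.

```python
import torch, torch.nn as nn

class RoundTripNet(nn.Module):
    def __init__(self, n_feats, n_time_slots):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_feats, 64), nn.ReLU())
        self.head_work = nn.Linear(64, n_time_slots)    # workers' departure times
        self.head_study = nn.Linear(64, n_time_slots)   # students' departure times

    def forward(self, x):
        h = self.trunk(x)
        return self.head_work(h), self.head_study(h)

net = RoundTripNet(n_feats=20, n_time_slots=24)
ce = nn.CrossEntropyLoss()
x = torch.randn(32, 20)
t_work = torch.randint(0, 24, (32,)); t_study = torch.randint(0, 24, (32,))
logit_w, logit_s = net(x)
loss = ce(logit_w, t_work) + ce(logit_s, t_study)       # joint multi-task loss
loss.backward()
print(loss.item())
```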
| null |
http://arxiv.org/abs/1806.03903v1
|
http://arxiv.org/pdf/1806.03903v1.pdf
| null |
[
"Mehdi Katranji",
"Sami Kraiem",
"Laurent Moalic",
"Guilhem Sanmarty",
"Alexandre Caminada",
"Fouad Hadj Selem"
] |
[
"Decision Making",
"Multi-Task Learning"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/dual-pattern-learning-networks-by-empirical
|
1806.03902
| null | null |
Dual Pattern Learning Networks by Empirical Dual Prediction Risk Minimization
|
Motivated by the observation that humans can learn patterns from two given
images at one time, we propose a dual pattern learning network architecture in
this paper. Unlike conventional networks, the proposed architecture has two
input branches and two loss functions. Instead of minimizing the empirical risk
of a given dataset, dual pattern learning networks are trained by minimizing the
empirical dual prediction loss. We show that this can improve the performance
of single image classification. This architecture forces the network to learn
discriminative class-specific features by analyzing and comparing two input
images. In addition, the dual input structure allows the network to have a
considerably large number of image pairs, which can help address the
overfitting issue due to limited training data. Moreover, we propose to
associate each input branch with a random interest value for learning the
corresponding image during training. This method can be seen as a stochastic
regularization technique, and can further improve generalization performance.
State-of-the-art deep networks can be adapted to dual pattern learning networks
while keeping the same number of parameters. Extensive experiments on CIFAR-10,
CIFAR-100, FI-8, the Google commands dataset, and MNIST demonstrate that our
DPLNets exhibit better performance than the original networks. The experimental
results on subsets of CIFAR-10, CIFAR-100, and MNIST demonstrate that dual
pattern learning networks have good generalization performance on small
datasets.
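A hedged sketch of the two-branch architecture and the dual prediction loss (editor's illustration; the feature-fusion scheme is an assumption):

```python
import torch, torch.nn as nn

class DualPatternNet(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head_a = nn.Linear(32, n_classes)   # each head sees both branch features
        self.head_b = nn.Linear(32, n_classes)

    def forward(self, xa, xb):
        fa, fb = self.backbone(xa), self.backbone(xb)
        joint = torch.cat([fa, fb], dim=1)       # compare the two inputs
        return self.head_a(joint), self.head_b(joint)

net = DualPatternNet(n_classes=10)
ce = nn.CrossEntropyLoss()
xa, xb = torch.rand(8, 3, 32, 32), torch.rand(8, 3, 32, 32)
ya, yb = torch.randint(0, 10, (8,)), torch.randint(0, 10, (8,))
la, lb = net(xa, xb)
loss = ce(la, ya) + ce(lb, yb)                   # empirical dual prediction risk
loss.backward(); print(loss.item())
```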
| null |
http://arxiv.org/abs/1806.03902v1
|
http://arxiv.org/pdf/1806.03902v1.pdf
| null |
[
"Haimin Zhang",
"Min Xu"
] |
[
"image-classification",
"Image Classification"
] | 2018-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/natasha-2-faster-non-convex-optimization-than
|
1708.08694
| null | null |
Natasha 2: Faster Non-Convex Optimization Than SGD
|
We design a stochastic algorithm to train any smooth neural network to
$\varepsilon$-approximate local minima, using $O(\varepsilon^{-3.25})$
backpropagations. The best result was essentially $O(\varepsilon^{-4})$ by SGD.
More broadly, it finds $\varepsilon$-approximate local minima of any smooth
nonconvex function in rate $O(\varepsilon^{-3.25})$, with only oracle access to
stochastic gradients.
| null |
http://arxiv.org/abs/1708.08694v4
|
http://arxiv.org/pdf/1708.08694v4.pdf
|
NeurIPS 2018 12
|
[
"Zeyuan Allen-Zhu"
] |
[] | 2017-08-29T00:00:00 |
http://papers.nips.cc/paper/7533-natasha-2-faster-non-convex-optimization-than-sgd
|
http://papers.nips.cc/paper/7533-natasha-2-faster-non-convex-optimization-than-sgd.pdf
|
natasha-2-faster-non-convex-optimization-than-1
| null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/4e0ac120e9a8b096069c2f892488d630a5c8f358/torch/optim/sgd.py#L97-L112",
"description": "**Stochastic Gradient Descent** is an iterative optimization technique that uses minibatches of data to form an expectation of the gradient, rather than the full gradient using all available data. That is for weights $w$ and a loss function $L$ we have:\r\n\r\n$$ w\\_{t+1} = w\\_{t} - \\eta\\hat{\\nabla}\\_{w}{L(w\\_{t})} $$\r\n\r\nWhere $\\eta$ is a learning rate. SGD reduces redundancy compared to batch gradient descent - which recomputes gradients for similar examples before each parameter update - so it is usually much faster.\r\n\r\n(Image Source: [here](http://rasbt.github.io/mlxtend/user_guide/general_concepts/gradient-optimization/))",
"full_name": "Stochastic Gradient Descent",
"introduced_year": 1951,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "SGD",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/large-scale-bisample-learning-on-id-versus
|
1806.03018
| null | null |
Large-scale Bisample Learning on ID Versus Spot Face Recognition
|
In real-world face recognition applications, there is a tremendous amount of
data with two images for each person. One is an ID photo for face enrollment,
and the other is a probe photo captured on spot. Most existing methods are
designed for training data with limited breadth (a relatively small number of
classes) and sufficient depth (many samples for each class). They would meet
great challenges on ID versus Spot (IvS) data, including the under-represented
intra-class variations and an excessive demand on computing devices. In this
paper, we propose a deep learning based large-scale bisample learning (LBL)
method for IvS face recognition. To tackle the bisample problem with only two
samples for each class, a classification-verification-classification (CVC)
training strategy is proposed to progressively enhance the IvS performance.
In addition, a dominant prototype softmax (DP-softmax) is incorporated to make
deep learning scalable to large numbers of classes. We conduct LBL on an IvS
face dataset with more than two million identities. Experimental results show
that the proposed method achieves superior performance to previous methods,
validating the effectiveness of LBL on IvS face recognition.
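A heavily hedged sketch of a DP-softmax-style step (editor's illustration; the paper's dominant-prototype selection rule is not reproduced here, and cosine-similarity top-k is our stand-in): rather than a full softmax over millions of identities, each feature is scored only against its own class prototype plus the most competitive prototypes.

```python
import torch, torch.nn.functional as F

def dp_softmax_loss(feats, labels, prototypes, n_dominant=256):
    feats = F.normalize(feats, dim=1)
    protos = F.normalize(prototypes, dim=1)
    sims = feats @ protos.t()                          # (batch, n_classes)
    # Dominant set: globally most competitive prototypes for this batch
    dom = sims.max(dim=0).values.topk(n_dominant).indices
    active = torch.unique(torch.cat([dom, labels]))    # always include the true classes
    remap = {c.item(): i for i, c in enumerate(active)}
    target = torch.tensor([remap[l.item()] for l in labels])
    return F.cross_entropy(sims[:, active] * 30.0, target)  # scaled cosine logits

protos = torch.randn(10000, 128, requires_grad=True)   # stand-in for 2M prototypes
feats = torch.randn(32, 128, requires_grad=True)
labels = torch.randint(0, 10000, (32,))
loss = dp_softmax_loss(feats, labels, protos)
loss.backward(); print(loss.item())
```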
| null |
http://arxiv.org/abs/1806.03018v3
|
http://arxiv.org/pdf/1806.03018v3.pdf
| null |
[
"Xiangyu Zhu",
"Hao liu",
"Zhen Lei",
"Hailin Shi",
"Fan Yang",
"Dong Yi",
"Guo-Jun Qi",
"Stan Z. Li"
] |
[
"Face Recognition",
"General Classification"
] | 2018-06-08T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
}
] |