Dataset columns (pipe-separated, in this order, in the rows below; ⌀ marks nullable fields):

- paper_url: string, length 35–81
- arxiv_id: string, length 6–35, ⌀
- nips_id: float64, ⌀
- openreview_id: string, length 9–93, ⌀
- title: string, length 1–1.02k, ⌀
- abstract: string, length 0–56.5k, ⌀
- short_abstract: string, length 0–1.95k, ⌀
- url_abs: string, length 16–996
- url_pdf: string, length 16–996, ⌀
- proceeding: string, length 7–1.03k, ⌀
- authors: list, length 0–3.31k
- tasks: list, length 0–147
- date: timestamp[ns], range 1951-09-01 to 2222-12-22, ⌀
- conference_url_abs: string, length 16–199, ⌀
- conference_url_pdf: string, length 21–200, ⌀
- conference: string, length 2–47, ⌀
- reproduces_paper: string, 22 classes
- methods: list, length 0–7.5k
https://paperswithcode.com/paper/using-transfer-learning-to-detect-galaxy
|
1805.10289
| null | null |
Using transfer learning to detect galaxy mergers
|
We investigate the use of deep convolutional neural networks (deep CNNs) for
automatic visual detection of galaxy mergers. Moreover, we investigate the use
of transfer learning in conjunction with CNNs, by retraining networks first
trained on pictures of everyday objects. We test the hypothesis that transfer
learning is useful for improving classification performance for small training
sets. This would make transfer learning useful for finding rare objects in
astronomical imaging datasets. We find that these deep learning methods perform
significantly better than current state-of-the-art merger detection methods
based on nonparametric systems like CAS and GM$_{20}$. Our method is end-to-end
and robust to image noise and distortions; it can be applied directly without
image preprocessing. We also find that transfer learning can act as a
regulariser in some cases, leading to better overall classification accuracy
($p = 0.02$). Transfer learning on our full training set leads to a lowered
error rate from 0.038 $\pm$ 1 down to 0.032 $\pm$ 1, a relative improvement of
15%. Finally, we perform a basic sanity-check by creating a merger sample with
our method, and comparing with an already existing, manually created merger
catalogue in terms of colour-mass distribution and stellar mass function.
| null |
http://arxiv.org/abs/1805.10289v2
|
http://arxiv.org/pdf/1805.10289v2.pdf
| null |
[
"Sandro Ackermann",
"Kevin Schawinski",
"Ce Zhang",
"Anna K. Weigel",
"M. Dennis Turp"
] |
[
"General Classification",
"Transfer Learning"
] | 2018-05-25T00:00:00 | null | null | null | null |
[] |
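The transfer-learning recipe in the abstract above (reuse a network pretrained on everyday objects, then retrain it on a small labelled astronomy set) can be sketched as follows. This is a hedged toy only: the paper fine-tunes real ImageNet-pretrained CNNs on galaxy images, whereas here a fixed random projection stands in for the frozen pretrained feature extractor, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a CNN pretrained on everyday objects: a *frozen* feature
# extractor (here just a fixed random projection plus a nonlinearity).
W_pre = rng.normal(size=(64 * 64, 128))

def features(images):
    return np.tanh(images.reshape(len(images), -1) @ W_pre)

# Tiny synthetic "merger vs. non-merger" training set (small-sample regime).
X = rng.normal(size=(40, 64, 64))
y = rng.integers(0, 2, size=40)

# Transfer-learning step: keep W_pre fixed, retrain only a new classifier
# head (logistic regression) on the small labelled set.
F = features(X)
w, b = np.zeros(F.shape[1]), 0.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # sigmoid predictions
    w -= 0.5 * F.T @ (p - y) / len(y)        # gradient step on log-loss
    b -= 0.5 * np.mean(p - y)

train_acc = np.mean((1.0 / (1.0 + np.exp(-(F @ w + b))) > 0.5) == y)
```

Freezing the feature extractor means the small training set only has to fit the final classifier, which is the regularising effect the abstract measures.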
https://paperswithcode.com/paper/automating-personnel-rostering-by-learning
|
1805.11375
| null | null |
Automating Personnel Rostering by Learning Constraints Using Tensors
|
Many problems in operations research require that constraints be specified in
the model. Determining the right constraints is a hard and laborsome task. We
propose an approach to automate this process using artificial intelligence and
machine learning principles. So far, little work has been done on learning
constraints within the operations research community. We focus on personnel
rostering and scheduling problems in which there are often past schedules
available and show that it is possible to automatically learn constraints from
such examples. To realize this, we adapted some techniques from the constraint
programming community and we have extended them in order to cope with
multidimensional examples. The method uses a tensor representation of the
example, which helps in capturing the dimensionality as well as the structure
of the example, and applies tensor operations to find the constraints that are
satisfied by the example. To evaluate the proposed algorithm, we used
constraints from the Nurse Rostering Competition and generated solutions that
satisfy these constraints; these solutions were then used as examples to learn
constraints. Experiments demonstrate that the proposed algorithm is capable of
producing human readable constraints that capture the underlying
characteristics of the examples.
| null |
http://arxiv.org/abs/1805.11375v1
|
http://arxiv.org/pdf/1805.11375v1.pdf
| null |
[
"Mohit Kumar",
"Stefano Teso",
"Luc De Raedt"
] |
[
"Scheduling"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
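The core idea above, aggregating a tensor-encoded example schedule and keeping the tightest bound it satisfies, can be sketched as follows. This is a simplified illustration with invented dimensions (4 nurses x 7 days x 3 shift types), not the paper's actual algorithm.

```python
import numpy as np

# Example schedule as a binary tensor: schedule[n, d, s] = 1 if nurse n
# works shift s on day d (hypothetical data).
rng = np.random.default_rng(1)
schedule = np.zeros((4, 7, 3), dtype=int)
for n in range(4):
    for d in range(7):
        if rng.random() < 0.7:                    # most days: exactly one shift
            schedule[n, d, rng.integers(3)] = 1

# "Learning" a bound constraint = aggregating the tensor along some
# dimensions and taking the tightest bound consistent with the example.
shifts_per_day = schedule.sum(axis=2)             # nurses x days
max_shifts_per_day = int(shifts_per_day.max())    # learns "<= 1 shift/day"

shifts_per_week = schedule.sum(axis=(1, 2))       # total per nurse
max_shifts_per_week = int(shifts_per_week.max())

print(f"at most {max_shifts_per_day} shift(s) per nurse per day")
print(f"at most {max_shifts_per_week} shift(s) per nurse per week")
```

Each choice of aggregation axes corresponds to a candidate constraint family; the tensor representation is what makes enumerating these dimension combinations systematic.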
https://paperswithcode.com/paper/webpage-saliency-prediction-with-two-stage
|
1805.11374
| null | null |
Webpage Saliency Prediction with Two-stage Generative Adversarial Networks
|
Web page saliency prediction is a challenging problem in image transformation
and computer vision. In this paper, we propose a new model that combines web
page outline information to predict people's regions of interest in a web
page. For each web page image, our model generates a saliency map that
indicates the regions of interest. A two-stage generative adversarial network
is proposed, and image outline information is introduced for better transfer.
Experimental results on the FIWI dataset show that our model achieves better
performance in terms of saliency prediction.
| null |
http://arxiv.org/abs/1805.11374v1
|
http://arxiv.org/pdf/1805.11374v1.pdf
| null |
[
"Yu Li",
"Ya zhang"
] |
[
"Prediction",
"Saliency Prediction",
"Vocal Bursts Valence Prediction"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/how-to-rate-a-video-game-a-prediction-system
|
1805.11372
| null | null |
"How to rate a video game?" - A prediction system for video games based on multimodal information
|
Video games have become an integral part of most people's lives in recent
times. This led to an abundance of data related to video games being shared
online. However, user-submitted content such as ratings and reviews can be
incorrect or unreliable. Recommendation systems are powerful tools that
help users by providing them with meaningful recommendations. A straightforward
approach would be to predict the scores of video games based on other
information related to the game. It could be used as a means to validate
user-submitted ratings as well as provide recommendations. This work provides a
method to predict the G-Score, that defines how good a video game is, from its
trailer (video) and summary (text). We first propose models to predict the
G-Score based on the trailer alone (unimodal). Later on, we show that
considering information from multiple modalities helps the models perform
better compared to using information from videos alone. Since we couldn't find
any suitable multimodal video game dataset, we created our own dataset named
VGD (Video Game Dataset) and provide it along with this work. The approach
mentioned here can be generalized to other multimodal datasets such as movie
trailers and summaries etc. Towards the end, we talk about the shortcomings of
the work and some methods to overcome them.
| null |
http://arxiv.org/abs/1805.11372v1
|
http://arxiv.org/pdf/1805.11372v1.pdf
| null |
[
"Vishal Batchu",
"Varshit Battu",
"Murali Krishna Reddy",
"Radhika Mamidi"
] |
[
"Recommendation Systems"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/how-to-blend-a-robot-within-a-group-of
|
1805.11371
| null | null |
How to Blend a Robot within a Group of Zebrafish: Achieving Social Acceptance through Real-time Calibration of a Multi-level Behavioural Model
|
We have previously shown how to socially integrate a fish robot into a group
of zebrafish thanks to biomimetic behavioural models. The models have to be
calibrated on experimental data to present correct behavioural features. This
calibration is essential to enhance the social integration of the robot into
the group. When calibrated, the behavioural model of fish behaviour is
implemented to drive a robot with closed-loop control of social interactions
into a group of zebrafish. This approach can be useful to form mixed-groups,
and study animal individual and collective behaviour by using biomimetic
autonomous robots capable of responding to the animals in long-standing
experiments. Here, we show a methodology for continuous real-time calibration
and refinement of a multi-level behavioural model. The real-time calibration, by
an evolutionary algorithm, is based on simulation of the model to correspond to
the observed fish behaviour in real-time. The calibrated model is updated on
the robot and tested during the experiments. This method allows us to cope with
changes of dynamics in fish behaviour. Moreover, each fish presents individual
behavioural differences. Thus, each trial is done with naive fish groups that
display behavioural variability. This real-time calibration methodology can
optimise the robot behaviours during the experiments. Our implementation of
this methodology runs on three different computers that perform individual
tracking, data-analysis, multi-objective evolutionary algorithms, simulation of
the fish robot and adaptation of the robot behavioural models, all in
real-time.
| null |
http://arxiv.org/abs/1805.11371v1
|
http://arxiv.org/pdf/1805.11371v1.pdf
| null |
[
"Leo Cazenille",
"Yohann Chemtob",
"Frank Bonnet",
"Alexey Gribovskiy",
"Francesco Mondada",
"Nicolas Bredeche",
"Jose Halloy"
] |
[
"Evolutionary Algorithms"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/lovasz-convolutional-networks
|
1805.11365
| null | null |
Lovasz Convolutional Networks
|
Semi-supervised learning on graph structured data has received significant
attention with the recent introduction of Graph Convolution Networks (GCN).
While traditional methods have focused on optimizing a loss augmented with
Laplacian regularization framework, GCNs perform an implicit Laplacian type
regularization to capture local graph structure. In this work, we propose
Lovasz Convolutional Networks (LCNs), which are capable of incorporating global
graph properties. LCNs achieve this by utilizing Lovasz's orthonormal
embeddings of the nodes. We analyse local and global properties of graphs and
demonstrate settings where LCNs tend to work better than GCNs. We validate the
proposed method on standard random graph models such as stochastic block models
(SBM) and certain community structure based graphs where LCNs outperform GCNs
and learn more intuitive embeddings. We also perform extensive binary and
multi-class classification experiments on real world datasets to demonstrate
LCN's effectiveness. In addition to simple graphs, we also demonstrate the use
of LCNs on hyper-graphs by identifying settings where they are expected to work
better than GCNs.
|
We analyse local and global properties of graphs and demonstrate settings where LCNs tend to work better than GCNs.
|
http://arxiv.org/abs/1805.11365v3
|
http://arxiv.org/pdf/1805.11365v3.pdf
| null |
[
"Prateek Yadav",
"Madhav Nimishakavi",
"Naganand Yadati",
"Shikhar Vashishth",
"Arun Rajkumar",
"Partha Talukdar"
] |
[
"Multi-class Classification"
] | 2018-05-29T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/regularized-optimal-transport-and-the-rot
|
1610.06447
| null | null |
Regularized Optimal Transport and the Rot Mover's Distance
|
This paper presents a unified framework for smooth convex regularization of
discrete optimal transport problems. In this context, the regularized optimal
transport turns out to be equivalent to a matrix nearness problem with respect
to Bregman divergences. Our framework thus naturally generalizes a previously
proposed regularization based on the Boltzmann-Shannon entropy related to the
Kullback-Leibler divergence, and solved with the Sinkhorn-Knopp algorithm. We
call the regularized optimal transport distance the rot mover's distance in
reference to the classical earth mover's distance. We develop two generic
schemes that we respectively call the alternate scaling algorithm and the
non-negative alternate scaling algorithm, to compute efficiently the
regularized optimal plans depending on whether the domain of the regularizer
lies within the non-negative orthant or not. These schemes are based on
Dykstra's algorithm with alternate Bregman projections, and further exploit the
Newton-Raphson method when applied to separable divergences. We enhance the
separable case with a sparse extension to deal with high data dimensions. We
also instantiate our proposed framework and discuss the inherent specificities
for well-known regularizers and statistical divergences in the machine learning
and information geometry communities. Finally, we demonstrate the merits of our
methods with experiments using synthetic data to illustrate the effect of
different regularizers and penalties on the solutions, as well as real-world
data for a pattern recognition application to audio scene classification.
| null |
http://arxiv.org/abs/1610.06447v4
|
http://arxiv.org/pdf/1610.06447v4.pdf
| null |
[
"Arnaud Dessein",
"Nicolas Papadakis",
"Jean-Luc Rouas"
] |
[
"Scene Classification"
] | 2016-10-20T00:00:00 | null | null | null | null |
[] |
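The KL-regularized special case that this framework generalizes is solved by the Sinkhorn-Knopp algorithm mentioned in the abstract; a minimal sketch of that baseline follows (toy 3-bin problem with invented cost and marginals; the paper's alternate scaling schemes for general regularizers are not shown here).

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.5, n_iter=5000):
    """Entropy-regularized optimal transport plan via Sinkhorn-Knopp.

    Solves the Boltzmann-Shannon / KL-regularized problem:
    minimize <P, C> + eps * KL-style entropy term, subject to the
    row marginal mu and column marginal nu.
    """
    K = np.exp(-C / eps)                # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)              # scale to match column marginal...
        u = mu / (K @ v)                # ...then to match row marginal
    return u[:, None] * K * v[None, :]

# Toy problem: move mass between 3 source and 3 target bins.
mu = np.array([0.5, 0.3, 0.2])
nu = np.array([0.2, 0.3, 0.5])
C = np.abs(np.subtract.outer(np.arange(3.0), np.arange(3.0)))  # |i - j| cost
P = sinkhorn(mu, nu, C)
```

Smaller `eps` gives plans closer to the unregularized earth mover's solution but needs more iterations; the matrix-nearness view in the abstract replaces this specific KL geometry with general Bregman divergences.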
https://paperswithcode.com/paper/semantic-sentence-matching-with-densely
|
1805.11360
| null | null |
Semantic Sentence Matching with Densely-connected Recurrent and Co-attentive Information
|
Sentence matching is widely used in various natural language tasks such as
natural language inference, paraphrase identification, and question answering.
For these tasks, understanding logical and semantic relationship between two
sentences is required, but this remains challenging. Although attention
mechanisms are useful for capturing the semantic relationship and properly
aligning the elements of two sentences, previous attention methods simply use a
summation operation, which does not sufficiently retain the original features. Inspired by
DenseNet, a densely connected convolutional network, we propose a
densely-connected co-attentive recurrent neural network, each layer of which
uses concatenated information of attentive features as well as hidden features
of all the preceding recurrent layers. It enables preserving the original and
the co-attentive feature information from the bottommost word embedding layer
to the uppermost recurrent layer. To alleviate the problem of an
ever-increasing size of feature vectors due to dense concatenation operations,
we also propose to use an autoencoder after dense concatenation. We evaluate
our proposed architecture on highly competitive benchmark datasets related to
sentence matching. Experimental results show that our architecture, which
retains recurrent and attentive features, achieves state-of-the-art
performances for most of the tasks.
| null |
http://arxiv.org/abs/1805.11360v2
|
http://arxiv.org/pdf/1805.11360v2.pdf
| null |
[
"Seonhoon Kim",
"Inho Kang",
"Nojun Kwak"
] |
[
"Natural Language Inference",
"Paraphrase Identification",
"Question Answering",
"Sentence"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/coconet-a-deep-neural-network-for-mapping
|
1805.11357
| null | null |
CocoNet: A deep neural network for mapping pixel coordinates to color values
|
In this paper, we propose a deep neural network approach for mapping the 2D
pixel coordinates in an image to the corresponding Red-Green-Blue (RGB) color
values. The neural network is termed CocoNet, i.e. coordinates-to-color
network. During the training process, the neural network learns to encode the
input image within its layers. More specifically, the network learns a
continuous function that approximates the discrete RGB values sampled over the
discrete 2D pixel locations. At test time, given a 2D pixel coordinate, the
neural network will output the approximate RGB values of the corresponding
pixel. By considering every 2D pixel location, the network can actually
reconstruct the entire learned image. It is important to note that we have to
train an individual neural network for each input image, i.e. one network
encodes a single image only. To the best of our knowledge, we are the first to
propose a neural approach for encoding images individually, by learning a
mapping from the 2D pixel coordinate space to the RGB color space. Our neural
image encoding approach has various low-level image processing applications
ranging from image encoding, image compression and image denoising to image
resampling and image completion. We conduct experiments that include both
quantitative and qualitative results, demonstrating the utility of our approach
and its superiority over standard baselines, e.g. bilateral filtering or
bicubic interpolation. Our code is available at
https://github.com/paubric/python-fuse-coconet.
|
It is important to note that we have to train an individual neural network for each input image, i. e. one network encodes a single image only.
|
http://arxiv.org/abs/1805.11357v3
|
http://arxiv.org/pdf/1805.11357v3.pdf
| null |
[
"Paul Andrei Bricman",
"Radu Tudor Ionescu"
] |
[
"Denoising",
"Image Compression",
"Image Denoising"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
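The coordinate-to-color idea above can be sketched with a tiny fully connected network fitted to one synthetic 8x8 image. This is a hedged NumPy toy (one network per image, as the abstract stresses); the paper's actual CocoNet architecture and training setup live in the linked repository.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: a tiny 8x8 RGB "image" (a smooth gradient), values in [0, 1].
H = W = 8
yy, xx = np.meshgrid(np.linspace(0, 1, H), np.linspace(0, 1, W), indexing="ij")
img = np.stack([xx, yy, (xx + yy) / 2], axis=-1)

# One network per image: input is a 2D pixel coordinate, output is the
# RGB value the image takes at that coordinate.
coords = np.stack([xx.ravel(), yy.ravel()], axis=1)       # (64, 2)
target = img.reshape(-1, 3)                               # (64, 3)

W1 = rng.normal(scale=1.0, size=(2, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 3)); b2 = np.zeros(3)

lr = 0.05
for _ in range(3000):
    h = np.tanh(coords @ W1 + b1)                         # hidden layer
    err = h @ W2 + b2 - target                            # MSE gradient signal
    dh = (err @ W2.T) * (1 - h ** 2)                      # backprop through tanh
    W2 -= lr * h.T @ err / len(coords); b2 -= lr * err.mean(0)
    W1 -= lr * coords.T @ dh / len(coords); b1 -= lr * dh.mean(0)

# Querying every coordinate reconstructs the learned image.
recon = (np.tanh(coords @ W1 + b1) @ W2 + b2).reshape(H, W, 3)
mse = float(np.mean((recon - img) ** 2))
```

The network weights are the encoding of the image, which is what makes the compression, denoising, and resampling applications in the abstract possible.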
https://paperswithcode.com/paper/a-universal-framework-for-learning-based-on
|
1805.08045
| null | null |
A universal framework for learning the elliptical mixture model
|
Mixture modelling using elliptical distributions promises enhanced robustness, flexibility and stability over the widely employed Gaussian mixture model (GMM). However, existing studies based on the elliptical mixture model (EMM) are restricted to several specific types of elliptical probability density functions, which are not supported by general solutions or systematic analysis frameworks; this significantly limits the rigour and the power of EMMs in applications. To this end, we propose a novel general framework for estimating and analysing the EMMs, achieved through Riemannian manifold optimisation. First, we investigate the relationships between Riemannian manifolds and elliptical distributions, and the so established connection between the original manifold and a reformulated one indicates a mismatch between those manifolds, the major cause of failure of the existing optimisation for solving general EMMs. We next propose a universal solver which is based on the optimisation of a re-designed cost and prove the existence of the same optimum as in the original problem; this is achieved in a simple, fast and stable way. We further calculate the influence functions of the EMM as theoretical bounds to quantify robustness to outliers. Comprehensive numerical results demonstrate the ability of the proposed framework to accommodate EMMs with different properties of individual functions in a stable way and with fast convergence speed. Finally, the enhanced robustness and flexibility of the proposed framework over the standard GMM are demonstrated both analytically and through comprehensive simulations.
| null |
https://arxiv.org/abs/1805.08045v5
|
https://arxiv.org/pdf/1805.08045v5.pdf
| null |
[
"Shengxi Li",
"Zeyang Yu",
"Danilo Mandic"
] |
[] | 2018-05-21T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/quantum-inspired-complex-word-embedding
|
1805.11351
| null | null |
Quantum-inspired Complex Word Embedding
|
A challenging task for word embeddings is to capture the emergent meaning or
polarity of a combination of individual words. For example, existing approaches
in word embeddings will assign high probabilities to the words "Penguin" and
"Fly" if they frequently co-occur, but it fails to capture the fact that they
occur in an opposite sense - Penguins do not fly. We hypothesize that humans do
not associate a single polarity or sentiment to each word. The word contributes
to the overall polarity of a combination of words depending upon which other
words it is combined with. This is analogous to the behavior of microscopic
particles which exist in all possible states at the same time and interfere
with each other to give rise to new states depending upon their relative
phases. We make use of the Hilbert Space representation of such particles in
Quantum Mechanics, where we ascribe to each word a relative phase, which is a
complex number, and investigate two such quantum inspired models to derive the
meaning of a combination of words. The proposed models achieve better
performances than state-of-the-art non-quantum models on the binary sentence
classification task.
| null |
http://arxiv.org/abs/1805.11351v1
|
http://arxiv.org/pdf/1805.11351v1.pdf
|
WS 2018 7
|
[
"Qiuchi Li",
"Sagar Uprety",
"Benyou Wang",
"Dawei Song"
] |
[
"Sentence",
"Sentence Classification",
"Word Embeddings"
] | 2018-05-29T00:00:00 |
https://aclanthology.org/W18-3006
|
https://aclanthology.org/W18-3006.pdf
|
quantum-inspired-complex-word-embedding-1
| null |
[] |
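The interference effect described in the abstract above can be illustrated with a minimal complex-embedding toy. All numbers, words, and the two-dimensional "state" are invented for illustration; the paper's models are trained end-to-end rather than hand-set like this.

```python
import numpy as np

# Hypothetical 2-word vocabulary: each word gets an amplitude vector and a
# relative phase (a complex-valued embedding).
amp = {"penguin": np.array([0.9, 0.1]), "fly": np.array([0.2, 0.8])}
phase = {"penguin": 0.0, "fly": np.pi}   # opposite phases -> destructive interference

def combine(words):
    """Superpose word states; relative phases create interference."""
    state = sum(amp[w] * np.exp(1j * phase[w]) for w in words)
    return np.abs(state) ** 2            # measurement probabilities

together = combine(["penguin", "fly"])
alone = combine(["penguin"]) + combine(["fly"])
```

Because the two words carry opposite phases, the combined intensities fall below the sum of the individual intensities: the interference analogue of "penguins do not fly", which plain co-occurrence statistics miss.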
https://paperswithcode.com/paper/higher-order-relation-schema-induction-using
|
1707.01917
| null | null |
Higher-order Relation Schema Induction using Tensor Factorization with Back-off and Aggregation
|
Relation Schema Induction (RSI) is the problem of identifying type signatures
of arguments of relations from unlabeled text. Most of the previous work in
this area has focused only on binary RSI, i.e., inducing only the subject and
object type signatures per relation. However, in practice, many relations are
high-order, i.e., they have more than two arguments and inducing type
signatures of all arguments is necessary. For example, in the sports domain,
inducing a schema win(WinningPlayer, OpponentPlayer, Tournament, Location) is
more informative than inducing just win(WinningPlayer, OpponentPlayer). We
refer to this problem as Higher-order Relation Schema Induction (HRSI). In this
paper, we propose Tensor Factorization with Back-off and Aggregation (TFBA), a
novel framework for the HRSI problem. To the best of our knowledge, this is the
first attempt at inducing higher-order relation schemata from unlabeled text.
Using experimental analysis on three real-world datasets, we show how TFBA
helps in dealing with sparsity and induces higher-order schemata.
|
Relation Schema Induction (RSI) is the problem of identifying type signatures of arguments of relations from unlabeled text.
|
http://arxiv.org/abs/1707.01917v2
|
http://arxiv.org/pdf/1707.01917v2.pdf
|
ACL 2018 7
|
[
"Madhav Nimishakavi",
"Partha Talukdar"
] |
[
"Relation"
] | 2017-07-06T00:00:00 |
https://aclanthology.org/P18-1146
|
https://aclanthology.org/P18-1146.pdf
|
higher-order-relation-schema-induction-using-1
| null |
[] |
https://paperswithcode.com/paper/fully-statistical-neural-belief-tracking
|
1805.11350
| null | null |
Fully Statistical Neural Belief Tracking
|
This paper proposes an improvement to the existing data-driven Neural Belief
Tracking (NBT) framework for Dialogue State Tracking (DST). The existing NBT
model uses a hand-crafted belief state update mechanism which involves an
expensive manual retuning step whenever the model is deployed to a new dialogue
domain. We show that this update mechanism can be learned jointly with the
semantic decoding and context modelling parts of the NBT model, eliminating the
last rule-based module from this DST framework. We propose two different
statistical update mechanisms and show that dialogue dynamics can be modelled
with a very small number of additional model parameters. In our DST evaluation
over three languages, we show that this model achieves competitive performance
and provides a robust framework for building resource-light DST models.
|
This paper proposes an improvement to the existing data-driven Neural Belief Tracking (NBT) framework for Dialogue State Tracking (DST).
|
http://arxiv.org/abs/1805.11350v1
|
http://arxiv.org/pdf/1805.11350v1.pdf
| null |
[
"Nikola Mrkšić",
"Ivan Vulić"
] |
[
"Dialogue State Tracking"
] | 2018-05-29T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Dynamic sparse training methods train neural networks in a sparse manner, starting with an initial sparse mask, and periodically updating the mask based on some criteria.",
"full_name": "Dynamic Sparse Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Model Compression",
"parent": null
},
"name": "DST",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
}
] |
https://paperswithcode.com/paper/uncertainty-gated-network-for-land-cover
|
1805.11348
| null | null |
Uncertainty Gated Network for Land Cover Segmentation
|
The production of thematic maps depicting land cover is one of the most
common applications of remote sensing. To this end, several semantic
segmentation approaches, based on deep learning, have been proposed in the
literature, but land cover segmentation is still considered an open problem due
to specific challenges of remote sensing imaging. In this paper we
propose a novel approach to deal with the problem of modelling multiscale
contexts surrounding pixels of different land cover categories. The approach
leverages the computation of a heteroscedastic measure of uncertainty when
classifying individual pixels in an image. This classification uncertainty
measure is used to define a set of memory gates between layers that allow a
principled method to select the optimal decision for each pixel.
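The gating idea in the abstract above — use per-pixel classification uncertainty to decide when to fall back to a wider context — can be sketched in a few lines. This is an illustrative stand-in using predictive entropy and a hypothetical threshold, not the paper's learned heteroscedastic measure.

```python
import math

def predictive_entropy(probs):
    # Entropy of the per-pixel class posterior as an uncertainty score;
    # the paper uses a learned heteroscedastic uncertainty, which this
    # stand-in only approximates.
    return -sum(p * math.log(p) for p in probs if p > 0)

def gate_open(probs, threshold=0.5):
    # Open the memory gate (consult a wider multiscale context) only
    # for pixels whose classification is uncertain.
    return predictive_entropy(probs) > threshold

print(gate_open([0.99, 0.01]))  # confident pixel: gate stays closed
print(gate_open([0.5, 0.5]))   # uncertain pixel: gate opens
```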
| null |
http://arxiv.org/abs/1805.11348v1
|
http://arxiv.org/pdf/1805.11348v1.pdf
| null |
[
"Guillem Pascual",
"Santi Seguí",
"Jordi Vitrià"
] |
[
"General Classification",
"Segmentation",
"Semantic Segmentation"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/pointly-supervised-action-localization
|
1805.11333
| null | null |
Pointly-Supervised Action Localization
|
This paper strives for spatio-temporal localization of human actions in
videos. In the literature, the consensus is to achieve localization by training
on bounding box annotations provided for each frame of each training video. As
annotating boxes in video is expensive, cumbersome and error-prone, we propose
to bypass box-supervision. Instead, we introduce action localization based on
point-supervision. We start from unsupervised spatio-temporal proposals, which
provide a set of candidate regions in videos. While normally used exclusively
for inference, we show spatio-temporal proposals can also be leveraged during
training when guided by a sparse set of point annotations. We introduce an
overlap measure between points and spatio-temporal proposals and incorporate
them all into a new objective of a Multiple Instance Learning optimization.
During inference, we introduce pseudo-points, visual cues from videos, that
automatically guide the selection of spatio-temporal proposals. We outline five
spatial and one temporal pseudo-point, as well as a measure to best leverage
pseudo-points at test time. Experimental evaluation on three action
localization datasets shows our pointly-supervised approach (i) is as effective
as traditional box-supervision at a fraction of the annotation cost, (ii) is
robust to sparse and noisy point annotations, (iii) benefits from pseudo-points
during inference, and (iv) outperforms recent weakly-supervised alternatives.
This leads us to conclude that points provide a viable alternative to boxes for
action localization.
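The overlap measure between point annotations and spatio-temporal proposals can be illustrated as a per-frame inclusion ratio; `point_overlap` and its box format are hypothetical simplifications, not the paper's exact definition.

```python
def point_overlap(points, box):
    # Fraction of annotated points that fall inside a proposal's spatial
    # extent for one frame. points: list of (x, y); box: (x1, y1, x2, y2).
    # A simple inclusion ratio standing in for the paper's overlap measure.
    x1, y1, x2, y2 = box
    inside = sum(1 for (x, y) in points if x1 <= x <= x2 and y1 <= y <= y2)
    return inside / len(points) if points else 0.0

print(point_overlap([(5, 5), (20, 20)], (0, 0, 10, 10)))  # 0.5
```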
| null |
http://arxiv.org/abs/1805.11333v2
|
http://arxiv.org/pdf/1805.11333v2.pdf
| null |
[
"Pascal Mettes",
"Cees G. M. Snoek"
] |
[
"Action Localization",
"Multiple Instance Learning",
"Temporal Localization"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/training-verified-learners-with-learned
|
1805.10265
| null | null |
Training verified learners with learned verifiers
|
This paper proposes a new algorithmic framework, predictor-verifier training,
to train neural networks that are verifiable, i.e., networks that provably
satisfy some desired input-output properties. The key idea is to simultaneously
train two networks: a predictor network that performs the task at hand, e.g.,
predicting labels given inputs, and a verifier network that computes a bound on
how well the predictor satisfies the properties being verified. Both networks
can be trained simultaneously to optimize a weighted combination of the
standard data-fitting loss and a term that bounds the maximum violation of the
property. Experiments show that not only is the predictor-verifier architecture
able to train networks to achieve state-of-the-art verified robustness to
adversarial examples with much shorter training times (outperforming previous
algorithms on small datasets like MNIST and SVHN), but it can also be scaled to
produce the first known (to the best of our knowledge) verifiably robust
networks for CIFAR-10.
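The weighted objective described above — the data-fitting loss plus a term bounding the maximum property violation — reduces to a one-line combination; `pv_objective` and its `weight` value are illustrative, not taken from the paper.

```python
def pv_objective(task_loss, violation_bound, weight=0.1):
    # Weighted combination of the predictor's data-fitting loss and the
    # verifier's bound on the worst-case property violation; `weight`
    # is a hypothetical trade-off hyper-parameter.
    return task_loss + weight * max(violation_bound, 0.0)

print(pv_objective(0.5, 2.0))   # 0.7
print(pv_objective(1.0, -3.0))  # 1.0 (no violation, pure task loss)
```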
| null |
http://arxiv.org/abs/1805.10265v2
|
http://arxiv.org/pdf/1805.10265v2.pdf
| null |
[
"Krishnamurthy Dvijotham",
"Sven Gowal",
"Robert Stanforth",
"Relja Arandjelovic",
"Brendan O'Donoghue",
"Jonathan Uesato",
"Pushmeet Kohli"
] |
[] | 2018-05-25T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/hamiltonian-variational-auto-encoder
|
1805.11328
| null | null |
Hamiltonian Variational Auto-Encoder
|
Variational Auto-Encoders (VAEs) have become very popular techniques to
perform inference and learning in latent variable models as they allow us to
leverage the rich representational power of neural networks to obtain flexible
approximations of the posterior of latent variables as well as tight evidence
lower bounds (ELBOs). Combined with stochastic variational inference, this
provides a methodology scaling to large datasets. However, for this methodology
to be practically efficient, it is necessary to obtain low-variance unbiased
estimators of the ELBO and its gradients with respect to the parameters of
interest. While the use of Markov chain Monte Carlo (MCMC) techniques such as
Hamiltonian Monte Carlo (HMC) has been previously suggested to achieve this
[23, 26], the proposed methods require specifying reverse kernels which have a
large impact on performance. Additionally, the resulting unbiased estimator of
the ELBO for most MCMC kernels is typically not amenable to the
reparameterization trick. We show here how to optimally select reverse kernels
in this setting and, by building upon Hamiltonian Importance Sampling (HIS)
[17], we obtain a scheme that provides low-variance unbiased estimators of the
ELBO and its gradients using the reparameterization trick. This allows us to
develop a Hamiltonian Variational Auto-Encoder (HVAE). This method can be
reinterpreted as a target-informed normalizing flow [20] which, within our
context, only requires a few evaluations of the gradient of the sampled
likelihood and trivial Jacobian calculations at each iteration.
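The reparameterization trick the abstract relies on can be sketched for a scalar Gaussian; `reparameterize` is a minimal illustration of the idea, not the HVAE implementation.

```python
import random

def reparameterize(mu, sigma, rng=None):
    # Reparameterization trick: draw eps ~ N(0, 1), then form
    # z = mu + sigma * eps, so the sample is a deterministic
    # (differentiable) function of mu and sigma given the noise.
    rng = rng or random.Random(0)
    eps = rng.gauss(0.0, 1.0)
    return mu + sigma * eps

print(reparameterize(0.0, 1.0))
```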
|
However, for this methodology to be practically efficient, it is necessary to obtain low-variance unbiased estimators of the ELBO and its gradients with respect to the parameters of interest.
|
http://arxiv.org/abs/1805.11328v2
|
http://arxiv.org/pdf/1805.11328v2.pdf
|
NeurIPS 2018 12
|
[
"Anthony L. Caterini",
"Arnaud Doucet",
"Dino Sejdinovic"
] |
[
"Variational Inference"
] | 2018-05-29T00:00:00 |
http://papers.nips.cc/paper/8039-hamiltonian-variational-auto-encoder
|
http://papers.nips.cc/paper/8039-hamiltonian-variational-auto-encoder.pdf
|
hamiltonian-variational-auto-encoder-1
| null |
[] |
https://paperswithcode.com/paper/lightweight-probabilistic-deep-networks
|
1805.11327
| null | null |
Lightweight Probabilistic Deep Networks
|
Even though probabilistic treatments of neural networks have a long history,
they have not found widespread use in practice. Sampling approaches are often
too slow already for simple networks. The size of the inputs and the depth of
typical CNN architectures in computer vision only compound this problem.
Uncertainty in neural networks has thus been largely ignored in practice,
despite the fact that it may provide important information about the
reliability of predictions and the inner workings of the network. In this
paper, we introduce two lightweight approaches to making supervised learning
with probabilistic deep networks practical: First, we suggest probabilistic
output layers for classification and regression that require only minimal
changes to existing networks. Second, we employ assumed density filtering and
show that activation uncertainties can be propagated in a practical fashion
through the entire network, again with minor changes. Both probabilistic
networks retain the predictive power of the deterministic counterpart, but
yield uncertainties that correlate well with the empirical error induced by
their predictions. Moreover, the robustness to adversarial examples is
significantly increased.
|
Even though probabilistic treatments of neural networks have a long history, they have not found widespread use in practice.
|
http://arxiv.org/abs/1805.11327v1
|
http://arxiv.org/pdf/1805.11327v1.pdf
|
CVPR 2018 6
|
[
"Jochen Gast",
"Stefan Roth"
] |
[] | 2018-05-29T00:00:00 |
http://openaccess.thecvf.com/content_cvpr_2018/html/Gast_Lightweight_Probabilistic_Deep_CVPR_2018_paper.html
|
http://openaccess.thecvf.com/content_cvpr_2018/papers/Gast_Lightweight_Probabilistic_Deep_CVPR_2018_paper.pdf
|
lightweight-probabilistic-deep-networks-1
| null |
[] |
https://paperswithcode.com/paper/long-short-term-memory-networks-for-csi300
|
1805.11954
| null | null |
Long Short-Term Memory Networks for CSI300 Volatility Prediction with Baidu Search Volume
|
Intense volatility in financial markets affects humans worldwide. Therefore,
relatively accurate prediction of volatility is critical. We suggest that
massive data sources resulting from human interaction with the Internet may
offer a new perspective on the behavior of market participants in periods of
large market movements. First, we select 28 keywords related to finance as
indicators of public mood and macroeconomic factors. Then the daily search
volumes of those 28 keywords, based on the Baidu index, are collected manually
from June 1, 2006 to October 29, 2017. We apply a Long Short-Term
Memory neural network to forecast CSI300 volatility using those search volume
data. Compared to the benchmark GARCH model, our forecast is more accurate,
which demonstrates the effectiveness of the LSTM neural network in volatility
forecasting.
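As a rough illustration of the prediction target, realized volatility is commonly proxied by the sample standard deviation of log returns; the paper's exact CSI300 volatility definition may differ.

```python
import math

def realized_volatility(prices):
    # Sample standard deviation of daily log returns -- a standard
    # realized-volatility proxy used as a forecasting target.
    returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
    return math.sqrt(var)

print(realized_volatility([100.0, 101.0, 99.5, 102.0]))
```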
| null |
http://arxiv.org/abs/1805.11954v1
|
http://arxiv.org/pdf/1805.11954v1.pdf
| null |
[
"Yu-Long Zhou",
"Ren-Jie Han",
"Qian Xu",
"Wei-Ke Zhang"
] |
[] | 2018-05-29T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] |
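The activation formulas quoted in the method entries above can be verified directly:

```python
import math

def sigmoid(x):
    # f(x) = 1 / (1 + exp(-x)), as in the Sigmoid Activation entry.
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # f(x) = (e^x - e^-x) / (e^x + e^-x); math.tanh computes the same.
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

print(sigmoid(0.0))  # 0.5
print(tanh(0.0))     # 0.0
```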
https://paperswithcode.com/paper/peekaboo-where-are-the-objects-structure
|
1802.02796
| null | null |
Peekaboo - Where are the Objects? Structure Adjusting Superpixels
|
This paper addresses the search for a fast and meaningful image segmentation
in the context of $k$-means clustering. The proposed method builds on a
widely-used local version of Lloyd's algorithm, called Simple Linear Iterative
Clustering (SLIC). We propose an algorithm which extends SLIC to dynamically
adjust the local search, adopting superpixel resolution dynamically to
structure existent in the image, and thus provides for more meaningful
superpixels in the same linear runtime as standard SLIC. The proposed method is
evaluated against state-of-the-art techniques and improved boundary adherence
and undersegmentation error are observed, whilst still remaining among the
fastest algorithms which are tested.
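The Lloyd iteration that SLIC builds on can be sketched in one dimension; SLIC additionally restricts each assignment to a local window and clusters in a joint color-position space, which this global sketch omits.

```python
def kmeans_1d(points, centers, iters=10):
    # One-dimensional Lloyd's algorithm: assign each point to its
    # nearest center, then move each center to its cluster mean.
    for _ in range(iters):
        clusters = {i: [] for i in range(len(centers))}
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in clusters.items()]
    return centers

print(kmeans_1d([1.0, 2.0, 10.0, 11.0], [0.0, 5.0]))  # [1.5, 10.5]
```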
| null |
http://arxiv.org/abs/1802.02796v2
|
http://arxiv.org/pdf/1802.02796v2.pdf
| null |
[
"Georg Maierhofer",
"Daniel Heydecker",
"Angelica I. Aviles-Rivero",
"Samar M. Alsaleh",
"Carola-Bibiane Schönlieb"
] |
[
"Clustering",
"Image Segmentation",
"Semantic Segmentation",
"Superpixels"
] | 2018-02-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/greedy-structure-learning-of-hierarchical
|
1701.06171
| null | null |
Greedy Structure Learning of Hierarchical Compositional Models
|
In this work, we consider the problem of learning a hierarchical generative
model of an object from a set of images which show examples of the object in
the presence of variable background clutter. Existing approaches to this
problem are limited by making strong a-priori assumptions about the object's
geometric structure and require segmented training data for learning. In this
paper, we propose a novel framework for learning hierarchical compositional
models (HCMs) which do not suffer from the mentioned limitations. We present a
generalized formulation of HCMs and describe a greedy structure learning
framework that consists of two phases: Bottom-up part learning and top-down
model composition. Our framework integrates the foreground-background
segmentation problem into the structure learning task via a background model.
As a result, we can jointly optimize for the number of layers in the hierarchy,
the number of parts per layer and a foreground-background segmentation based on
class labels only. We show that the learned HCMs are semantically meaningful
and achieve competitive results when compared to other generative object models
at object classification on a standard transfer learning dataset.
| null |
http://arxiv.org/abs/1701.06171v4
|
http://arxiv.org/pdf/1701.06171v4.pdf
|
CVPR 2019 6
|
[
"Adam Kortylewski",
"Aleksander Wieczorek",
"Mario Wieser",
"Clemens Blumer",
"Sonali Parbhoo",
"Andreas Morel-Forster",
"Volker Roth",
"Thomas Vetter"
] |
[
"Object",
"Transfer Learning"
] | 2017-01-22T00:00:00 |
http://openaccess.thecvf.com/content_CVPR_2019/html/Kortylewski_Greedy_Structure_Learning_of_Hierarchical_Compositional_Models_CVPR_2019_paper.html
|
http://openaccess.thecvf.com/content_CVPR_2019/papers/Kortylewski_Greedy_Structure_Learning_of_Hierarchical_Compositional_Models_CVPR_2019_paper.pdf
|
greedy-structure-learning-of-hierarchical-1
| null |
[] |
https://paperswithcode.com/paper/neural-inverse-rendering-for-general
|
1802.10328
| null | null |
Neural Inverse Rendering for General Reflectance Photometric Stereo
|
We present a novel convolutional neural network architecture for photometric
stereo (Woodham, 1980), a problem of recovering 3D object surface normals from
multiple images observed under varying illuminations. Despite its long history
in computer vision, the problem still shows fundamental challenges for surfaces
with unknown general reflectance properties (BRDFs). Leveraging deep neural
networks to learn complicated reflectance models is promising, but studies in
this direction are very limited due to difficulties in acquiring accurate
ground truth for training and also in designing networks invariant to
permutation of input images. In order to address these challenges, we propose a
physics based unsupervised learning framework where surface normals and BRDFs
are predicted by the network and fed into the rendering equation to synthesize
observed images. The network weights are optimized during testing by minimizing
reconstruction loss between observed and synthesized images. Thus, our learning
process does not require ground truth normals or even pre-training on external
images. Our method is shown to achieve the state-of-the-art performance on a
challenging real-world scene benchmark.
| null |
http://arxiv.org/abs/1802.10328v2
|
http://arxiv.org/pdf/1802.10328v2.pdf
|
ICML 2018 7
|
[
"Tatsunori Taniai",
"Takanori Maehara"
] |
[
"Inverse Rendering"
] | 2018-02-28T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1901
|
http://proceedings.mlr.press/v80/taniai18a/taniai18a.pdf
|
neural-inverse-rendering-for-general-1
| null |
[] |
https://paperswithcode.com/paper/cnn-based-detection-of-generic-constrast
|
1805.11318
| null | null |
CNN-Based Detection of Generic Contrast Adjustment with JPEG Post-processing
|
Detection of contrast adjustments in the presence of JPEG post-processing is
known to be a challenging task. JPEG post-processing is often applied
innocently, as JPEG is the most common image format, or it may correspond to a
laundering attack, when it is purposely applied to erase the traces of
manipulation. In this paper, we propose a CNN-based detector for generic
contrast adjustment, which is robust to JPEG compression. The proposed system
relies on a patch-based Convolutional Neural Network (CNN), trained to
distinguish pristine images from contrast adjusted images, for some selected
adjustment operators of different nature. Robustness to JPEG compression is
achieved by training the CNN with JPEG examples, compressed over a range of
Quality Factors (QFs). Experimental results show that the detector works very
well and scales well with respect to the adjustment type, yielding very good
performance under a large variety of unseen tonal adjustments.
| null |
http://arxiv.org/abs/1805.11318v1
|
http://arxiv.org/pdf/1805.11318v1.pdf
| null |
[
"Mauro Barni",
"Andrea Costanzo",
"Ehsan Nowroozi",
"Benedetta Tondi"
] |
[] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/neural-networks-for-stock-price-prediction
|
1805.11317
| null | null |
Neural networks for stock price prediction
|
Due to the extremely volatile nature of financial markets, it is commonly
accepted that stock price prediction is a challenging task. However, in
order to make profits or understand the essence of equity market, numerous
market participants or researchers try to forecast stock price using various
statistical, econometric or even neural network models. In this work, we survey
and compare the predictive power of five neural network models, namely, back
propagation (BP) neural network, radial basis function (RBF) neural network,
general regression neural network (GRNN), support vector machine regression
(SVMR), least squares support vector machine regression (LS-SVMR). We apply
the five models to make price prediction of three individual stocks, namely,
Bank of China, Vanke A and Kweichow Moutai. Adopting mean square error and
average absolute percentage error as criteria, we find BP neural network
consistently and robustly outperforms the other four models.
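The two evaluation criteria named in the abstract are straightforward to compute:

```python
def mse(actual, predicted):
    # Mean square error.
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def mape(actual, predicted):
    # Average absolute percentage error (as a fraction of the actual value).
    return sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

print(mse([1.0, 2.0], [1.0, 4.0]))   # 2.0
print(mape([1.0, 2.0], [1.0, 4.0]))  # 0.5
```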
|
Due to the extremely volatile nature of financial markets, it is commonly accepted that stock price prediction is a challenging task.
|
http://arxiv.org/abs/1805.11317v1
|
http://arxiv.org/pdf/1805.11317v1.pdf
| null |
[
"Yue-Gang Song",
"Yu-Long Zhou",
"Ren-Jie Han"
] |
[
"Prediction",
"regression",
"Stock Price Prediction"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/intelligent-trainer-for-model-based
|
1805.09496
| null | null |
Intelligent Trainer for Model-Based Reinforcement Learning
|
Model-based reinforcement learning (MBRL) has been proposed as a promising alternative solution to tackle the high sampling cost challenge in the canonical reinforcement learning (RL), by leveraging a learned model to generate synthesized data for policy training purpose. The MBRL framework, nevertheless, is inherently limited by the convoluted process of jointly learning the control policy and configuring hyper-parameters (e.g., global/local models, real and synthesized data, etc.). The training process could be tedious and prohibitively costly. In this research, we propose a "reinforcement on reinforcement" (RoR) architecture to decompose the convoluted tasks into two layers of reinforcement learning. The inner layer is the canonical model-based RL training process environment (TPE), which learns the control policy for the underlying system and exposes interfaces to access states, actions and rewards. The outer layer presents an RL agent, called the AI trainer, which learns an optimal hyper-parameter configuration for the inner TPE. This decomposition approach provides a desirable flexibility to implement different trainer designs, called "train the trainer". In our research, we propose and optimize two alternative trainer designs: 1) a uni-head trainer and 2) a multi-head trainer. Our proposed RoR framework is evaluated on five tasks in the OpenAI Gym (i.e., Pendulum, Mountain Car, Reacher, Half Cheetah and Swimmer). Compared to three other baseline algorithms, our proposed Train-the-Trainer algorithm has competitive performance in auto-tuning capability, with up to 56% expected sampling cost saving without knowing the best parameter setting in advance. The proposed trainer framework can be easily extended to other cases in which hyper-parameter tuning is costly.
|
Model-based reinforcement learning (MBRL) has been proposed as a promising alternative solution to tackle the high sampling cost challenge in the canonical reinforcement learning (RL), by leveraging a learned model to generate synthesized data for policy training purpose.
|
https://arxiv.org/abs/1805.09496v6
|
https://arxiv.org/pdf/1805.09496v6.pdf
| null |
[
"Yuanlong Li",
"Linsen Dong",
"Xin Zhou",
"Yonggang Wen",
"Kyle Guan"
] |
[
"model",
"Model-based Reinforcement Learning",
"OpenAI Gym",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-05-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/overcoming-catastrophic-forgetting-with-hard
|
1801.01423
| null | null |
Overcoming catastrophic forgetting with hard attention to the task
|
Catastrophic forgetting occurs when a neural network loses the information
learned in a previous task after training on subsequent tasks. This problem
remains a hurdle for artificial intelligence systems with sequential learning
capabilities. In this paper, we propose a task-based hard attention mechanism
that preserves previous tasks' information without affecting the current task's
learning. A hard attention mask is learned concurrently to every task, through
stochastic gradient descent, and previous masks are exploited to condition such
learning. We show that the proposed mechanism is effective for reducing
catastrophic forgetting, cutting current rates by 45 to 80%. We also show that
it is robust to different hyperparameter choices, and that it offers a number
of monitoring capabilities. The approach features the possibility to control
both the stability and compactness of the learned knowledge, which we believe
makes it also attractive for online learning or network compression
applications.
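A minimal sketch of the masking idea, assuming binary per-task masks over units; the paper learns almost-binary masks via gated sigmoids and anneals them, which this simplification omits.

```python
def gate_units(activations, mask):
    # A per-task hard attention mask multiplicatively gates each unit,
    # so units with mask 0 contribute nothing to the current task.
    return [a * m for a, m in zip(activations, mask)]

def protected_units(previous_masks):
    # Units claimed by any earlier task are protected from updates,
    # conditioning the learning of the current task's mask -- a
    # simplified view of the paper's mechanism.
    return [max(bits) for bits in zip(*previous_masks)]

print(gate_units([0.2, 1.0, 3.0], [1, 0, 1]))   # [0.2, 0.0, 3.0]
print(protected_units([[1, 0, 0], [0, 0, 1]]))  # [1, 0, 1]
```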
|
In this paper, we propose a task-based hard attention mechanism that preserves previous tasks' information without affecting the current task's learning.
|
http://arxiv.org/abs/1801.01423v3
|
http://arxiv.org/pdf/1801.01423v3.pdf
|
ICML 2018 7
|
[
"Joan Serrà",
"Dídac Surís",
"Marius Miron",
"Alexandros Karatzoglou"
] |
[
"Continual Learning",
"Hard Attention"
] | 2018-01-04T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2155
|
http://proceedings.mlr.press/v80/serra18a/serra18a.pdf
|
overcoming-catastrophic-forgetting-with-hard-1
| null |
[] |
https://paperswithcode.com/paper/e-commerce-anomaly-detection-a-bayesian-semi
|
1804.03836
| null | null |
E-commerce Anomaly Detection: A Bayesian Semi-Supervised Tensor Decomposition Approach using Natural Gradients
|
Anomaly Detection has several important applications. In this paper, our
focus is on detecting anomalies in seller-reviewer data using tensor
decomposition. While tensor-decomposition is mostly unsupervised, we formulate
Bayesian semi-supervised tensor decomposition to take advantage of sparse
labeled data. In addition, we use Pólya-Gamma data augmentation for the
semi-supervised Bayesian tensor decomposition. Finally, we show that the
Pólya-Gamma formulation simplifies calculation of the Fisher information
matrix for partial natural gradient learning. Our experimental results show
that our semi-supervised approach outperforms state-of-the-art unsupervised
baselines, and that partial natural gradient learning outperforms
stochastic gradient learning and Online-EM with sufficient statistics.
| null |
http://arxiv.org/abs/1804.03836v3
|
http://arxiv.org/pdf/1804.03836v3.pdf
| null |
[
"Anil R. Yelundur",
"Srinivasan H. Sengamedu",
"Bamdev Mishra"
] |
[
"Anomaly Detection",
"Data Augmentation",
"Tensor Decomposition"
] | 2018-04-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/learning-data-augmentation-for-brain-tumor
|
1805.11291
| null | null |
Learning Data Augmentation for Brain Tumor Segmentation with Coarse-to-Fine Generative Adversarial Networks
|
There is a common belief that the successful training of deep neural networks
requires many annotated training samples, which are often expensive and
difficult to obtain especially in the biomedical imaging field. While it is
often easy for researchers to use data augmentation to expand the size of
training sets, constructing and generating generic augmented data that is able
to teach the network the desired invariance and robustness properties using
traditional data augmentation techniques is challenging in practice. In this
paper, we propose a novel automatic data augmentation method that uses
generative adversarial networks to learn augmentations that enable machine
learning based method to learn the available annotated samples more
efficiently. The architecture consists of a coarse-to-fine generator to capture
the manifold of the training sets and generate generic augmented data. In our
experiments, we show the efficacy of our approach on a Magnetic Resonance
Imaging (MRI) image, achieving improvements of 3.5% Dice coefficient on the
BRATS15 Challenge dataset as compared to traditional augmentation approaches.
Also, our proposed method successfully boosts a common segmentation network to
reach the state-of-the-art performance on the BRATS15 Challenge.
| null |
http://arxiv.org/abs/1805.11291v2
|
http://arxiv.org/pdf/1805.11291v2.pdf
| null |
[
"Tony C. W. Mok",
"Albert C. S. Chung"
] |
[
"Brain Tumor Segmentation",
"Data Augmentation",
"Tumor Segmentation"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/kong-kernels-for-ordered-neighborhood-graphs
|
1805.10014
| null | null |
KONG: Kernels for ordered-neighborhood graphs
|
We present novel graph kernels for graphs with node and edge labels that have
ordered neighborhoods, i.e. when neighbor nodes follow an order. Graphs with
ordered neighborhoods are a natural data representation for evolving graphs
where edges are created over time, which induces an order. Combining
convolutional subgraph kernels and string kernels, we design new scalable
algorithms for generation of explicit graph feature maps using sketching
techniques. We obtain precise bounds for the approximation accuracy and
computational complexity of the proposed approaches and demonstrate their
applicability on real datasets. In particular, our experiments demonstrate that
neighborhood ordering results in more informative features. For the special
case of general graphs, i.e. graphs without ordered neighborhoods, the new
graph kernels yield efficient and simple algorithms for the comparison of label
distributions between graphs.
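For the special case of general graphs mentioned at the end of the abstract, the explicit feature map reduces to comparing node-label distributions; a normalised label histogram is a minimal sketch of that feature map.

```python
from collections import Counter

def label_histogram(node_labels):
    # Normalised node-label distribution of a graph -- the explicit
    # feature used when neighborhoods carry no order.
    total = len(node_labels)
    return {label: count / total
            for label, count in Counter(node_labels).items()}

print(label_histogram(["a", "a", "b", "c"]))
```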
|
We present novel graph kernels for graphs with node and edge labels that have ordered neighborhoods, i.e. when neighbor nodes follow an order.
|
http://arxiv.org/abs/1805.10014v2
|
http://arxiv.org/pdf/1805.10014v2.pdf
|
NeurIPS 2018 12
|
[
"Moez Draief",
"Konstantin Kutzkov",
"Kevin Scaman",
"Milan Vojnovic"
] |
[] | 2018-05-25T00:00:00 |
http://papers.nips.cc/paper/7660-kong-kernels-for-ordered-neighborhood-graphs
|
http://papers.nips.cc/paper/7660-kong-kernels-for-ordered-neighborhood-graphs.pdf
|
kong-kernels-for-ordered-neighborhood-graphs-1
| null |
[] |
https://paperswithcode.com/paper/multi-hop-inference-for-sentence-level
|
1805.11267
| null | null |
Multi-hop Inference for Sentence-level TextGraphs: How Challenging is Meaningfully Combining Information for Science Question Answering?
|
Question Answering for complex questions is often modeled as a graph
construction or traversal task, where a solver must build or traverse a graph
of facts that answer and explain a given question. This "multi-hop" inference
has been shown to be extremely challenging, with few models able to aggregate
more than two facts before being overwhelmed by "semantic drift", or the
tendency for long chains of facts to quickly drift off topic. This is a major
barrier to current inference models, as even elementary science questions
require an average of 4 to 6 facts to answer and explain. In this work we
empirically characterize the difficulty of building or traversing a graph of
sentences connected by lexical overlap, by evaluating chance sentence
aggregation quality through 9,784 manually-annotated judgments across knowledge
graphs built from three free-text corpora (including study guides and Simple
Wikipedia). We demonstrate semantic drift tends to be high and aggregation
quality low, at between 0.04% and 3%, and highlight scenarios that maximize the
likelihood of meaningfully combining information.
| null |
http://arxiv.org/abs/1805.11267v1
|
http://arxiv.org/pdf/1805.11267v1.pdf
|
WS 2018 6
|
[
"Peter Jansen"
] |
[
"graph construction",
"Knowledge Graphs",
"Question Answering",
"Science Question Answering",
"Sentence"
] | 2018-05-29T00:00:00 |
https://aclanthology.org/W18-1703
|
https://aclanthology.org/W18-1703.pdf
|
multi-hop-inference-for-sentence-level-1
| null |
[] |
https://paperswithcode.com/paper/disentangling-by-partitioning-a
|
1805.11264
| null | null |
Disentangling by Partitioning: A Representation Learning Framework for Multimodal Sensory Data
|
Multimodal sensory data resembles the form of information perceived by humans
for learning, and are easy to obtain in large quantities. Compared to unimodal
data, synchronization of concepts between modalities in such data provides
supervision for disentangling the underlying explanatory factors of each
modality. Previous work leveraging multimodal data has mainly focused on
retaining only the modality-invariant factors while discarding the rest. In
this paper, we present a partitioned variational autoencoder (PVAE) and several
training objectives to learn disentangled representations, which encode not
only the shared factors, but also modality-dependent ones, into separate latent
variables. Specifically, PVAE integrates a variational inference framework and
a multimodal generative model that partitions the explanatory factors and
conditions only on the relevant subset of them for generation. We evaluate our
model on two parallel speech/image datasets, and demonstrate its ability to
learn disentangled representations by qualitatively exploring within-modality
and cross-modality conditional generation with semantics and styles specified
by examples. For quantitative analysis, we evaluate the classification accuracy
of automatically discovered semantic units. Our PVAE can achieve over 99%
accuracy on both modalities.
| null |
http://arxiv.org/abs/1805.11264v1
|
http://arxiv.org/pdf/1805.11264v1.pdf
| null |
[
"Wei-Ning Hsu",
"James Glass"
] |
[
"Representation Learning",
"Variational Inference"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/statistical-mechanical-analysis-of-sparse
|
1805.11259
| null | null |
Statistical mechanical analysis of sparse linear regression as a variable selection problem
|
An algorithmic limit of compressed sensing or related variable-selection
problems is analytically evaluated when a design matrix is given by an
overcomplete random matrix. The replica method from statistical mechanics is
employed to derive the result. The analysis is conducted through evaluation of
the entropy, an exponential rate of the number of combinations of variables
giving a specific value of fit error to given data which is assumed to be
generated from a linear process using the design matrix. This yields the
typical achievable limit of the fit error when solving a representative
$\ell_0$ problem and includes the presence of unfavourable phase transitions
preventing local search algorithms from reaching the minimum-error
configuration. The associated phase diagrams are presented. A noteworthy
outcome of the phase diagrams is that there exists a wide parameter region
where any phase transition is absent from the high temperature to the lowest
temperature at which the minimum-error configuration or the ground state is
reached. This implies that certain local search algorithms can find the ground
state with moderate computational costs in that region. Another noteworthy
result is the presence of the random first-order transition in the strong noise
case. The theoretical evaluation of the entropy is confirmed by extensive
numerical methods using the exchange Monte Carlo and the multi-histogram
methods. Another numerical test based on a metaheuristic optimisation algorithm
called simulated annealing is conducted, which well supports the theoretical
predictions on the local search algorithms. In the successful region with no
phase transition, the computational cost of the simulated annealing to reach
the ground state is estimated as the third order polynomial of the model
dimensionality.
| null |
http://arxiv.org/abs/1805.11259v2
|
http://arxiv.org/pdf/1805.11259v2.pdf
| null |
[
"Tomoyuki Obuchi",
"Yoshinori Nakanishi-Ohno",
"Masato Okada",
"Yoshiyuki Kabashima"
] |
[
"compressed sensing",
"regression",
"Variable Selection"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/deeply-learning-molecular-structure-property
|
1805.10988
| null | null |
Deeply learning molecular structure-property relationships using attention- and gate-augmented graph convolutional network
|
Molecular structure-property relationships are key to molecular engineering
for materials and drug discovery. The rise of deep learning offers a new viable
solution to elucidate the structure-property relationships directly from
chemical data. Here we show that the performance of graph convolutional
networks (GCNs) for the prediction of molecular properties can be improved by
incorporating attention and gate mechanisms. The attention mechanism enables a
GCN to identify atoms in different environments. The gated skip-connection
further improves the GCN by updating feature maps at an appropriate rate. We
demonstrate that the resulting attention- and gate-augmented GCN could extract
better structural features related to a target molecular property such as
solubility, polarity, synthetic accessibility and photovoltaic efficiency
compared to the vanilla GCN. More interestingly, it identified two distinct
parts of molecules as essential structural features for high photovoltaic
efficiency, and each of them coincided with the areas of donor and acceptor
orbitals for charge-transfer excitations, respectively. As a result, the new
model could accurately predict molecular properties and place molecules with
similar properties close to each other in a well-trained latent space, which is
critical for successful molecular engineering.
|
Molecular structure-property relationships are key to molecular engineering for materials and drug discovery.
|
http://arxiv.org/abs/1805.10988v3
|
http://arxiv.org/pdf/1805.10988v3.pdf
| null |
[
"Seongok Ryu",
"Jaechang Lim",
"Seung Hwan Hong",
"Woo Youn Kim"
] |
[
"Drug Discovery"
] | 2018-05-28T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "A **Graph Convolutional Network**, or **GCN**, is an approach for semi-supervised learning on graph-structured data. It is based on an efficient variant of [convolutional neural networks](https://paperswithcode.com/methods/category/convolutional-neural-networks) which operate directly on graphs. The choice of convolutional architecture is motivated via a localized first-order approximation of spectral graph convolutions. The model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes.",
"full_name": "Graph Convolutional Network",
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "The Graph Methods include neural network architectures for learning on graphs with prior structure information, popularly called as Graph Neural Networks (GNNs).\r\n\r\nRecently, deep learning approaches are being extended to work on graph-structured data, giving rise to a series of graph neural networks addressing different challenges. Graph neural networks are particularly useful in applications where data are generated from non-Euclidean domains and represented as graphs with complex relationships. \r\n\r\nSome tasks where GNNs are widely used include [node classification](https://paperswithcode.com/task/node-classification), [graph classification](https://paperswithcode.com/task/graph-classification), [link prediction](https://paperswithcode.com/task/link-prediction), and much more. \r\n\r\nIn the taxonomy presented by [Wu et al. (2019)](https://paperswithcode.com/paper/a-comprehensive-survey-on-graph-neural), graph neural networks can be divided into four categories: **recurrent graph neural networks**, **convolutional graph neural networks**, **graph autoencoders**, and **spatial-temporal graph neural networks**.\r\n\r\nImage source: [A Comprehensive Survey on Graph NeuralNetworks](https://arxiv.org/pdf/1901.00596.pdf)",
"name": "Graph Models",
"parent": null
},
"name": "GCN",
"source_title": "Semi-Supervised Classification with Graph Convolutional Networks",
"source_url": "http://arxiv.org/abs/1609.02907v4"
}
] |
https://paperswithcode.com/paper/taxi-demand-forecasting-a-hedge-based
|
1805.06619
| null | null |
Taxi demand forecasting: A HEDGE based tessellation strategy for improved accuracy
|
A key problem in location-based modeling and forecasting lies in identifying
suitable spatial and temporal resolutions. In particular, judicious spatial
partitioning can play a significant role in enhancing the performance of
location-based forecasting models. In this work, we investigate two widely used
tessellation strategies for partitioning city space, in the context of
real-time taxi demand forecasting. Our study compares (i) Geohash tessellation,
and (ii) Voronoi tessellation, using two distinct taxi demand datasets, over
multiple time scales. For the purpose of comparison, we employ classical
time-series tools to model the spatio-temporal demand. Our study finds that the
performance of each tessellation strategy is highly dependent on the city
geography, spatial distribution of the data, and the time of the day, and that
neither strategy is found to perform optimally across the forecast horizon. We
propose a hybrid tessellation algorithm that picks the best tessellation
strategy at each instant, based on their performance in the recent past. Our
hybrid algorithm is a non-stationary variant of the well-known HEDGE algorithm
for choosing the best advice from multiple experts. We show that the hybrid
tessellation strategy performs consistently better than either of the two
strategies across the data sets considered, at multiple time scales, and with
different performance metrics. We achieve an average accuracy of above 80% per
km^2 for both data sets considered at 60 minute aggregation levels.
| null |
http://arxiv.org/abs/1805.06619v2
|
http://arxiv.org/pdf/1805.06619v2.pdf
| null |
[
"Neema Davis",
"Gaurav Raina",
"Krishna Jagannathan"
] |
[
"Demand Forecasting",
"Time Series Analysis"
] | 2018-05-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/microscopy-cell-segmentation-via
|
1805.11247
| null | null |
Microscopy Cell Segmentation via Convolutional LSTM Networks
|
Live cell microscopy sequences exhibit complex spatial structures and
complicated temporal behaviour, making their analysis a challenging task.
Considering the cell segmentation problem, which plays a significant role in the
analysis, the spatial properties of the data can be captured using
Convolutional Neural Networks (CNNs). Recent approaches show promising
segmentation results using convolutional encoder-decoders such as the U-Net.
Nevertheless, these methods are limited by their inability to incorporate
temporal information, that can facilitate segmentation of individual touching
cells or of cells that are partially visible. In order to exploit cell dynamics
we propose a novel segmentation architecture which integrates Convolutional
Long Short Term Memory (C-LSTM) with the U-Net. The network's unique
architecture allows it to capture multi-scale, compact, spatio-temporal
encoding in the C-LSTMs memory units. The method was evaluated on the Cell
Tracking Challenge and achieved state-of-the-art results (1st on Fluo-N2DH-SIM+
and 2nd on DIC-C2DL-HeLa datasets). The code is freely available at:
https://github.com/arbellea/LSTM-UNet.git
|
Live cell microscopy sequences exhibit complex spatial structures and complicated temporal behaviour, making their analysis a challenging task.
|
http://arxiv.org/abs/1805.11247v2
|
http://arxiv.org/pdf/1805.11247v2.pdf
| null |
[
"Assaf Arbelle",
"Tammy Riklin Raviv"
] |
[
"Cell Segmentation",
"Cell Tracking",
"Segmentation"
] | 2018-05-29T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/densenet.py#L113",
"description": "A **Concatenated Skip Connection** is a type of skip connection that seeks to reuse features by concatenating them to new layers, allowing more information to be retained from previous layers of the network. This contrasts with say, residual connections, where element-wise summation is used instead to incorporate information from previous layers. This type of skip connection is prominently used in DenseNets (and also Inception networks), which the Figure to the right illustrates.",
"full_name": "Concatenated Skip Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Concatenated Skip Connection",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Rectified Linear Units**, or **ReLUs**, are a type of activation function that is linear in the positive dimension and zero in the negative dimension: f(x) = max(0, x). The kink at zero is the source of the non-linearity, and ReLUs remain a common default choice of activation in deep networks.",
"full_name": "Rectified Linear Unit",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation Functions** introduce non-linearities into neural networks, typically applied after an affine transformation of the inputs. Below you can find a continuously updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/milesial/Pytorch-UNet/blob/67bf11b4db4c5f2891bd7e8e7f58bcde8ee2d2db/unet/unet_model.py#L8",
"description": "**U-Net** is an architecture for semantic segmentation. It consists of a contracting path and an expansive path. The contracting path follows the typical architecture of a convolutional network. It consists of the repeated application of two 3x3 convolutions (unpadded convolutions), each followed by a rectified linear unit ([ReLU](https://paperswithcode.com/method/relu)) and a 2x2 [max pooling](https://paperswithcode.com/method/max-pooling) operation with stride 2 for downsampling. At each downsampling step we double the number of feature channels. Every step in the expansive path consists of an upsampling of the feature map followed by a 2x2 [convolution](https://paperswithcode.com/method/convolution) (“up-convolution”) that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU. The cropping is necessary due to the loss of border pixels in every convolution. At the final layer a [1x1 convolution](https://paperswithcode.com/method/1x1-convolution) is used to map each 64-component feature vector to the desired number of classes. In total the network has 23 convolutional layers.\r\n\r\n[Original MATLAB Code](https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/u-net-release-2015-10-02.tar.gz)",
"full_name": "U-Net",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Semantic Segmentation Models** are a class of methods that address the task of semantically segmenting an image into different object classes. Below you can find a continuously updating list of semantic segmentation models. ",
"name": "Semantic Segmentation Models",
"parent": null
},
"name": "U-Net",
"source_title": "U-Net: Convolutional Networks for Biomedical Image Segmentation",
"source_url": "http://arxiv.org/abs/1505.04597v1"
}
] |
https://paperswithcode.com/paper/on-robust-trimming-of-bayesian-network
|
1805.11243
| null | null |
On Robust Trimming of Bayesian Network Classifiers
|
This paper considers the problem of removing costly features from a Bayesian
network classifier. We want the classifier to be robust to these changes, and
maintain its classification behavior. To this end, we propose a closeness
metric between Bayesian classifiers, called the expected classification
agreement (ECA). Our corresponding trimming algorithm finds an optimal subset
of features and a new classification threshold that maximize the expected
agreement, subject to a budgetary constraint. It utilizes new theoretical
insights to perform branch-and-bound search in the space of feature sets, while
computing bounds on the ECA. Our experiments investigate both the runtime cost
of trimming and its effect on the robustness and accuracy of the final
classifier.
|
To this end, we propose a closeness metric between Bayesian classifiers, called the expected classification agreement (ECA).
|
http://arxiv.org/abs/1805.11243v1
|
http://arxiv.org/pdf/1805.11243v1.pdf
| null |
[
"YooJung Choi",
"Guy Van Den Broeck"
] |
[
"Classification",
"General Classification"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/propositional-knowledge-representation-and
|
1705.10899
| null | null |
Propositional Knowledge Representation and Reasoning in Restricted Boltzmann Machines
|
While knowledge representation and reasoning are considered the keys for
human-level artificial intelligence, connectionist networks have been shown
successful in a broad range of applications due to their capacity for robust
learning and flexible inference under uncertainty. The idea of representing
symbolic knowledge in connectionist networks has been well-received and
attracted much attention from research community as this can establish a
foundation for integration of scalable learning and sound reasoning. In
previous work, there exist a number of approaches that map logical inference
rules with feed-forward propagation of artificial neural networks (ANN).
However, the discriminative structure of an ANN requires the separation of
input/output variables which makes it difficult for general reasoning where any
variables should be inferable. Other approaches address this issue by employing
generative models such as symmetric connectionist networks, however, they are
difficult and convoluted. In this paper we propose a novel method to represent
propositional formulas in restricted Boltzmann machines which is less complex,
especially in the cases of logical implications and Horn clauses. An
integration system is then developed and evaluated in real datasets which shows
promising results.
| null |
http://arxiv.org/abs/1705.10899v3
|
http://arxiv.org/pdf/1705.10899v3.pdf
| null |
[
"Son N. Tran"
] |
[] | 2017-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/truncated-horizon-policy-search-combining
|
1805.11240
| null |
ryUlhzWCZ
|
Truncated Horizon Policy Search: Combining Reinforcement Learning & Imitation Learning
|
In this paper, we propose to combine imitation and reinforcement learning via
the idea of reward shaping using an oracle. We study the effectiveness of the
near-optimal cost-to-go oracle on the planning horizon and demonstrate that the
cost-to-go oracle shortens the learner's planning horizon as function of its
accuracy: a globally optimal oracle can shorten the planning horizon to one,
leading to a one-step greedy Markov Decision Process which is much easier to
optimize, while an oracle that is far away from the optimality requires
planning over a longer horizon to achieve near-optimal performance. Hence our
new insight bridges the gap and interpolates between imitation learning and
reinforcement learning. Motivated by the above mentioned insights, we propose
Truncated HORizon Policy Search (THOR), a method that focuses on searching for
policies that maximize the total reshaped reward over a finite planning horizon
when the oracle is sub-optimal. We experimentally demonstrate that a
gradient-based implementation of THOR can achieve superior performance compared
to RL baselines and IL baselines even when the oracle is sub-optimal.
| null |
http://arxiv.org/abs/1805.11240v1
|
http://arxiv.org/pdf/1805.11240v1.pdf
|
ICLR 2018 1
|
[
"Wen Sun",
"J. Andrew Bagnell",
"Byron Boots"
] |
[
"Imitation Learning",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-05-29T00:00:00 |
https://openreview.net/forum?id=ryUlhzWCZ
|
https://openreview.net/pdf?id=ryUlhzWCZ
|
truncated-horizon-policy-search-combining-1
| null |
[] |
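The oracle-based reward shaping described in the THOR abstract above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the names (`shaped_rewards`, `V_hat`) and the toy value oracle are our assumptions.

```python
# Sketch of potential-based reward shaping with an approximate cost-to-go
# oracle V_hat: r'(s_t, s_{t+1}) = r_t + gamma * V_hat(s_{t+1}) - V_hat(s_t).
# With a perfect oracle the shaped rewards make a one-step greedy policy
# optimal; a poor oracle forces planning over a longer horizon.

def shaped_rewards(rewards, states, V_hat, gamma=0.99):
    shaped = []
    for t, r in enumerate(rewards):
        s, s_next = states[t], states[t + 1]
        shaped.append(r + gamma * V_hat(s_next) - V_hat(s))
    return shaped

# Toy 3-state trajectory with a hand-made value oracle.
V = {0: 0.0, 1: 5.0, 2: 10.0}
print(shaped_rewards([1.0, 1.0], [0, 1, 2], V.get))
```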
https://paperswithcode.com/paper/fully-convolutional-measurement-network-for
|
1712.01641
| null | null |
Fully Convolutional Measurement Network for Compressive Sensing Image Reconstruction
|
Recently, deep learning methods have made a significant improvement in
compressive sensing image reconstruction task. In the existing methods, the
scene is measured block by block due to the high computational complexity. This
results in block-effect of the recovered images. In this paper, we propose a
fully convolutional measurement network, where the scene is measured as a
whole. The proposed method powerfully removes the block-effect since the
structure information of scene images is preserved. To make the measurement
more flexible, the measurement and the recovery parts are jointly trained. The
experiments show that the results of the proposed method outperform those of
the existing methods in PSNR, SSIM, and visual effect.
|
Recently, deep learning methods have made a significant improvement in compressive sensing image reconstruction task.
|
http://arxiv.org/abs/1712.01641v2
|
http://arxiv.org/pdf/1712.01641v2.pdf
| null |
[
"Jiang Du",
"Xuemei Xie",
"Chenye Wang",
"Guangming Shi",
"Xun Xu",
"Yu-Xiang Wang"
] |
[
"Compressive Sensing",
"Image Reconstruction",
"SSIM"
] | 2017-11-21T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/review-of-applications-of-generalized
|
1805.11236
| null | null |
Review of Applications of Generalized Regression Neural Networks in Identification and Control of Dynamic Systems
|
This paper depicts a brief revision of Generalized Regression Neural Networks
(GRNN) applications in system identification and control of dynamic systems. In
addition, a comparison study between the performance of back-propagation neural
networks and GRNN is presented for system identification problems. The results
of the comparison confirm that GRNN has shorter training time and higher
accuracy than the counterpart back-propagation neural networks.
| null |
http://arxiv.org/abs/1805.11236v1
|
http://arxiv.org/pdf/1805.11236v1.pdf
| null |
[
"Ahmad Jobran Al-Mahasneh",
"Sreenatha G. Anavatti",
"Matthew A. Garratt"
] |
[
"regression"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
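As a rough illustration of why GRNN training is faster than back-propagation: a GRNN prediction is just a kernel-weighted average of the training targets, so the only quantity to tune is a smoothing parameter. A minimal sketch, with toy 1-D data and names of our own choosing:

```python
import math

def grnn_predict(x, train_x, train_y, sigma=0.5):
    """GRNN output: Gaussian-kernel-weighted average of training targets."""
    weights = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

# Toy 1-D identification data: the prediction at a training point stays
# close to its target, smoothed by the neighboring samples.
xs = [0.0, 1.0, 2.0]
ys = [0.0, 1.0, 4.0]
print(grnn_predict(1.0, xs, ys))
```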
https://paperswithcode.com/paper/fast-convergence-for-stochastic-and
|
1803.02922
| null | null |
Fast Convergence for Stochastic and Distributed Gradient Descent in the Interpolation Limit
|
Modern supervised learning techniques, particularly those using deep nets,
involve fitting high dimensional labelled data sets with functions containing
very large numbers of parameters. Much of this work is empirical. Interesting
phenomena have been observed that require theoretical explanations; however the
non-convexity of the loss functions complicates the analysis. Recently it has
been proposed that the success of these techniques rests partly in the
effectiveness of the simple stochastic gradient descent algorithm in the so
called interpolation limit in which all labels are fit perfectly. This analysis
is made possible since the SGD algorithm reduces to a stochastic linear system
near the interpolating minimum of the loss function. Here we exploit this
insight by presenting and analyzing a new distributed algorithm for gradient
descent, also in the interpolating limit. The distributed SGD algorithm
presented in the paper corresponds to gradient descent applied to a simple
penalized distributed loss function, $L({\bf w}_1,...,{\bf w}_n) = \sum_i
l_i({\bf w}_i) + \mu \sum_{<i,j>}|{\bf w}_i-{\bf w}_j|^2$. Here each node holds
only one sample, and its own parameter vector. The notation $<i,j>$ denotes
edges of a connected graph defining the links between nodes. It is shown that
this distributed algorithm converges linearly (i.e., the error decreases
exponentially with the iteration number), with a rate
$1-\frac{\eta}{n}\lambda_{min}(H)<R<1$ where $\lambda_{min}(H)$ is the smallest
nonzero eigenvalue of the sample covariance or the Hessian H. In contrast with
previous usage of similar penalty functions to enforce consensus between nodes,
in the interpolating limit it is not required to take the penalty parameter to
infinity for consensus to occur. The analysis further reinforces the utility of
the interpolation limit in the theoretical treatment of modern machine learning
algorithms.
| null |
http://arxiv.org/abs/1803.02922v3
|
http://arxiv.org/pdf/1803.02922v3.pdf
| null |
[
"Partha P. Mitra"
] |
[] | 2018-03-08T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/4e0ac120e9a8b096069c2f892488d630a5c8f358/torch/optim/sgd.py#L97-L112",
"description": "**Stochastic Gradient Descent** is an iterative optimization technique that uses minibatches of data to form an expectation of the gradient, rather than the full gradient using all available data. That is for weights $w$ and a loss function $L$ we have:\r\n\r\n$$ w\\_{t+1} = w\\_{t} - \\eta\\hat{\\nabla}\\_{w}{L(w\\_{t})} $$\r\n\r\nWhere $\\eta$ is a learning rate. SGD reduces redundancy compared to batch gradient descent - which recomputes gradients for similar examples before each parameter update - so it is usually much faster.\r\n\r\n(Image Source: [here](http://rasbt.github.io/mlxtend/user_guide/general_concepts/gradient-optimization/))",
"full_name": "Stochastic Gradient Descent",
"introduced_year": 1951,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "SGD",
"source_title": null,
"source_url": null
}
] |
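The penalized distributed loss above can be sketched with scalar parameters and one sample per node. The quadratic choice of $l_i$, the step size, and the chain topology below are illustrative assumptions, not the paper's setup:

```python
# Gradient descent on L(w_1..w_n) = sum_i l_i(w_i) + mu * sum_{<i,j>} |w_i - w_j|^2
# with l_i(w) = (w - x_i)^2 (one scalar sample x_i per node) and a chain graph.

def distributed_gd(samples, mu=0.5, eta=0.1, steps=200):
    w = [0.0 for _ in samples]                       # one parameter per node
    edges = [(i, i + 1) for i in range(len(w) - 1)]  # chain of nodes
    for _ in range(steps):
        grads = [2 * (w[i] - samples[i]) for i in range(len(w))]
        for i, j in edges:                           # consensus-penalty gradient
            grads[i] += 2 * mu * (w[i] - w[j])
            grads[j] += 2 * mu * (w[j] - w[i])
        w = [wi - eta * g for wi, g in zip(w, grads)]
    return w

# Two nodes with samples 1.0 and 3.0: the finite penalty pulls the
# parameters toward each other without forcing exact consensus.
print(distributed_gd([1.0, 3.0]))
```

For this toy problem the fixed point can be solved by hand (set both gradients to zero), which is a quick way to check the iteration converged.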
https://paperswithcode.com/paper/table-to-text-describing-table-region-with
|
1805.11234
| null | null |
Table-to-Text: Describing Table Region with Natural Language
|
In this paper, we present a generative model to generate a natural language
sentence describing a table region, e.g., a row. The model maps a row from a
table to a continuous vector and then generates a natural language sentence by
leveraging the semantics of a table. To deal with rare words appearing in a
table, we develop a flexible copying mechanism that selectively replicates
contents from the table in the output sequence. Extensive experiments
demonstrate the accuracy of the model and the power of the copying mechanism.
On two synthetic datasets, WIKIBIO and SIMPLEQUESTIONS, our model improves the
current state-of-the-art BLEU-4 score from 34.70 to 40.26 and from 33.32 to
39.12, respectively. Furthermore, we introduce an open-domain dataset
WIKITABLETEXT including 13,318 explanatory sentences for 4,962 tables. Our
model achieves a BLEU-4 score of 38.23, which outperforms template based and
language model based approaches.
| null |
http://arxiv.org/abs/1805.11234v1
|
http://arxiv.org/pdf/1805.11234v1.pdf
| null |
[
"Junwei Bao",
"Duyu Tang",
"Nan Duan",
"Zhao Yan",
"Yuanhua Lv",
"Ming Zhou",
"Tiejun Zhao"
] |
[
"Language Modeling",
"Language Modelling",
"Sentence"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
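The flexible copying mechanism can be illustrated as a mixture of a generation distribution over the vocabulary and a copy distribution over the cells of the input row. The fixed gate `p_gen` and the toy words below are our assumptions; in the paper the gating is learned:

```python
def mix_copy(p_vocab, row_attention, row_words, p_gen=0.4):
    """Mix a generation distribution with a copy distribution over row cells."""
    out = {w: p_gen * p for w, p in p_vocab.items()}
    for attn, word in zip(row_attention, row_words):
        out[word] = out.get(word, 0.0) + (1 - p_gen) * attn
    return out

# "obama" is out-of-vocabulary, but copying lets the model still emit it.
dist = mix_copy({"born": 0.7, "in": 0.3}, [0.9, 0.1], ["obama", "1961"])
print(max(dist, key=dist.get))
```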
https://paperswithcode.com/paper/retraining-based-iterative-weight
|
1805.11233
| null | null |
Retraining-Based Iterative Weight Quantization for Deep Neural Networks
|
Model compression has gained a lot of attention due to its ability to reduce
hardware resource requirements significantly while maintaining accuracy of
DNNs. Model compression is especially useful for memory-intensive recurrent
neural networks because smaller memory footprint is crucial not only for
reducing storage requirement but also for fast inference operations.
Quantization is known to be an effective model compression method and
researchers are interested in minimizing the number of bits to represent
parameters. In this work, we introduce an iterative technique to apply
quantization, presenting high compression ratio without any modifications to
the training algorithm. In the proposed technique, weight quantization is
followed by retraining the model with full precision weights. We show that
iterative retraining generates new sets of weights which can be quantized with
decreasing quantization loss at each iteration. We also show that quantization
is efficiently able to leverage pruning, another effective model compression
method. Implementation issues on combining the two methods are also addressed.
Our experimental results demonstrate that an LSTM model using 1-bit quantized
weights is sufficient for PTB dataset without any accuracy degradation while
previous methods demand at least 2-4 bits for quantized weights.
| null |
http://arxiv.org/abs/1805.11233v1
|
http://arxiv.org/pdf/1805.11233v1.pdf
| null |
[
"Dongsoo Lee",
"Byeongwook Kim"
] |
[
"Model Compression",
"Quantization"
] | 2018-05-29T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are non-linear transformations applied to the outputs of neural network layers; without them, a stack of layers would collapse to a single linear map. Below you can find a continuously updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are non-linear transformations applied to the outputs of neural network layers; without them, a stack of layers would collapse to a single linear map. Below you can find a continuously updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] |
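The quantize-then-retrain loop can be sketched on a toy problem. The 1-bit scheme below (sign times mean magnitude) is a common choice but an assumption here, and the "retraining" is stand-in gradient descent on a quadratic loss rather than the paper's training setup:

```python
def quantize_1bit(w):
    """1-bit quantization: sign of each weight times the mean magnitude."""
    scale = sum(abs(x) for x in w) / len(w)
    return [scale if x >= 0 else -scale for x in w]

def retrain(w, target, eta=0.2, steps=20):
    """Stand-in for retraining: gradient descent on a toy quadratic loss."""
    for _ in range(steps):
        w = [wi - eta * 2 * (wi - ti) for wi, ti in zip(w, target)]
    return w

target = [0.9, -1.1, 1.0]    # pretend full-precision optimum
w = [0.2, -0.4, 0.3]
for _ in range(3):           # iterate: quantize, then retrain at full precision
    w = retrain(quantize_1bit(w), target)
print(quantize_1bit(w))
```

The point of the iteration is that retraining from the quantized weights yields a new full-precision solution whose quantization loss shrinks round after round.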
https://paperswithcode.com/paper/currency-exchange-prediction-using-machine
|
1805.11232
| null | null |
Currency exchange prediction using machine learning, genetic algorithms and technical analysis
|
Technical analysis is used to discover investment opportunities. To test this
hypothesis we propose a hybrid system using machine learning techniques
together with genetic algorithms. With technical analysis there are more ways
to represent a currency exchange time series than can be tested
computationally, i.e., it is infeasible to search the whole input feature
space, so a genetic algorithm is a practical alternative. In this work, an architecture
for automatic feature selection is proposed to optimize the cross validated
performance estimation of a Naive Bayes model using a genetic algorithm. The
proposed architecture improves the return on investment of the unoptimized
system from 0.43% to 10.29% on the validation set. The selected features and
the model decision boundary are visualized using the t-Distributed Stochastic
Neighbor Embedding (t-SNE) algorithm.
| null |
http://arxiv.org/abs/1805.11232v1
|
http://arxiv.org/pdf/1805.11232v1.pdf
| null |
[
"Gonçalo Abreu",
"Rui Neves",
"Nuno Horta"
] |
[
"BIG-bench Machine Learning",
"feature selection",
"Time Series",
"Time Series Analysis"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
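A genetic algorithm over feature bit-masks can be sketched as below. The fitness function is a stand-in for the cross-validated Naive Bayes score the paper optimizes, and all constants (population size, mutation rate, the "informative" feature set) are illustrative:

```python
import random

INFORMATIVE = {1, 4, 7}   # pretend ground-truth useful features

def fitness(mask):
    """Stand-in for a cross-validated model score over the chosen features."""
    chosen = {i for i, b in enumerate(mask) if b}
    return len(chosen & INFORMATIVE) - 0.1 * len(chosen - INFORMATIVE)

def evolve(n_features=10, pop=30, gens=40, seed=0):
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]          # elitist selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)
            child = a[:cut] + b[cut:]             # one-point crossover
            if rng.random() < 0.2:                # mutation: flip one bit
                child[rng.randrange(n_features)] ^= 1
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(sorted(i for i, b in enumerate(best) if b))  # typically the informative set
```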
https://paperswithcode.com/paper/getting-to-know-low-light-images-with-the
|
1805.11227
| null | null |
Getting to Know Low-light Images with The Exclusively Dark Dataset
|
Low-light is an inescapable element of our daily surroundings that greatly
affects the efficiency of our vision. Research works on low-light has seen a
steady growth, particularly in the field of image enhancement, but there is
still a lack of a go-to database as a benchmark. Besides, research fields that
may assist us in low-light environments, such as object detection, have glossed
over this aspect even though breakthrough after breakthrough has been achieved
in recent years, most noticeably due to the lack of low-light data (less than
2% of the total images) in successful public benchmark datasets such as
PASCAL VOC, ImageNet, and Microsoft COCO. Thus, we propose the Exclusively Dark
dataset to alleviate this data drought, consisting exclusively of ten different
types of low-light images (i.e. low, ambient, object, single, weak, strong,
screen, window, shadow and twilight) captured in visible light only with image
and object level annotations. Moreover, we share insightful findings in regards
to the effects of low-light on the object detection task by analyzing
visualizations of both hand-crafted and learned features. Most importantly, we
found that the effects of low-light reach far deeper into the features than
can be solved by simple "illumination invariance". It is our hope that this
analysis and the Exclusively Dark dataset can encourage the growth in low-light
domain researches on different fields. The Exclusively Dark dataset with its
annotation is available at
https://github.com/cs-chan/Exclusively-Dark-Image-Dataset
|
Thus, we propose the Exclusively Dark dataset to alleviate this data drought, consisting exclusively of ten different types of low-light images (i.e. low, ambient, object, single, weak, strong, screen, window, shadow and twilight) captured in visible light only with image and object level annotations.
|
http://arxiv.org/abs/1805.11227v1
|
http://arxiv.org/pdf/1805.11227v1.pdf
| null |
[
"Yuen Peng Loh",
"Chee Seng Chan"
] |
[
"Image Enhancement",
"Low-Light Image Enhancement",
"Object",
"object-detection",
"Object Detection"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/distilling-knowledge-for-search-based
|
1805.11224
| null | null |
Distilling Knowledge for Search-based Structured Prediction
|
Many natural language processing tasks can be modeled into structured
prediction and solved as a search problem. In this paper, we distill an
ensemble of multiple models trained with different initialization into a single
model. In addition to learning to match the ensemble's probability output on
the reference states, we also use the ensemble to explore the search space and
learn from the encountered states in the exploration. Experimental results on
two typical search-based structured prediction tasks -- transition-based
dependency parsing and neural machine translation show that distillation can
effectively improve the single model's performance and the final model achieves
improvements of 1.32 in LAS and 2.65 in BLEU score on these two tasks
respectively, over strong baselines, and it outperforms the greedy structured
prediction models in the previous literature.
|
Many natural language processing tasks can be modeled into structured prediction and solved as a search problem.
|
http://arxiv.org/abs/1805.11224v1
|
http://arxiv.org/pdf/1805.11224v1.pdf
|
ACL 2018 7
|
[
"Yijia Liu",
"Wanxiang Che",
"Huaipeng Zhao",
"Bing Qin",
"Ting Liu"
] |
[
"Dependency Parsing",
"Machine Translation",
"Prediction",
"Structured Prediction",
"Transition-Based Dependency Parsing",
"Translation"
] | 2018-05-29T00:00:00 |
https://aclanthology.org/P18-1129
|
https://aclanthology.org/P18-1129.pdf
|
distilling-knowledge-for-search-based-1
| null |
[] |
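The core of distilling an ensemble is that the student matches the ensemble's probability output rather than a one-hot label; a minimal sketch of that objective follows (the paper's additional exploration of the search space is omitted, and the numbers are invented):

```python
import math

def kl(p, q):
    """KL(p || q) for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

ensemble = [0.7, 0.2, 0.1]   # averaged probabilities of the ensemble
one_hot  = [1.0, 0.0, 0.0]   # the usual hard reference label
student  = [0.6, 0.3, 0.1]   # current student output

# The soft ensemble target penalizes the student far less than the hard
# label does, and it carries information about plausible alternatives.
print(kl(ensemble, student), kl(one_hot, student))
```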
https://paperswithcode.com/paper/video-anomaly-detection-and-localization-via
|
1805.11223
| null | null |
Video Anomaly Detection and Localization via Gaussian Mixture Fully Convolutional Variational Autoencoder
|
We present a novel end-to-end partially supervised deep learning approach for
video anomaly detection and localization using only normal samples. The insight
that motivates this study is that the normal samples can be associated with at
least one Gaussian component of a Gaussian Mixture Model (GMM), while anomalies
do not belong to any Gaussian component. The method is based on a Gaussian
Mixture Variational Autoencoder, which can learn feature representations of the
normal samples as a Gaussian Mixture Model trained using deep learning. A Fully
Convolutional Network (FCN) that does not contain a fully-connected layer is
employed for the encoder-decoder structure to preserve relative spatial
coordinates between the input image and the output feature map. Based on the
joint probabilities of each of the Gaussian mixture components, we introduce a
sample energy based method to score the anomaly of image test patches. A
two-stream network framework is employed to combine the appearance and motion
anomalies, using RGB frames for the former and dynamic flow images for the
latter. We test our approach on two popular benchmarks (UCSD Dataset and Avenue
Dataset). The experimental results verify the superiority of our method
compared to the state of the art.
| null |
http://arxiv.org/abs/1805.11223v1
|
http://arxiv.org/pdf/1805.11223v1.pdf
| null |
[
"Yaxiang Fan",
"Gongjian Wen",
"Deren Li",
"Shaohua Qiu",
"Martin D. Levine"
] |
[
"Anomaly Detection",
"Decoder",
"Video Anomaly Detection"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
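The sample-energy score described above amounts to a negative log-likelihood under the fitted mixture: samples that no Gaussian component explains get a high energy. A 1-D toy sketch (the paper scores learned feature maps, not raw scalars):

```python
import math

def energy(x, components):
    """Negative log-likelihood of x under a 1-D Gaussian mixture."""
    likelihood = sum(
        w * math.exp(-((x - m) ** 2) / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
        for w, m, s in components
    )
    return -math.log(likelihood)

gmm = [(0.5, 0.0, 1.0), (0.5, 5.0, 1.0)]   # mixture fit to "normal" samples
print(energy(0.2, gmm), energy(2.5, gmm))  # in-mode vs. between-mode sample
```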
https://paperswithcode.com/paper/unsupervised-alignment-of-embeddings-with
|
1805.11222
| null | null |
Unsupervised Alignment of Embeddings with Wasserstein Procrustes
|
A library for Multilingual Unsupervised or Supervised word Embeddings
|
A library for Multilingual Unsupervised or Supervised word Embeddings
|
http://arxiv.org/abs/1805.11222v1
|
http://arxiv.org/pdf/1805.11222v1.pdf
| null |
[
"Edouard Grave",
"Armand Joulin",
"Quentin Berthet"
] |
[
"Word Embeddings"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
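In 2-D, the orthogonal Procrustes step at the heart of this method has a closed form. The sketch below assumes the correspondence between the two embedding sets is already known, whereas the paper's contribution is jointly estimating that correspondence (via optimal transport) together with the rotation:

```python
import math

def best_rotation(xs, ys):
    """Closed-form optimal 2-D rotation angle mapping xs onto ys."""
    num = sum(x1 * y2 - x2 * y1 for (x1, x2), (y1, y2) in zip(xs, ys))
    den = sum(x1 * y1 + x2 * y2 for (x1, x2), (y1, y2) in zip(xs, ys))
    return math.atan2(num, den)

# Source embeddings rotated by 30 degrees should be recovered exactly.
theta = math.radians(30)
xs = [(1.0, 0.0), (0.0, 2.0), (1.0, 1.0)]
ys = [(x * math.cos(theta) - y * math.sin(theta),
       x * math.sin(theta) + y * math.cos(theta)) for x, y in xs]
print(math.degrees(best_rotation(xs, ys)))
```

In higher dimensions the same step is solved with an SVD of the cross-covariance matrix instead of a single angle.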
https://paperswithcode.com/paper/regularizing-deep-networks-using-efficient
|
1705.07819
| null | null |
Regularizing deep networks using efficient layerwise adversarial training
|
Adversarial training has been shown to regularize deep neural networks in
addition to increasing their robustness to adversarial examples. However, its
impact on very deep state-of-the-art networks has not been fully investigated.
In this paper, we present an efficient approach to perform adversarial training
by perturbing intermediate layer activations and study the use of such
perturbations as a regularizer during training. We use these perturbations to
train very deep models such as ResNets and show improvement in performance both
on adversarial and original test data. Our experiments highlight the benefits
of perturbing intermediate layer activations compared to perturbing only the
inputs. The results on CIFAR-10 and CIFAR-100 datasets show the merits of the
proposed adversarial training approach. Additional results on WideResNets show
that our approach provides significant improvement in classification accuracy
for a given base model, outperforming dropout and other base models of larger
size.
| null |
http://arxiv.org/abs/1705.07819v2
|
http://arxiv.org/pdf/1705.07819v2.pdf
| null |
[
"Swami Sankaranarayanan",
"Arpit Jain",
"Rama Chellappa",
"Ser Nam Lim"
] |
[] | 2017-05-22T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
}
] |
https://paperswithcode.com/paper/a-neurobiological-evaluation-metric-for
|
1805.10726
| null | null |
A Neurobiological Evaluation Metric for Neural Network Model Search
|
Neuroscience theory posits that the brain's visual system coarsely identifies
broad object categories via neural activation patterns, with similar objects
producing similar neural responses. Artificial neural networks also have
internal activation behavior in response to stimuli. We hypothesize that
networks exhibiting brain-like activation behavior will demonstrate brain-like
characteristics, e.g., stronger generalization capabilities. In this paper we
introduce a human-model similarity (HMS) metric, which quantifies the
similarity of human fMRI and network activation behavior. To calculate HMS,
representational dissimilarity matrices (RDMs) are created as abstractions of
activation behavior, measured by the correlations of activations to stimulus
pairs. HMS is then the correlation between the fMRI RDM and the neural network
RDM across all stimulus pairs. We test the metric on unsupervised predictive
coding networks, which specifically model visual perception, and assess the
metric for statistical significance over a large range of hyperparameters. Our
experiments show that networks with increased human-model similarity are
correlated with better performance on two computer vision tasks: next frame
prediction and object matching accuracy. Further, HMS identifies networks with
high performance on both tasks. An unexpected secondary finding is that the
metric can be employed during training as an early-stopping mechanism.
|
In this paper we introduce a human-model similarity (HMS) metric, which quantifies the similarity of human fMRI and network activation behavior.
|
http://arxiv.org/abs/1805.10726v4
|
http://arxiv.org/pdf/1805.10726v4.pdf
|
CVPR 2019 6
|
[
"Nathaniel Blanchard",
"Jeffery Kinnison",
"Brandon RichardWebster",
"Pouya Bashivan",
"Walter J. Scheirer"
] |
[] | 2018-05-28T00:00:00 |
http://openaccess.thecvf.com/content_CVPR_2019/html/Blanchard_A_Neurobiological_Evaluation_Metric_for_Neural_Network_Model_Search_CVPR_2019_paper.html
|
http://openaccess.thecvf.com/content_CVPR_2019/papers/Blanchard_A_Neurobiological_Evaluation_Metric_for_Neural_Network_Model_Search_CVPR_2019_paper.pdf
|
a-neurobiological-evaluation-metric-for-1
| null |
[] |
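The HMS computation described above can be sketched directly from its definition: build a representational dissimilarity matrix (RDM) for each system from pairwise correlations of stimulus responses, then correlate the two RDMs. Pearson correlation and the toy activation vectors are our assumptions; the paper may use a different correlation variant:

```python
def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def rdm(responses):
    """Upper triangle of the representational dissimilarity matrix."""
    n = len(responses)
    return [1 - pearson(responses[i], responses[j])
            for i in range(n) for j in range(i + 1, n)]

# Toy activation vectors for three stimuli, in the brain and in the network.
fmri = [[1.0, 2.0, 3.0], [1.1, 2.1, 2.9], [3.0, 1.0, 0.5]]
net  = [[0.2, 0.4, 0.6], [0.2, 0.5, 0.6], [0.9, 0.3, 0.1]]
print(pearson(rdm(fmri), rdm(net)))   # HMS: near 1 when the RDMs agree
```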
https://paperswithcode.com/paper/automatic-exposure-compensation-for-multi
|
1805.11211
| null | null |
Automatic Exposure Compensation for Multi-Exposure Image Fusion
|
This paper proposes a novel luminance adjustment method based on automatic
exposure compensation for multi-exposure image fusion. Multi-exposure image
fusion is a method to produce images without saturation regions, by using
photos with different exposures. In conventional works, it has been pointed out
that the quality of those multi-exposure images can be improved by adjusting
their luminance. However, how to determine the degree of adjustment has never
been discussed. This paper therefore proposes a way to automatically determine
the degree on the basis of the luminance distribution of the input
multi-exposure images. Moreover, new weights, called "simple weights", for
image fusion are also considered for the proposed luminance adjustment method.
Experimental results show that the multi-exposure images adjusted by the
proposed method have better quality than the input multi-exposure ones in terms
of well-exposedness. It is also confirmed that the proposed simple weights
provide the highest score of statistical naturalness and discrete entropy in
all fusion methods.
| null |
http://arxiv.org/abs/1805.11211v1
|
http://arxiv.org/pdf/1805.11211v1.pdf
| null |
[
"Yuma Kinoshita",
"Sayaka Shiota",
"Hitoshi Kiya"
] |
[
"Multi-Exposure Image Fusion"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
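The fusion step itself can be illustrated with a per-pixel well-exposedness weight: each exposure contributes more where its value sits near mid-range. The Gaussian weight and `sigma` below are a common textbook choice, not necessarily the paper's "simple weights":

```python
import math

def fuse(pixels, sigma=0.2):
    """Fuse the same pixel from several exposures; values in [0, 1]."""
    ws = [math.exp(-((p - 0.5) ** 2) / (2 * sigma ** 2)) for p in pixels]
    return sum(w * p for w, p in zip(ws, pixels)) / sum(ws)

# Under-, well-, and over-exposed values: the fused result stays near
# the well-exposed one.
print(fuse([0.05, 0.55, 0.98]))
```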
https://paperswithcode.com/paper/putting-a-bug-in-ml-the-moth-olfactory
|
1802.05405
| null | null |
Putting a bug in ML: The moth olfactory network learns to read MNIST
|
We seek to (i) characterize the learning architectures exploited in
biological neural networks for training on very few samples, and (ii) port
these algorithmic structures to a machine learning context. The Moth Olfactory
Network is among the simplest biological neural systems that can learn, and its
architecture includes key structural elements and mechanisms widespread in
biological neural nets, such as cascaded networks, competitive inhibition, high
intrinsic noise, sparsity, reward mechanisms, and Hebbian plasticity. These
structural biological elements, in combination, enable rapid learning.
MothNet is a computational model of the Moth Olfactory Network, closely
aligned with the moth's known biophysics and with in vivo electrode data
collected from moths learning new odors. We assign this model the task of
learning to read the MNIST digits. We show that MothNet successfully learns to
read given very few training samples (1 to 10 samples per class). In this
few-samples regime, it outperforms standard machine learning methods such as
nearest-neighbors, support-vector machines, and neural networks (NNs), and
matches specialized one-shot transfer-learning methods but without the need for
pre-training. The MothNet architecture illustrates how algorithmic structures
derived from biological brains can be used to build alternative NNs that may
avoid some of the learning rate limitations of current engineered NNs.
|
The Moth Olfactory Network is among the simplest biological neural systems that can learn, and its architecture includes key structural elements and mechanisms widespread in biological neural nets, such as cascaded networks, competitive inhibition, high intrinsic noise, sparsity, reward mechanisms, and Hebbian plasticity.
|
http://arxiv.org/abs/1802.05405v3
|
http://arxiv.org/pdf/1802.05405v3.pdf
| null |
[
"Charles B. Delahunt",
"J. Nathan Kutz"
] |
[
"BIG-bench Machine Learning",
"Transfer Learning"
] | 2018-02-15T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-statistical-recurrent-model-on-the-manifold
|
1805.11204
| null | null |
A Statistical Recurrent Model on the Manifold of Symmetric Positive Definite Matrices
|
In a number of disciplines, the data (e.g., graphs, manifolds) to be analyzed
are non-Euclidean in nature. Geometric deep learning corresponds to techniques
that generalize deep neural network models to such non-Euclidean spaces.
Several recent papers have shown how convolutional neural networks (CNNs) can
be extended to learn with graph-based data. In this work, we study the setting
where the data (or measurements) are ordered, longitudinal or temporal in
nature and live on a Riemannian manifold -- this setting is common in a variety
of problems in statistical machine learning, vision and medical imaging. We
show how statistical recurrent network models can be defined in such
spaces. We give an efficient algorithm and conduct a rigorous analysis of its
statistical properties. We perform extensive numerical experiments
demonstrating competitive performance with state of the art methods but with
significantly fewer parameters. We also show applications to a
statistical analysis task in brain imaging, a regime where deep neural network
models have only been utilized in limited ways.
|
We show how statistical recurrent network models can be defined in such spaces.
|
http://arxiv.org/abs/1805.11204v2
|
http://arxiv.org/pdf/1805.11204v2.pdf
|
NeurIPS 2018 12
|
[
"Rudrasis Chakraborty",
"Chun-Hao Yang",
"Xingjian Zhen",
"Monami Banerjee",
"Derek Archer",
"David Vaillancourt",
"Vikas Singh",
"Baba C. Vemuri"
] |
[] | 2018-05-29T00:00:00 |
http://papers.nips.cc/paper/8104-a-statistical-recurrent-model-on-the-manifold-of-symmetric-positive-definite-matrices
|
http://papers.nips.cc/paper/8104-a-statistical-recurrent-model-on-the-manifold-of-symmetric-positive-definite-matrices.pdf
|
a-statistical-recurrent-model-on-the-manifold-1
| null |
[] |
https://paperswithcode.com/paper/leveraged-volume-sampling-for-linear
|
1802.06749
| null | null |
Leveraged volume sampling for linear regression
|
Suppose an $n \times d$ design matrix in a linear regression problem is
given, but the response for each point is hidden unless explicitly requested.
The goal is to sample only a small number $k \ll n$ of the responses, and then
produce a weight vector whose sum of squares loss over all points is at most
$1+\epsilon$ times the minimum. When $k$ is very small (e.g., $k=d$), jointly
sampling diverse subsets of points is crucial. One such method called volume
sampling has a unique and desirable property that the weight vector it produces
is an unbiased estimate of the optimum. It is therefore natural to ask if this
method offers the optimal unbiased estimate in terms of the number of responses
$k$ needed to achieve a $1+\epsilon$ loss approximation.
Surprisingly we show that volume sampling can have poor behavior when we
require a very accurate approximation -- indeed worse than some i.i.d. sampling
techniques whose estimates are biased, such as leverage score sampling. We then
develop a new rescaled variant of volume sampling that produces an unbiased
estimate which avoids this bad behavior and has at least as good a tail bound
as leverage score sampling: sample size $k=O(d\log d + d/\epsilon)$ suffices to
guarantee total loss at most $1+\epsilon$ times the minimum with high
probability. Thus, we improve on the best previously known sample size for an
unbiased estimator, $k=O(d^2/\epsilon)$.
Our rescaling procedure leads to a new efficient algorithm for volume
sampling which is based on a determinantal rejection sampling technique with
potentially broader applications to determinantal point processes. Other
contributions include introducing the combinatorics needed for rescaled volume
sampling and developing tail bounds for sums of dependent random matrices which
arise in the process.
| null |
http://arxiv.org/abs/1802.06749v3
|
http://arxiv.org/pdf/1802.06749v3.pdf
|
NeurIPS 2018 12
|
[
"Michał Dereziński",
"Manfred K. Warmuth",
"Daniel Hsu"
] |
[
"Point Processes",
"regression"
] | 2018-02-19T00:00:00 |
http://papers.nips.cc/paper/7517-leveraged-volume-sampling-for-linear-regression
|
http://papers.nips.cc/paper/7517-leveraged-volume-sampling-for-linear-regression.pdf
|
leveraged-volume-sampling-for-linear-1
| null |
[
{
"code_snippet_url": null,
 "description": "**Linear Regression** is a method for modelling a relationship between a dependent variable and independent variables. These models can be fit with numerous approaches. The most common is *least squares*, where we minimize the sum of squared errors between the predicted values $\\hat{y} = \\textbf{X}\\hat{\\beta}$ and actual values $y$: $\\left\\|y-\\textbf{X}\\beta\\right\\|^{2}$.\r\n\r\nWe can also define the problem in probabilistic terms as a generalized linear model (GLM) where the pdf is a Gaussian distribution, and then perform maximum likelihood estimation to estimate $\\hat{\\beta}$.\r\n\r\nImage Source: [Wikipedia](https://en.wikipedia.org/wiki/Linear_regression)",
"full_name": "Linear Regression",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Generalized Linear Models (GLMs)** are a class of models that generalize upon linear regression by allowing many more distributions to be modeled for the response variable via a link function. Below you can find a continuously updating list of GLMs.",
"name": "Generalized Linear Models",
"parent": null
},
"name": "Linear Regression",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/fairgan-fairness-aware-generative-adversarial
|
1805.11202
| null | null |
FairGAN: Fairness-aware Generative Adversarial Networks
|
Fairness-aware learning is increasingly important in data mining.
Discrimination prevention aims to prevent discrimination in the training data
before it is used to conduct predictive analysis. In this paper, we focus on
fair data generation that ensures the generated data is discrimination free.
Inspired by generative adversarial networks (GAN), we present fairness-aware
generative adversarial networks, called FairGAN, which are able to learn a
generator producing fair data and also preserving good data utility. Compared
with the naive fair data generation models, FairGAN further ensures the
classifiers which are trained on generated data can achieve fair classification
on real data. Experiments on a real dataset show the effectiveness of FairGAN.
| null |
http://arxiv.org/abs/1805.11202v1
|
http://arxiv.org/pdf/1805.11202v1.pdf
| null |
[
"Depeng Xu",
"Shuhan Yuan",
"Lu Zhang",
"Xintao Wu"
] |
[
"Fairness",
"General Classification"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/bayesian-coreset-construction-via-greedy
|
1802.01737
| null | null |
Bayesian Coreset Construction via Greedy Iterative Geodesic Ascent
|
Coherent uncertainty quantification is a key strength of Bayesian methods.
But modern algorithms for approximate Bayesian posterior inference often
sacrifice accurate posterior uncertainty estimation in the pursuit of
scalability. This work shows that previous Bayesian coreset construction
algorithms---which build a small, weighted subset of the data that approximates
the full dataset---are no exception. We demonstrate that these algorithms scale
the coreset log-likelihood suboptimally, resulting in underestimated posterior
uncertainty. To address this shortcoming, we develop greedy iterative geodesic
ascent (GIGA), a novel algorithm for Bayesian coreset construction that scales
the coreset log-likelihood optimally. GIGA provides geometric decay in
posterior approximation error as a function of coreset size, and maintains the
fast running time of its predecessors. The paper concludes with validation of
GIGA on both synthetic and real datasets, demonstrating that it reduces
posterior approximation error by orders of magnitude compared with previous
coreset constructions.
|
Coherent uncertainty quantification is a key strength of Bayesian methods.
|
http://arxiv.org/abs/1802.01737v2
|
http://arxiv.org/pdf/1802.01737v2.pdf
|
ICML 2018 7
|
[
"Trevor Campbell",
"Tamara Broderick"
] |
[
"Uncertainty Quantification"
] | 2018-02-05T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1950
|
http://proceedings.mlr.press/v80/campbell18a/campbell18a.pdf
|
bayesian-coreset-construction-via-greedy-1
| null |
[] |
https://paperswithcode.com/paper/a-parallel-implementation-of-the-covariance
|
1805.11201
| null | null |
A parallel implementation of the covariance matrix adaptation evolution strategy
|
In many practical optimization problems, the derivatives of the functions to
be optimized are unavailable or unreliable. Such optimization problems are
solved using derivative-free optimization techniques. One of the
state-of-the-art techniques for derivative-free optimization is the covariance
matrix adaptation evolution strategy (CMA-ES) algorithm. However, the
complexity of CMA-ES algorithm makes it undesirable for tasks where fast
optimization is needed. To reduce the execution time of CMA-ES, a parallel
implementation is proposed, and its performance is analyzed using the benchmark
problems in PythOPT optimization environment.
| null |
http://arxiv.org/abs/1805.11201v1
|
http://arxiv.org/pdf/1805.11201v1.pdf
| null |
[
"Najeeb Khan"
] |
[] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/value-propagation-networks
|
1805.11199
| null |
SJG6G2RqtX
|
Value Propagation Networks
|
We present Value Propagation (VProp), a set of parameter-efficient
differentiable planning modules built on Value Iteration which can successfully
be trained using reinforcement learning to solve unseen tasks, has the
capability to generalize to larger map sizes, and can learn to navigate in
dynamic environments. We show that the modules enable learning to plan when the
environment also includes stochastic elements, providing a cost-efficient
learning system to build low-level size-invariant planners for a variety of
interactive navigation problems. We evaluate on static and dynamic
configurations of MazeBase grid-worlds, with randomly generated environments of
several different sizes, and on a StarCraft navigation scenario, with more
complex dynamics, and pixels as input.
| null |
http://arxiv.org/abs/1805.11199v2
|
http://arxiv.org/pdf/1805.11199v2.pdf
|
ICLR 2018 1
|
[
"Nantas Nardelli",
"Gabriel Synnaeve",
"Zeming Lin",
"Pushmeet Kohli",
"Philip H. S. Torr",
"Nicolas Usunier"
] |
[
"Navigate",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)",
"Starcraft"
] | 2018-05-28T00:00:00 |
https://openreview.net/forum?id=SJG6G2RqtX
|
https://openreview.net/pdf?id=SJG6G2RqtX
|
value-propagation-networks-1
| null |
[] |
https://paperswithcode.com/paper/lifelong-generative-modeling
|
1705.09847
| null |
S1fduCl0b
|
Lifelong Generative Modeling
|
Lifelong learning is the problem of learning multiple consecutive tasks in a sequential manner, where knowledge gained from previous tasks is retained and used to aid future learning over the lifetime of the learner. It is essential towards the development of intelligent machines that can adapt to their surroundings. In this work we focus on a lifelong learning approach to unsupervised generative modeling, where we continuously incorporate newly observed distributions into a learned model. We do so through a student-teacher Variational Autoencoder architecture which allows us to learn and preserve all the distributions seen so far, without the need to retain the past data nor the past models. Through the introduction of a novel cross-model regularizer, inspired by a Bayesian update rule, the student model leverages the information learned by the teacher, which acts as a probabilistic knowledge store. The regularizer reduces the effect of catastrophic interference that appears when we learn over sequences of distributions. We validate our model's performance on sequential variants of MNIST, FashionMNIST, PermutedMNIST, SVHN and Celeb-A and demonstrate that our model mitigates the effects of catastrophic interference faced by neural networks in sequential learning scenarios.
|
Lifelong learning is the problem of learning multiple consecutive tasks in a sequential manner, where knowledge gained from previous tasks is retained and used to aid future learning over the lifetime of the learner.
|
https://arxiv.org/abs/1705.09847v7
|
https://arxiv.org/pdf/1705.09847v7.pdf
|
ICLR 2018 1
|
[
"Jason Ramapuram",
"Magda Gregorova",
"Alexandros Kalousis"
] |
[
"Lifelong learning",
"Transfer Learning"
] | 2017-05-27T00:00:00 |
https://openreview.net/forum?id=S1fduCl0b
|
https://openreview.net/pdf?id=S1fduCl0b
|
lifelong-generative-modeling-1
| null |
[] |
https://paperswithcode.com/paper/capsnet-comparative-performance-evaluation
|
1805.11195
| null | null |
CapsNet comparative performance evaluation for image classification
|
Image classification has become one of the main tasks in the field of
computer vision technologies. In this context, a recent algorithm called
CapsNet that implements an approach based on activity vectors and dynamic
routing between capsules may overcome some of the limitations of the current
state of the art artificial neural networks (ANN) classifiers, such as
convolutional neural networks (CNN). In this paper, we evaluated the
performance of the CapsNet algorithm in comparison with three well-known
classifiers (Fisher-faces, LeNet, and ResNet). We tested the classification
accuracy on four datasets with a different number of instances and classes,
including images of faces, traffic signs, and everyday objects. The evaluation
results show that even for simple architectures, training the CapsNet algorithm
requires significant computational resources and its classification performance
falls below the average accuracy values of the other three classifiers.
However, we argue that CapsNet seems to be a promising new technique for image
classification, and further experiments using more robust computation resources
and refined CapsNet architectures may produce better outcomes.
| null |
http://arxiv.org/abs/1805.11195v1
|
http://arxiv.org/pdf/1805.11195v1.pdf
| null |
[
"Rinat Mukhometzianov",
"Juan Carrillo"
] |
[
"Classification",
"General Classification",
"image-classification",
"Image Classification"
] | 2018-05-28T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/Elman295/Paper_with_code/blob/main/LeNet_5_Pytorch.ipynb",
"description": "**LeNet** is a classic convolutional neural network employing the use of convolutions, pooling and fully connected layers. It was used for the handwritten digit recognition task with the MNIST dataset. The architectural design served as inspiration for future networks such as [AlexNet](https://paperswithcode.com/method/alexnet) and [VGG](https://paperswithcode.com/method/vgg)..\r\n\r\n[code](https://github.com/Elman295/Paper_with_code/blob/main/LeNet_5_Pytorch.ipynb)",
"full_name": "LeNet",
"introduced_year": 1998,
"main_collection": {
"area": "Computer Vision",
 "description": "**Convolutional Neural Networks** are a class of neural network architectures that use convolutional layers to learn spatial feature hierarchies from images. Below you can find a continuously updating list of convolutional neural networks.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "LeNet",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/tadam-task-dependent-adaptive-metric-for
|
1805.10123
| null | null |
TADAM: Task dependent adaptive metric for improved few-shot learning
|
Few-shot learning has become essential for producing models that generalize
from few examples. In this work, we identify that metric scaling and metric
task conditioning are important to improve the performance of few-shot
algorithms. Our analysis reveals that simple metric scaling completely changes
the nature of few-shot algorithm parameter updates. Metric scaling provides
improvements up to 14% in accuracy for certain metrics on the mini-Imagenet
5-way 5-shot classification task. We further propose a simple and effective way
of conditioning a learner on the task sample set, resulting in learning a
task-dependent metric space. Moreover, we propose and empirically test a
practical end-to-end optimization procedure based on auxiliary task co-training
to learn a task-dependent metric space. The resulting few-shot learning model
based on the task-dependent scaled metric achieves state of the art on
mini-Imagenet. We confirm these results on another few-shot dataset that we
introduce in this paper based on CIFAR100. Our code is publicly available at
https://github.com/ElementAI/TADAM.
|
We further propose a simple and effective way of conditioning a learner on the task sample set, resulting in learning a task-dependent metric space.
|
http://arxiv.org/abs/1805.10123v4
|
http://arxiv.org/pdf/1805.10123v4.pdf
|
NeurIPS 2018 12
|
[
"Boris N. Oreshkin",
"Pau Rodriguez",
"Alexandre Lacoste"
] |
[
"Few-Shot Image Classification",
"Few-Shot Learning"
] | 2018-05-23T00:00:00 |
http://papers.nips.cc/paper/7352-tadam-task-dependent-adaptive-metric-for-improved-few-shot-learning
|
http://papers.nips.cc/paper/7352-tadam-task-dependent-adaptive-metric-for-improved-few-shot-learning.pdf
|
tadam-task-dependent-adaptive-metric-for-1
| null |
[] |
https://paperswithcode.com/paper/learning-from-less-data-diversified-subset
|
1805.11191
| null | null |
Learning From Less Data: Diversified Subset Selection and Active Learning in Image Classification Tasks
|
Supervised machine learning based state-of-the-art computer vision techniques
are in general data hungry and pose the challenges of not having adequate
computing resources and of high costs involved in human labeling efforts.
Training data subset selection and active learning techniques have been
proposed as possible solutions to these challenges respectively. A special
class of subset selection functions naturally model notions of diversity,
coverage and representation and they can be used to eliminate redundancy and
thus lend themselves well for training data subset selection. They can also
help improve the efficiency of active learning in further reducing human
labeling efforts by selecting a subset of the examples obtained using the
conventional uncertainty sampling based techniques. In this work we empirically
demonstrate the effectiveness of two diversity models, namely the
Facility-Location and Disparity-Min models for training-data subset selection
and reducing labeling effort. We do this for a variety of computer vision tasks
including Gender Recognition, Scene Recognition and Object Recognition. Our
results show that subset selection done in the right way can add 2-3% in
accuracy on existing baselines, particularly in the case of less training data.
This allows the training of complex machine learning models (like Convolutional
Neural Networks) with much less training data while incurring minimal
performance loss.
| null |
http://arxiv.org/abs/1805.11191v1
|
http://arxiv.org/pdf/1805.11191v1.pdf
| null |
[
"Vishal Kaushal",
"Anurag Sahoo",
"Khoshrav Doctor",
"Narasimha Raju",
"Suyash Shetty",
"Pankaj Singh",
"Rishabh Iyer",
"Ganesh Ramakrishnan"
] |
[
"Active Learning",
"BIG-bench Machine Learning",
"Diversity",
"General Classification",
"image-classification",
"Image Classification",
"Object Recognition",
"Scene Recognition"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/graph-based-filtering-of-out-of-vocabulary
|
1805.11189
| null | null |
Graph-based Filtering of Out-of-Vocabulary Words for Encoder-Decoder Models
|
Encoder-decoder models typically only employ words that are frequently used
in the training corpus to reduce the computational costs and exclude noise.
However, this vocabulary set may still include words that interfere with
learning in encoder-decoder models. This paper proposes a method for selecting
more suitable words for learning encoders by utilizing not only frequency, but
also co-occurrence information, which we capture using the HITS algorithm. We
apply our proposed method to two tasks: machine translation and grammatical
error correction. For Japanese-to-English translation, this method achieves a
BLEU score that is 0.56 points more than that of a baseline. It also
outperforms the baseline method for English grammatical error correction, with
an F0.5-measure that is 1.48 points higher.
|
For Japanese-to-English translation, this method achieves a BLEU score that is 0.56 points more than that of a baseline.
|
http://arxiv.org/abs/1805.11189v1
|
http://arxiv.org/pdf/1805.11189v1.pdf
|
ACL 2018 7
|
[
"Satoru Katsumata",
"Yukio Matsumura",
"Hayahide Yamagishi",
"Mamoru Komachi"
] |
[
"Decoder",
"Grammatical Error Correction",
"Machine Translation",
"Translation"
] | 2018-05-28T00:00:00 |
https://aclanthology.org/P18-3016
|
https://aclanthology.org/P18-3016.pdf
|
graph-based-filtering-of-out-of-vocabulary-1
| null |
[] |
https://paperswithcode.com/paper/semi-implicit-variational-inference
|
1805.11183
| null | null |
Semi-Implicit Variational Inference
|
Semi-implicit variational inference (SIVI) is introduced to expand the
commonly used analytic variational distribution family, by mixing the
variational parameter with a flexible distribution. This mixing distribution
can assume any density function, explicit or not, as long as independent random
samples can be generated via reparameterization. Not only does SIVI expand the
variational family to incorporate highly flexible variational distributions,
including implicit ones that have no analytic density functions, but also
sandwiches the evidence lower bound (ELBO) between a lower bound and an upper
bound, and further derives an asymptotically exact surrogate ELBO that is
amenable to optimization via stochastic gradient ascent. With a substantially
expanded variational family and a novel optimization algorithm, SIVI is shown
to closely match the accuracy of MCMC in inferring the posterior in a variety
of Bayesian inference tasks.
|
Semi-implicit variational inference (SIVI) is introduced to expand the commonly used analytic variational distribution family, by mixing the variational parameter with a flexible distribution.
|
http://arxiv.org/abs/1805.11183v1
|
http://arxiv.org/pdf/1805.11183v1.pdf
|
ICML 2018 7
|
[
"Mingzhang Yin",
"Mingyuan Zhou"
] |
[
"Bayesian Inference",
"Variational Inference"
] | 2018-05-28T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2124
|
http://proceedings.mlr.press/v80/yin18b/yin18b.pdf
|
semi-implicit-variational-inference-1
| null |
[] |
https://paperswithcode.com/paper/towards-computational-fluorescence-microscopy
|
1805.11178
| null | null |
Towards computational fluorescence microscopy: Machine learning-based integrated prediction of morphological and molecular tumor profiles
|
Recent advances in cancer research largely rely on new developments in
microscopic or molecular profiling techniques offering high level of detail
with respect to either spatial or molecular features, but usually not both.
Here, we present a novel machine learning-based computational approach that
allows for the identification of morphological tissue features and the
prediction of molecular properties from breast cancer imaging data. This
integration of microanatomic information of tumors with complex molecular
profiling data, including protein or gene expression, copy number variation,
gene methylation and somatic mutations, provides a novel means to
computationally score molecular markers with respect to their relevance to
cancer and their spatial associations within the tumor microenvironment.
| null |
http://arxiv.org/abs/1805.11178v1
|
http://arxiv.org/pdf/1805.11178v1.pdf
| null |
[
"Alexander Binder",
"Michael Bockmayr",
"Miriam Hägele",
"Stephan Wienert",
"Daniel Heim",
"Katharina Hellweg",
"Albrecht Stenzinger",
"Laura Parlow",
"Jan Budczies",
"Benjamin Goeppert",
"Denise Treue",
"Manato Kotani",
"Masaru Ishii",
"Manfred Dietel",
"Andreas Hocke",
"Carsten Denkert",
"Klaus-Robert Müller",
"Frederick Klauschen"
] |
[
"BIG-bench Machine Learning"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-cnn-for-homogneous-riemannian-manifolds
|
1805.05487
| null | null |
A CNN for homogeneous Riemannian manifolds with applications to Neuroimaging
|
Convolutional neural networks are ubiquitous in Machine Learning applications
for solving a variety of problems. However, they cannot be used in their native
form when the domain of the data is a commonly encountered manifold such as the
sphere, the special orthogonal group, the Grassmannian, the manifold of
symmetric positive definite matrices, and others. Most recently, a generalization
of CNNs to data domains such as the 2-sphere, referred to as spherical CNNs
(SCNNs), has been reported by several research groups. The key property of
SCNNs, distinct from CNNs, is that they exhibit the rotational equivariance
property that allows for sharing learned weights within a layer. In this paper,
we theoretically generalize CNNs to Riemannian homogeneous manifolds, which
include but are not limited to the aforementioned example manifolds. Our key
contributions in this work are: (i) A theorem stating that linear group
equivariant systems are fully characterized by correlation of functions on the
domain manifold and vice-versa. This is fundamental to the characterization of
all linear group equivariant systems and parallels the widely used result in
linear system theory for vector spaces. (ii) As a corollary, we prove the
equivariance of the correlation operation to group actions admitted by the
input domains which are Riemannian homogeneous manifolds. (iii) We present the
first end-to-end deep network architecture for classification of diffusion
magnetic resonance image (dMRI) scans acquired from a cohort of 44 Parkinson
Disease patients and 50 control/normal subjects. (iv) A proof of concept
experiment involving synthetic data generated on the manifold of symmetric
positive definite matrices is presented to demonstrate the applicability of our
network to other types of domains.
| null |
http://arxiv.org/abs/1805.05487v3
|
http://arxiv.org/pdf/1805.05487v3.pdf
| null |
[
"Rudrasis Chakraborty",
"Monami Banerjee",
"Baba C. Vemuri"
] |
[] | 2018-05-14T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/strongly-polynomial-efficient-approximation
|
1805.11170
| null | null |
Strongly polynomial efficient approximation scheme for segmentation
|
Partitioning a sequence of length $n$ into $k$ coherent segments (Seg) is one
of the classic optimization problems. As long as the optimization criterion is
additive, Seg can be solved exactly in $O(n^2k)$ time using a classic dynamic
program. Due to the quadratic term, computing the exact segmentation may be too
expensive for long sequences, which has led to the development of approximate
solutions. We consider an existing estimation scheme that computes $(1 +
\epsilon)$ approximation in polylogarithmic time. We augment this algorithm,
making it strongly polynomial. We do this by first solving a slightly different
segmentation problem (MaxSeg), where the quality of the segmentation is the
maximum penalty of an individual segment. By using this solution to initialize
the estimation scheme, we are able to obtain a strongly polynomial algorithm.
In addition, we consider a cumulative version of Seg, where we are asked to
discover the optimal segmentation for each prefix of the input sequence. We
propose a strongly polynomial algorithm that yields $(1 + \epsilon)$
approximation in $O(nk^2 / \epsilon)$ time. Finally, we consider a cumulative
version of MaxSeg, and show that we can solve the problem in $O(nk \log k)$
time.
| null |
http://arxiv.org/abs/1805.11170v2
|
http://arxiv.org/pdf/1805.11170v2.pdf
| null |
[
"Nikolaj Tatti"
] |
[
    "Segmentation"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/resilient-non-submodular-maximization-over
|
1804.01013
| null | null |
Resilient Non-Submodular Maximization over Matroid Constraints
|
The control and sensing of large-scale systems results in combinatorial
problems not only for sensor and actuator placement but also for scheduling or
observability/controllability. Such combinatorial constraints in system design
and implementation can be captured using a structure known as matroids. In
particular, the algebraic structure of matroids can be exploited to develop
scalable algorithms for sensor and actuator selection, along with quantifiable
approximation bounds. However, in large-scale systems, sensors and actuators
may fail or may be (cyber-)attacked. The objective of this paper is to focus on
resilient matroid-constrained problems arising in control and sensing but in
the presence of sensor and actuator failures. In general, resilient
matroid-constrained problems are computationally hard. Contrary to the
non-resilient case (with no failures), even though they often involve objective
functions that are monotone or submodular, no scalable approximation algorithms
are known for their solution. In this paper, we provide the first algorithm,
that also has the following properties: First, it achieves system-wide
resiliency, i.e., the algorithm is valid for any number of denial-of-service
attacks or failures. Second, it is scalable, as our algorithm terminates with
the same running time as state-of-the-art algorithms for (non-resilient)
matroid-constrained optimization. Third, it provides provable approximation
bounds on the system performance, since for monotone objective functions our
algorithm guarantees a solution close to the optimal. We quantify our
algorithm's approximation performance using a notion of curvature for monotone
(not necessarily submodular) set functions. Finally, we support our theoretical
analyses with numerical experiments, by considering a control-aware sensor
selection scenario, namely, sensing-constrained robot navigation.
| null |
http://arxiv.org/abs/1804.01013v4
|
http://arxiv.org/pdf/1804.01013v4.pdf
| null |
[
"Vasileios Tzoumas",
"Ali Jadbabaie",
"George J. Pappas"
] |
[
"Robot Navigation",
"Scheduling"
] | 2018-04-02T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-visual-approach-for-age-and-gender
|
1805.11166
| null | null |
A visual approach for age and gender identification on Twitter
|
The goal of Author Profiling (AP) is to identify demographic aspects (e.g.,
age, gender) from a given set of authors by analyzing their written texts.
Recently, the AP task has gained interest in many problems related to computer
forensics, psychology, marketing, but specially in those related with social
media exploitation. As is known, social media data is shared through a wide
range of modalities (e.g., text, images, and audio), representing rich
information to be exploited for extracting valuable insights from users.
Nevertheless, most of the current work in AP using social media data has been
devoted to analyzing textual information only, and very few works have started
exploring gender identification using visual information. In contrast, this
paper focuses on exploiting the visual modality to perform both age and gender
identification in social media, specifically on Twitter. Our goal is to
evaluate the pertinence of using visual information in solving the AP task.
Accordingly, we have extended the Twitter corpus from PAN 2014, incorporating
posted images from all the users, making a distinction between tweeted and
retweeted images. The performed experiments provide interesting evidence on the
usefulness of visual information in comparison with traditional textual
representations for the AP task.
| null |
http://arxiv.org/abs/1805.11166v1
|
http://arxiv.org/pdf/1805.11166v1.pdf
| null |
[
"Miguel A. Alvarez-Carmona",
"Luis Pellegrin",
"Manuel Montes-y-Gómez",
"Fernando Sánchez-Vega",
"Hugo Jair Escalante",
"A. Pastor López-Monroy",
"Luis Villaseñor-Pineda",
"Esaú Villatoro-Tello"
] |
[
"Author Profiling",
"Marketing"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/confidence-prediction-for-lexicon-free-ocr
|
1805.11161
| null | null |
Confidence Prediction for Lexicon-Free OCR
|
Having a reliable accuracy score is crucial for real world applications of
OCR, since such systems are judged by the number of false readings.
Lexicon-based OCR systems, which deal with what is essentially a multi-class
classification problem, often employ methods explicitly taking into account the
lexicon, in order to improve accuracy. However, in lexicon-free scenarios,
filtering errors requires an explicit confidence calculation. In this work we
show two explicit confidence measurement techniques, and show that they are
able to achieve a significant reduction in misreads on both standard benchmarks
and a proprietary dataset.
| null |
http://arxiv.org/abs/1805.11161v1
|
http://arxiv.org/pdf/1805.11161v1.pdf
| null |
[
"Noam Mor",
"Lior Wolf"
] |
[
"General Classification",
"Multi-class Classification",
"Optical Character Recognition (OCR)",
"Prediction"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/unsupervised-learning-of-artistic-styles-with
|
1805.11155
| null | null |
Unsupervised Learning of Artistic Styles with Archetypal Style Analysis
|
In this paper, we introduce an unsupervised learning approach to
automatically discover, summarize, and manipulate artistic styles from large
collections of paintings. Our method is based on archetypal analysis, which is
an unsupervised learning technique akin to sparse coding with a geometric
interpretation. When applied to deep image representations from a collection of
artworks, it learns a dictionary of archetypal styles, which can be easily
visualized. After training the model, the style of a new image, which is
characterized by local statistics of deep visual features, is approximated by a
sparse convex combination of archetypes. This enables us to interpret which
archetypal styles are present in the input image, and in which proportion.
Finally, our approach allows us to manipulate the coefficients of the latent
archetypal decomposition, and achieve various special effects such as style
enhancement, transfer, and interpolation between multiple archetypes.
| null |
http://arxiv.org/abs/1805.11155v2
|
http://arxiv.org/pdf/1805.11155v2.pdf
|
NeurIPS 2018 12
|
[
"Daan Wynen",
"Cordelia Schmid",
"Julien Mairal"
] |
[] | 2018-05-28T00:00:00 |
http://papers.nips.cc/paper/7893-unsupervised-learning-of-artistic-styles-with-archetypal-style-analysis
|
http://papers.nips.cc/paper/7893-unsupervised-learning-of-artistic-styles-with-archetypal-style-analysis.pdf
|
unsupervised-learning-of-artistic-styles-with-1
| null |
[] |
https://paperswithcode.com/paper/model-robust-counterfactual-prediction-method
|
1705.07019
| null | null |
Model-Robust Counterfactual Prediction Method
|
We develop a novel method for counterfactual analysis based on observational
data using prediction intervals for units under different exposures. Unlike
methods that target heterogeneous or conditional average treatment effects of
an exposure, the proposed approach aims to take into account the irreducible
dispersions of counterfactual outcomes so as to quantify the relative impact of
different exposures. The prediction intervals are constructed in a
distribution-free and model-robust manner based on the conformal prediction
approach. The computational obstacles to this approach are circumvented by
leveraging properties of a tuning-free method that learns sparse additive
predictor models for counterfactual outcomes. The method is illustrated using
both real and synthetic data.
|
We develop a novel method for counterfactual analysis based on observational data using prediction intervals for units under different exposures.
|
http://arxiv.org/abs/1705.07019v5
|
http://arxiv.org/pdf/1705.07019v5.pdf
| null |
[
"Dave Zachariah",
"Petre Stoica"
] |
[
"Conformal Prediction",
"counterfactual",
"model",
"Prediction",
"Prediction Intervals"
] | 2017-05-19T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/unified-pragmatic-models-for-generating-and
|
1711.04987
| null | null |
Unified Pragmatic Models for Generating and Following Instructions
|
We show that explicit pragmatic inference aids in correctly generating and
following natural language instructions for complex, sequential tasks. Our
pragmatics-enabled models reason about why speakers produce certain
instructions, and about how listeners will react upon hearing them. Like
previous pragmatic models, we use learned base listener and speaker models to
build a pragmatic speaker that uses the base listener to simulate the
interpretation of candidate descriptions, and a pragmatic listener that reasons
counterfactually about alternative descriptions. We extend these models to
tasks with sequential structure. Evaluation of language generation and
interpretation shows that pragmatic inference improves state-of-the-art
listener models (at correctly interpreting human instructions) and speaker
models (at producing instructions correctly interpreted by humans) in diverse
settings.
|
We show that explicit pragmatic inference aids in correctly generating and following natural language instructions for complex, sequential tasks.
|
http://arxiv.org/abs/1711.04987v3
|
http://arxiv.org/pdf/1711.04987v3.pdf
|
NAACL 2018 6
|
[
"Daniel Fried",
"Jacob Andreas",
"Dan Klein"
] |
[
"Text Generation"
] | 2017-11-14T00:00:00 |
https://aclanthology.org/N18-1177
|
https://aclanthology.org/N18-1177.pdf
|
unified-pragmatic-models-for-generating-and-1
| null |
[] |
https://paperswithcode.com/paper/an-expectation-conditional-maximization
|
1709.06970
| null | null |
An Expectation Conditional Maximization approach for Gaussian graphical models
|
Bayesian graphical models are a useful tool for understanding dependence
relationships among many variables, particularly in situations with external
prior information. In high-dimensional settings, the space of possible graphs
becomes enormous, rendering even state-of-the-art Bayesian stochastic search
computationally infeasible. We propose a deterministic alternative to estimate
Gaussian and Gaussian copula graphical models using an Expectation Conditional
Maximization (ECM) algorithm, extending the EM approach from Bayesian variable
selection to graphical model estimation. We show that the ECM approach enables
fast posterior exploration under a sequence of mixture priors, and can
incorporate multiple sources of information.
|
Bayesian graphical models are a useful tool for understanding dependence relationships among many variables, particularly in situations with external prior information.
|
http://arxiv.org/abs/1709.06970v3
|
http://arxiv.org/pdf/1709.06970v3.pdf
| null |
[
"Zehang Richard Li",
"Tyler H. McCormick"
] |
[
"Variable Selection"
] | 2017-09-20T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/string-methods-for-stochastic-image-and-shape
|
1805.06038
| null | null |
String Methods for Stochastic Image and Shape Matching
|
Matching of images and analysis of shape differences is traditionally pursued
by energy minimization of paths of deformations acting to match the shape
objects. In the Large Deformation Diffeomorphic Metric Mapping (LDDMM)
framework, iterative gradient descents on the matching functional lead to
matching algorithms informally known as Beg algorithms. When stochasticity is
introduced to model stochastic variability of shapes and to provide more
realistic models of observed shape data, the corresponding matching problem can
be solved with a stochastic Beg algorithm, similar to the finite temperature
string method used in rare event sampling. In this paper, we apply a stochastic
model compatible with the geometry of the LDDMM framework to obtain a
stochastic model of images and we derive the stochastic version of the Beg
algorithm which we compare with the string method and an
expectation-maximization optimization of posterior likelihoods. The algorithm
and its use for statistical inference is tested on stochastic LDDMM landmarks
and images.
| null |
http://arxiv.org/abs/1805.06038v3
|
http://arxiv.org/pdf/1805.06038v3.pdf
| null |
[
"Alexis Arnaudon",
"Darryl Holm",
"Stefan Sommer"
] |
[] | 2018-05-15T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/unsupervised-learning-of-word-sequence
|
1606.03153
| null | null |
Unsupervised Learning of Word-Sequence Representations from Scratch via Convolutional Tensor Decomposition
|
Unsupervised text embeddings extraction is crucial for text understanding in
machine learning. Word2Vec and its variants have received substantial success
in mapping words with similar syntactic or semantic meaning to vectors close to
each other. However, extracting context-aware word-sequence embedding remains a
challenging task. Training over a large corpus is difficult, as labels are
hard to obtain. More importantly, it is challenging for pre-trained models to
obtain word-sequence embeddings that are universally good for all downstream
tasks or for any new datasets. We propose a two-phased ConvDic+DeconvDec
framework to solve the problem by combining a word-sequence dictionary learning
model with a word-sequence embedding decode model. We propose a convolutional
tensor decomposition mechanism to learn a good word-sequence phrase dictionary in
the learning phase. It is proved to be more accurate and much more efficient
than the popular alternating minimization method. In the decode phase, we
introduce a deconvolution framework that is immune to the problem of varying
sentence lengths. The word-sequence embeddings we extracted using
ConvDic+DeconvDec are universally good for a few downstream tasks we test on.
The framework requires neither pre-training nor prior/outside information.
| null |
http://arxiv.org/abs/1606.03153v3
|
http://arxiv.org/pdf/1606.03153v3.pdf
| null |
[
"Furong Huang",
"Animashree Anandkumar"
] |
[
"Dictionary Learning",
"Sentence",
"Tensor Decomposition"
] | 2016-06-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/core-conflictual-relationship-text-mining-to
|
1805.11140
| null | null |
Core Conflictual Relationship: Text Mining to Discover What and When
|
Following a detailed presentation of the Core Conflictual Relationship Theme
(CCRT), the objective is to find relevant methods for what has been described
as verbalization and visualization of data. Such work is also termed data
mining, text mining, and knowledge discovery in data. The Correspondence
Analysis methodology, also termed Geometric Data Analysis, is shown in a case
study to be comprehensive and revealing. Computational efficiency depends on
how the analysis process is structured. For both illustrative and revealing
aspects of the case study here, relatively extensive dream reports are used.
This Geometric Data Analysis confirms the validity of the CCRT method.
| null |
http://arxiv.org/abs/1805.11140v1
|
http://arxiv.org/pdf/1805.11140v1.pdf
| null |
[
"Fionn Murtagh",
"Giuseppe Iurato"
] |
[
"Computational Efficiency"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/prioritizing-network-communities
|
1805.02411
| null | null |
Prioritizing network communities
|
Uncovering modular structure in networks is fundamental for systems in
biology, physics, and engineering. Community detection identifies candidate
modules as hypotheses, which then need to be validated through experiments,
such as mutagenesis in a biological laboratory. Only a few communities can
typically be validated, and it is thus important to prioritize which
communities to select for downstream experimentation. Here we develop CRank, a
mathematically principled approach for prioritizing network communities. CRank
efficiently evaluates robustness and magnitude of structural features of each
community and then combines these features into the community prioritization.
CRank can be used with any community detection method. It needs only
information provided by the network structure and does not require any
additional metadata or labels. However, when available, CRank can incorporate
domain-specific information to further boost performance. Experiments on many
large networks show that CRank effectively prioritizes communities, yielding a
nearly 50-fold improvement in community prioritization.
| null |
http://arxiv.org/abs/1805.02411v2
|
http://arxiv.org/pdf/1805.02411v2.pdf
| null |
[
"Marinka Zitnik",
"Rok Sosic",
"Jure Leskovec"
] |
[
"Community Detection"
] | 2018-05-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/object-counting-with-small-datasets-of-large
|
1805.11123
| null | null |
Global Sum Pooling: A Generalization Trick for Object Counting with Small Datasets of Large Images
|
In this paper, we explore the problem of training one-look regression models for counting objects in datasets comprising a small number of high-resolution, variable-shaped images. We illustrate that conventional global average pooling (GAP) based models are unreliable due to the patchwise cancellation of true overestimates and underestimates for patchwise inference. To overcome this limitation and reduce overfitting caused by the training on full-resolution images, we propose to employ global sum pooling (GSP) instead of GAP or fully connected (FC) layers at the backend of a convolutional network. Although computationally equivalent to GAP, we show through comprehensive experimentation that GSP allows convolutional networks to learn the counting task as a simple linear mapping problem generalized over the input shape and the number of objects present. This generalization capability allows GSP to avoid both patchwise cancellation and overfitting by training on small patches and inference on full-resolution images as a whole. We evaluate our approach on four different aerial image datasets - two car counting datasets (CARPK and COWC), one crowd counting dataset (ShanghaiTech; parts A and B) and one new challenging dataset for wheat spike counting. Our GSP models improve upon the state-of-the-art approaches on all four datasets with a simple architecture. Also, GSP architectures trained with smaller-sized image patches exhibit better localization property due to their focus on learning from smaller regions while training.
| null |
https://arxiv.org/abs/1805.11123v2
|
https://arxiv.org/pdf/1805.11123v2.pdf
| null |
[
"Shubhra Aich",
"Ian Stavness"
] |
[
"Crowd Counting",
"Object Counting"
] | 2018-05-28T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/baa592b215804927e28638f6a7f3318cbc411d49/torchvision/models/resnet.py#L157",
"description": "**Global Average Pooling** is a pooling operation designed to replace fully connected layers in classical CNNs. The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers on top of the feature maps, we take the average of each feature map, and the resulting vector is fed directly into the [softmax](https://paperswithcode.com/method/softmax) layer. \r\n\r\nOne advantage of global [average pooling](https://paperswithcode.com/method/average-pooling) over the fully connected layers is that it is more native to the [convolution](https://paperswithcode.com/method/convolution) structure by enforcing correspondences between feature maps and categories. Thus the feature maps can be easily interpreted as categories confidence maps. Another advantage is that there is no parameter to optimize in the global average pooling thus overfitting is avoided at this layer. Furthermore, global average pooling sums out the spatial information, thus it is more robust to spatial translations of the input.",
"full_name": "Global Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Global Average Pooling",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "",
"description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)",
"full_name": "Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Average Pooling",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/a-crowd-annotated-spanish-corpus-for-humor
|
1710.00477
| null | null |
A Crowd-Annotated Spanish Corpus for Humor Analysis
|
Computational Humor involves several tasks, such as humor recognition, humor
generation, and humor scoring, for which it is useful to have human-curated
data. In this work we present a corpus of 27,000 tweets written in Spanish and
crowd-annotated by their humor value and funniness score, with about four
annotations per tweet, tagged by 1,300 people over the Internet. It is equally
divided between tweets coming from humorous and non-humorous accounts. The
inter-annotator agreement Krippendorff's alpha value is 0.5710. The dataset is
available for general use and can serve as a basis for humor detection and as a
first step to tackle subjectivity.
|
Computational Humor involves several tasks, such as humor recognition, humor generation, and humor scoring, for which it is useful to have human-curated data.
|
http://arxiv.org/abs/1710.00477v4
|
http://arxiv.org/pdf/1710.00477v4.pdf
|
WS 2018 7
|
[
"Santiago Castro",
"Luis Chiruzzo",
"Aiala Rosá",
"Diego Garat",
"Guillermo Moncecchi"
] |
[
"Humor Detection"
] | 2017-10-02T00:00:00 |
https://aclanthology.org/W18-3502
|
https://aclanthology.org/W18-3502.pdf
|
a-crowd-annotated-spanish-corpus-for-humor-1
| null |
[] |
https://paperswithcode.com/paper/more-than-a-feeling-learning-to-grasp-and
|
1805.11085
| null | null |
More Than a Feeling: Learning to Grasp and Regrasp using Vision and Touch
|
For humans, the process of grasping an object relies heavily on rich tactile
feedback. Most recent robotic grasping work, however, has been based only on
visual input, and thus cannot easily benefit from feedback after initiating
contact. In this paper, we investigate how a robot can learn to use tactile
information to iteratively and efficiently adjust its grasp. To this end, we
propose an end-to-end action-conditional model that learns regrasping policies
from raw visuo-tactile data. This model -- a deep, multimodal convolutional
network -- predicts the outcome of a candidate grasp adjustment, and then
executes a grasp by iteratively selecting the most promising actions. Our
approach requires neither calibration of the tactile sensors, nor any
analytical modeling of contact forces, thus reducing the engineering effort
required to obtain efficient grasping policies. We train our model with data
from about 6,450 grasping trials on a two-finger gripper equipped with GelSight
high-resolution tactile sensors on each finger. Across extensive experiments,
our approach outperforms a variety of baselines at (i) estimating grasp
adjustment outcomes, (ii) selecting efficient grasp adjustments for quick
grasping, and (iii) reducing the amount of force applied at the fingers, while
maintaining competitive performance. Finally, we study the choices made by our
model and show that it has successfully acquired useful and interpretable
grasping behaviors.
| null |
http://arxiv.org/abs/1805.11085v2
|
http://arxiv.org/pdf/1805.11085v2.pdf
| null |
[
"Roberto Calandra",
"Andrew Owens",
"Dinesh Jayaraman",
"Justin Lin",
"Wenzhen Yuan",
"Jitendra Malik",
"Edward H. Adelson",
"Sergey Levine"
] |
[
"Robotic Grasping"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/fast-abstractive-summarization-with-reinforce
|
1805.11080
| null | null |
Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting
|
Inspired by how humans summarize long documents, we propose an accurate and
fast summarization model that first selects salient sentences and then rewrites
them abstractively (i.e., compresses and paraphrases) to generate a concise
overall summary. We use a novel sentence-level policy gradient method to bridge
the non-differentiable computation between these two neural networks in a
hierarchical way, while maintaining language fluency. Empirically, we achieve
the new state-of-the-art on all metrics (including human evaluation) on the
CNN/Daily Mail dataset, as well as significantly higher abstractiveness scores.
Moreover, by first operating at the sentence-level and then the word-level, we
enable parallel decoding of our neural generative model that results in
substantially faster (10-20x) inference speed as well as 4x faster training
convergence than previous long-paragraph encoder-decoder models. We also
demonstrate the generalization of our model on the test-only DUC-2002 dataset,
where we achieve higher scores than a state-of-the-art model.
|
Inspired by how humans summarize long documents, we propose an accurate and fast summarization model that first selects salient sentences and then rewrites them abstractively (i.e., compresses and paraphrases) to generate a concise overall summary.
|
http://arxiv.org/abs/1805.11080v1
|
http://arxiv.org/pdf/1805.11080v1.pdf
|
ACL 2018 7
|
[
"Yen-Chun Chen",
"Mohit Bansal"
] |
[
"Abstractive Text Summarization",
"Decoder",
"Sentence",
"Sentence ReWriting",
"Text Summarization"
] | 2018-05-28T00:00:00 |
https://aclanthology.org/P18-1063
|
https://aclanthology.org/P18-1063.pdf
|
fast-abstractive-summarization-with-reinforce-1
| null |
[
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
}
] |
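The depthwise separable layers that the SPEED description above credits for its low cost can be sketched as the usual two-stage factorization (a toy NumPy illustration, not the authors' implementation; shapes, 'valid' padding, and stride 1 are assumptions):

```python
import numpy as np

def depthwise_separable_conv(x, depthwise, pointwise):
    """Depthwise separable convolution (toy, 'valid' padding, stride 1).

    depthwise: (C, k, k), one spatial filter per input channel
    pointwise: (C_out, C), a 1x1 convolution mixing across channels
    """
    C, H, W = x.shape
    k = depthwise.shape[1]
    oh, ow = H - k + 1, W - k + 1
    mid = np.zeros((C, oh, ow))
    for c in range(C):                      # spatial step, per channel
        for i in range(oh):
            for j in range(ow):
                mid[c, i, j] = np.sum(x[c, i:i + k, j:j + k] * depthwise[c])
    return np.einsum('oc,chw->ohw', pointwise, mid)  # 1x1 channel mix

y = depthwise_separable_conv(np.ones((2, 5, 5)), np.ones((2, 3, 3)), np.ones((3, 2)))
assert y.shape == (3, 3, 3) and np.allclose(y, 18.0)
```

The factorization costs about k·k multiplies per pixel per input channel plus a 1x1 channel mix, far fewer than a full convolution's k·k·C_in multiplies per output channel, which is why such layers suit low-resource hardware.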
https://paperswithcode.com/paper/training-dnns-with-hybrid-block-floating
|
1804.01526
| null | null |
Training DNNs with Hybrid Block Floating Point
|
The wide adoption of DNNs has given birth to unrelenting computing
requirements, forcing datacenter operators to adopt domain-specific
accelerators to train them. These accelerators typically employ densely packed
full precision floating-point arithmetic to maximize performance per area.
Ongoing research efforts seek to further increase that performance density by
replacing floating-point with fixed-point arithmetic. However, a significant
roadblock for these attempts has been fixed point's narrow dynamic range, which
is insufficient for DNN training convergence. We identify block floating point
(BFP) as a promising alternative representation since it exhibits wide dynamic
range and enables the majority of DNN operations to be performed with
fixed-point logic. Unfortunately, BFP alone introduces several limitations that
preclude its direct applicability. In this work, we introduce HBFP, a hybrid
BFP-FP approach, which performs all dot products in BFP and other operations in
floating point. HBFP delivers the best of both worlds: the high accuracy of
floating point at the superior hardware density of fixed point. For a wide
variety of models, we show that HBFP matches floating point's accuracy while
enabling hardware implementations that deliver up to 8.5x higher throughput.
| null |
http://arxiv.org/abs/1804.01526v4
|
http://arxiv.org/pdf/1804.01526v4.pdf
|
NeurIPS 2018 12
|
[
"Mario Drumond",
"Tao Lin",
"Martin Jaggi",
"Babak Falsafi"
] |
[] | 2018-04-04T00:00:00 |
http://papers.nips.cc/paper/7327-training-dnns-with-hybrid-block-floating-point
|
http://papers.nips.cc/paper/7327-training-dnns-with-hybrid-block-floating-point.pdf
|
training-dnns-with-hybrid-block-floating-1
| null |
[] |
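The block floating point representation discussed in the abstract above can be sketched as follows (a toy quantizer with an assumed 8-bit mantissa, not the paper's HBFP pipeline): all values in a block share one exponent, and only fixed-point mantissas differ per value.

```python
import numpy as np

def to_bfp(block, mantissa_bits=8):
    """Quantize a block of values to block floating point (toy sketch).

    Every value shares one exponent, derived from the largest magnitude
    in the block; individual values keep only a fixed-point mantissa.
    """
    max_abs = np.max(np.abs(block))
    if max_abs == 0:
        return np.zeros_like(block)
    shared_exp = np.ceil(np.log2(max_abs))           # one exponent per block
    scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
    mantissas = np.round(block / scale)              # fixed-point mantissas
    return mantissas * scale                         # dequantized values

x = np.array([0.5, -0.25, 0.1251, 0.0])
assert np.allclose(to_bfp(x), [0.5, -0.25, 0.125, 0.0])
```

Dot products between two such blocks reduce to integer mantissa arithmetic plus a single exponent addition, which is the hardware-density argument the abstract makes for performing all dot products in BFP.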
https://paperswithcode.com/paper/dataflow-matrix-machines-as-a-generalization
|
1603.09002
| null | null |
Dataflow Matrix Machines as a Generalization of Recurrent Neural Networks
|
Dataflow matrix machines are a powerful generalization of recurrent neural
networks. They work with multiple types of arbitrary linear streams, multiple
types of powerful neurons, and allow higher-order constructions to be incorporated.
We expect them to be useful in machine learning and probabilistic programming,
and in the synthesis of dynamic systems and of deterministic and probabilistic
programs.
|
Dataflow matrix machines are a powerful generalization of recurrent neural networks.
|
http://arxiv.org/abs/1603.09002v2
|
http://arxiv.org/pdf/1603.09002v2.pdf
| null |
[
"Michael Bukatin",
"Steve Matthews",
"Andrey Radul"
] |
[
"BIG-bench Machine Learning",
"Probabilistic Programming"
] | 2016-03-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/reward-constrained-policy-optimization
|
1805.11074
| null |
SkfrvsA9FX
|
Reward Constrained Policy Optimization
|
Solving tasks in Reinforcement Learning is no easy feat. As the goal of the
agent is to maximize the accumulated reward, it often learns to exploit
loopholes and misspecifications in the reward signal resulting in unwanted
behavior. While constraints may solve this issue, there is no closed form
solution for general constraints. In this work we present a novel
multi-timescale approach for constrained policy optimization, called `Reward
Constrained Policy Optimization' (RCPO), which uses an alternative penalty
signal to guide the policy towards a constraint satisfying one. We prove the
convergence of our approach and provide empirical evidence of its ability to
train constraint satisfying policies.
|
Solving tasks in Reinforcement Learning is no easy feat.
|
http://arxiv.org/abs/1805.11074v3
|
http://arxiv.org/pdf/1805.11074v3.pdf
|
ICLR 2019 5
|
[
"Chen Tessler",
"Daniel J. Mankowitz",
"Shie Mannor"
] |
[
"Reinforcement Learning",
"Reinforcement Learning (RL)",
"Safe Reinforcement Learning"
] | 2018-05-28T00:00:00 |
https://openreview.net/forum?id=SkfrvsA9FX
|
https://openreview.net/pdf?id=SkfrvsA9FX
|
reward-constrained-policy-optimization-1
| null |
[] |
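The multi-timescale penalty idea in the abstract above can be sketched like this (a minimal illustration with made-up numbers, not the authors' algorithm): the policy maximizes reward minus lambda times the constraint signal, while lambda itself is adapted on a slower timescale toward constraint satisfaction.

```python
def rcpo_lambda_updates(constraint_signal, limit, lmbda=0.0, lr=0.1):
    """Slow-timescale update of the penalty coefficient (toy sketch).

    lambda grows while the observed constraint signal exceeds its limit
    and decays (never below zero) once the constraint is satisfied; the
    fast-timescale policy update (omitted here) would maximize
    reward - lambda * constraint.
    """
    history = []
    for c in constraint_signal:
        lmbda = max(0.0, lmbda + lr * (c - limit))
        history.append(lmbda)
    return history

# constraint violated early, satisfied later: lambda rises, then decays
hist = rcpo_lambda_updates([1.0, 1.0, 0.2, 0.2, 0.2], limit=0.5)
assert hist[1] > hist[0] >= 0.0 and hist[-1] < hist[1]
```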
https://paperswithcode.com/paper/non-bifurcating-phylogenetic-tree-inference
|
1805.11073
| null | null |
Non-bifurcating phylogenetic tree inference via the adaptive LASSO
|
Phylogenetic tree inference using deep DNA sequencing is reshaping our understanding of rapidly evolving systems, such as the within-host battle between viruses and the immune system. Densely sampled phylogenetic trees can contain special features, including "sampled ancestors" in which we sequence a genotype along with its direct descendants, and "polytomies" in which multiple descendants arise simultaneously. These features are apparent after identifying zero-length branches in the tree. However, current maximum-likelihood based approaches are not capable of revealing such zero-length branches. In this paper, we find these zero-length branches by introducing adaptive-LASSO-type regularization estimators to phylogenetics, deriving their properties, and showing regularization to be a practically useful approach for phylogenetics.
|
Phylogenetic tree inference using deep DNA sequencing is reshaping our understanding of rapidly evolving systems, such as the within-host battle between viruses and the immune system.
|
https://arxiv.org/abs/1805.11073v2
|
https://arxiv.org/pdf/1805.11073v2.pdf
| null |
[
"Cheng Zhang",
"Vu Dinh",
"Frederick A. Matsen IV"
] |
[] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
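The zero-length-branch recovery described above hinges on a LASSO-type shrinkage step, which can be sketched as follows (a toy proximal update with hypothetical weights, not the paper's phylogenetic estimator):

```python
def adaptive_lasso_prox(branch_lengths, weights, lam):
    """Soft-thresholding step of an adaptive-LASSO penalty (toy sketch).

    Each branch length is shrunk toward zero by its own data-driven
    weight; short branches are driven exactly to zero, which is how
    polytomies and sampled ancestors show up as zero-length branches.
    """
    out = []
    for b, w in zip(branch_lengths, weights):
        out.append(max(0.0, b - lam * w))  # branch lengths stay non-negative
    return out

shrunk = adaptive_lasso_prox([0.5, 0.01, 0.0], weights=[1.0, 5.0, 1.0], lam=0.02)
assert abs(shrunk[0] - 0.48) < 1e-12 and shrunk[1] == 0.0 and shrunk[2] == 0.0
```

The exact zeros are the point: plain maximum likelihood can make a branch short but never exactly zero, whereas the penalty's soft threshold can.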
https://paperswithcode.com/paper/blockcnn-a-deep-network-for-artifact-removal
|
1805.11091
| null | null |
BlockCNN: A Deep Network for Artifact Removal and Image Compression
|
We present a general technique that performs both artifact removal and image
compression. For artifact removal, we input a JPEG image and try to remove its
compression artifacts. For compression, we input an image and process its 8 by
8 blocks in a sequence. For each block, we first try to predict its intensities
based on previous blocks; then, we store a residual with respect to the input
image. Our technique reuses JPEG's legacy compression and decompression
routines. Both our artifact removal and our image compression techniques use
the same deep network, but with different training weights. Our technique is
simple and fast and it significantly improves the performance of artifact
removal and image compression.
|
For artifact removal, we input a JPEG image and try to remove its compression artifacts.
|
http://arxiv.org/abs/1805.11091v1
|
http://arxiv.org/pdf/1805.11091v1.pdf
| null |
[
"Danial Maleki",
"Soheila Nadalian",
"Mohammad Mahdi Derakhshani",
"Mohammad Amin Sadeghi"
] |
[
"Image Compression"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
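The predict-then-store-residual scheme in the abstract above can be sketched as follows (a toy block codec with a mean predictor standing in for the deep network; block size and raster scan order are assumptions):

```python
import numpy as np

BLOCK = 4  # small blocks for the sketch; JPEG-style coding uses 8x8

def predict(prev_block):
    # stand-in predictor: the paper uses a deep network, we use a flat mean
    return np.full_like(prev_block, prev_block.mean())

def encode(image):
    """Store, per block, only the residual against a prediction."""
    residuals = np.zeros_like(image)
    prev = np.zeros((BLOCK, BLOCK))
    for i in range(0, image.shape[0], BLOCK):
        for j in range(0, image.shape[1], BLOCK):
            cur = image[i:i + BLOCK, j:j + BLOCK]
            residuals[i:i + BLOCK, j:j + BLOCK] = cur - predict(prev)
            prev = cur
    return residuals

def decode(residuals):
    """Invert encode(): rebuild each block as prediction + residual."""
    out = np.zeros_like(residuals)
    prev = np.zeros((BLOCK, BLOCK))
    for i in range(0, residuals.shape[0], BLOCK):
        for j in range(0, residuals.shape[1], BLOCK):
            cur = predict(prev) + residuals[i:i + BLOCK, j:j + BLOCK]
            out[i:i + BLOCK, j:j + BLOCK] = cur
            prev = cur
    return out

img = np.arange(64, dtype=float).reshape(8, 8)
assert np.allclose(decode(encode(img)), img)  # round trip is (near-)lossless
```

Compression comes from the residuals being smaller and more compressible than raw intensities; the same decomposition lets the network be reused for artifact removal by retraining its weights, as the record notes.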
https://paperswithcode.com/paper/state-denoised-recurrent-neural-networks
|
1805.08394
| null |
HJgyAoRqFQ
|
State-Denoised Recurrent Neural Networks
|
Recurrent neural networks (RNNs) are difficult to train on sequence
processing tasks, not only because input noise may be amplified through
feedback, but also because any inaccuracy in the weights has similar
consequences as input noise. We describe a method for denoising the hidden
state during training to achieve more robust representations thereby improving
generalization performance. Attractor dynamics are incorporated into the hidden
state to `clean up' representations at each step of a sequence. The attractor
dynamics are trained through an auxiliary denoising loss to recover previously
experienced hidden states from noisy versions of those states. This
state-denoised recurrent neural network (SDRNN) performs multiple steps of
internal processing for each external sequence step. On a range of tasks, we
show that the SDRNN outperforms a generic RNN as well as a variant of the SDRNN
with attractor dynamics on the hidden state but without the auxiliary loss. We
argue that attractor dynamics---and corresponding connectivity
constraints---are an essential component of the deep learning arsenal and
should be invoked not only for recurrent networks but also for improving deep
feedforward nets and intertask transfer.
| null |
http://arxiv.org/abs/1805.08394v2
|
http://arxiv.org/pdf/1805.08394v2.pdf
|
ICLR 2019 5
|
[
"Michael C. Mozer",
"Denis Kazakov",
"Robert V. Lindsey"
] |
[
"Denoising"
] | 2018-05-22T00:00:00 |
https://openreview.net/forum?id=HJgyAoRqFQ
|
https://openreview.net/pdf?id=HJgyAoRqFQ
|
state-denoised-recurrent-neural-networks-1
| null |
[] |
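The attractor 'clean up' step described in the abstract above can be sketched as follows (a toy cleanup over a fixed set of stored patterns, not the trained attractor network of the paper):

```python
import numpy as np

def attractor_cleanup(state, patterns, steps=10, rate=0.5):
    """Denoise a hidden state with a few attractor steps (toy sketch).

    Each step pulls the state toward the stored pattern it currently
    resembles most, mimicking the 'clean up' role of the SDRNN's
    (trained) attractor dynamics between external sequence steps.
    """
    s = state.astype(float).copy()
    for _ in range(steps):
        target = patterns[np.argmax(patterns @ s)]  # best-matching pattern
        s = (1 - rate) * s + rate * target          # relax toward it
    return s

patterns = np.array([[1.0, 0.0], [0.0, 1.0]])
noisy = np.array([0.9, 0.2])                 # corrupted version of pattern 0
clean = attractor_cleanup(noisy, patterns)
assert np.linalg.norm(clean - patterns[0]) < 0.01
```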
https://paperswithcode.com/paper/theory-and-experiments-on-vector-quantized
|
1805.11063
| null | null |
Theory and Experiments on Vector Quantized Autoencoders
|
Deep neural networks with discrete latent variables offer the promise of
better symbolic reasoning, and learning abstractions that are more useful to
new tasks. There has been a surge in interest in discrete latent variable
models, however, despite several recent improvements, the training of discrete
latent variable models has remained challenging and their performance has
mostly failed to match their continuous counterparts. Recent work on vector
quantized autoencoders (VQ-VAE) has made substantial progress in this
direction, with its perplexity almost matching that of a VAE on datasets such
as CIFAR-10. In this work, we investigate an alternate training technique for
VQ-VAE, inspired by its connection to the Expectation Maximization (EM)
algorithm. Training the discrete bottleneck with EM helps us achieve better
image generation results on CIFAR-10, and together with knowledge distillation,
allows us to develop a non-autoregressive machine translation model whose
accuracy almost matches a strong greedy autoregressive baseline Transformer,
while being 3.3 times faster at inference.
|
Deep neural networks with discrete latent variables offer the promise of better symbolic reasoning, and learning abstractions that are more useful to new tasks.
|
http://arxiv.org/abs/1805.11063v2
|
http://arxiv.org/pdf/1805.11063v2.pdf
| null |
[
"Aurko Roy",
"Ashish Vaswani",
"Arvind Neelakantan",
"Niki Parmar"
] |
[
"Image Generation",
"Knowledge Distillation",
"Machine Translation",
"Translation"
] | 2018-05-28T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "A **Linear Layer** is a projection $\\mathbf{XW + b}$.",
"full_name": "Linear Layer",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Linear Layer",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)",
"full_name": "Absolute Position Encodings",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Position Embeddings",
"parent": null
},
"name": "Absolute Position Encodings",
"source_title": "Attention Is All You Need",
"source_url": "https://arxiv.org/abs/1706.03762v7"
},
{
"code_snippet_url": null,
"description": "**Position-Wise Feed-Forward Layer** is a type of [feedforward layer](https://www.paperswithcode.com/method/category/feedforwad-networks) consisting of two [dense layers](https://www.paperswithcode.com/method/dense-connections) that applies to the last dimension, which means the same dense layers are used for each position item in the sequence, so called position-wise.",
"full_name": "Position-Wise Feed-Forward Layer",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Position-Wise Feed-Forward Layer",
"source_title": "Attention Is All You Need",
"source_url": "https://arxiv.org/abs/1706.03762v7"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/b7bda236d18815052378c88081f64935427d7716/torch/optim/adam.py#L6",
"description": "**Adam** is an adaptive learning rate optimization algorithm that utilises both momentum and scaling, combining the benefits of [RMSProp](https://paperswithcode.com/method/rmsprop) and [SGD w/th Momentum](https://paperswithcode.com/method/sgd-with-momentum). The optimizer is designed to be appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. \r\n\r\nThe weight updates are performed as:\r\n\r\n$$ w_{t} = w_{t-1} - \\eta\\frac{\\hat{m}\\_{t}}{\\sqrt{\\hat{v}\\_{t}} + \\epsilon} $$\r\n\r\nwith\r\n\r\n$$ \\hat{m}\\_{t} = \\frac{m_{t}}{1-\\beta^{t}_{1}} $$\r\n\r\n$$ \\hat{v}\\_{t} = \\frac{v_{t}}{1-\\beta^{t}_{2}} $$\r\n\r\n$$ m_{t} = \\beta_{1}m_{t-1} + (1-\\beta_{1})g_{t} $$\r\n\r\n$$ v_{t} = \\beta_{2}v_{t-1} + (1-\\beta_{2})g_{t}^{2} $$\r\n\r\n\r\n$ \\eta $ is the step size/learning rate, around 1e-3 in the original paper. $ \\epsilon $ is a small number, typically 1e-8 or 1e-10, to prevent dividing by zero. $ \\beta_{1} $ and $ \\beta_{2} $ are forgetting parameters, with typical values 0.9 and 0.999, respectively.",
"full_name": "Adam",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "Adam",
"source_title": "Adam: A Method for Stochastic Optimization",
"source_url": "http://arxiv.org/abs/1412.6980v9"
},
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/fec78a687210851f055f792d45300d27cc60ae41/transformer/SubLayers.py#L9",
"description": "**Multi-head Attention** is a module for attention mechanisms which runs through an attention mechanism several times in parallel. The independent attention outputs are then concatenated and linearly transformed into the expected dimension. Intuitively, multiple attention heads allows for attending to parts of the sequence differently (e.g. longer-term dependencies versus shorter-term dependencies). \r\n\r\n$$ \\text{MultiHead}\\left(\\textbf{Q}, \\textbf{K}, \\textbf{V}\\right) = \\left[\\text{head}\\_{1},\\dots,\\text{head}\\_{h}\\right]\\textbf{W}_{0}$$\r\n\r\n$$\\text{where} \\text{ head}\\_{i} = \\text{Attention} \\left(\\textbf{Q}\\textbf{W}\\_{i}^{Q}, \\textbf{K}\\textbf{W}\\_{i}^{K}, \\textbf{V}\\textbf{W}\\_{i}^{V} \\right) $$\r\n\r\nAbove $\\textbf{W}$ are all learnable parameter matrices.\r\n\r\nNote that [scaled dot-product attention](https://paperswithcode.com/method/scaled) is most commonly used in this module, although in principle it can be swapped out for other types of attention mechanism.\r\n\r\nSource: [Lilian Weng](https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html#a-family-of-attention-mechanisms)",
"full_name": "Multi-Head Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Modules** refer to modules that incorporate attention mechanisms. For example, multi-head attention is a module that incorporates multiple attention heads. Below you can find a continuously updating list of attention modules.",
"name": "Attention Modules",
"parent": "Attention"
},
"name": "Multi-Head Attention",
"source_title": "Attention Is All You Need",
"source_url": "https://arxiv.org/abs/1706.03762v7"
},
{
"code_snippet_url": null,
"description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.",
"full_name": "Byte Pair Encoding",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "BPE",
"source_title": "Neural Machine Translation of Rare Words with Subword Units",
"source_url": "http://arxiv.org/abs/1508.07909v5"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)",
"full_name": "Label Smoothing",
"introduced_year": 1985,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Label Smoothing",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": "",
"description": "",
"full_name": "Attention Is All You Need",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "If you're looking to get in touch with American Airlines fast, ☎️+1-801-(855)-(5905)or +1-804-853-9001✅ there are\r\nseveral efficient ways to reach their customer service team. The quickest method is to dial ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. American’s phone service ensures that you can speak with a live\r\nrepresentative promptly to resolve any issues or queries regarding your booking, reservation,\r\nor any changes, such as name corrections or ticket cancellations.",
"name": "Attention Mechanisms",
"parent": "Attention"
},
"name": "Attention",
"source_title": "Attention Is All You Need",
"source_url": "https://arxiv.org/abs/1706.03762v7"
},
{
"code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201",
"description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).",
"full_name": "Transformer",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Transformer",
"source_title": "Attention Is All You Need",
"source_url": "https://arxiv.org/abs/1706.03762v7"
},
{
"code_snippet_url": "",
"description": "In today’s digital age, USD Coin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a USD Coin transaction not confirmed, your USD Coin wallet not showing balance, or you're trying to recover a lost USD Coin wallet, knowing where to get help is essential. That’s why the USD Coin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the USD Coin Customer Support Number +1-833-534-1729\r\nUSD Coin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. USD Coin Transaction Not Confirmed\r\nOne of the most common concerns is when a USD Coin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. USD Coin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A USD Coin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost USD Coin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. 
Knowing how to recover a lost USD Coin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. USD Coin Deposit Not Received\r\nIf someone has sent you USD Coin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A USD Coin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. USD Coin Transaction Stuck or Pending\r\nSometimes your USD Coin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. USD Coin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word USD Coin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the USD Coin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and USD Coin tech.\r\n\r\n24/7 Availability: USD Coin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About USD Coin Support and Wallet Issues\r\nQ1: Can USD Coin support help me recover stolen BTC?\r\nA: While USD Coin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: USD Coin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not USD Coin’s official number (USD Coin is decentralized), it connects you to trained professionals experienced in resolving all major USD Coin issues.\r\n\r\nFinal Thoughts\r\nUSD Coin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. 
Whether it's a USD Coin transaction not confirmed, your USD Coin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the USD Coin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "USD Coin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "USD Coin Customer Service Number +1-833-534-1729",
"source_title": "Auto-Encoding Variational Bayes",
"source_url": "http://arxiv.org/abs/1312.6114v10"
}
] |
https://paperswithcode.com/paper/a-study-on-passage-re-ranking-in-embedding
|
1804.08057
| null | null |
A Study on Passage Re-ranking in Embedding based Unsupervised Semantic Search
|
State-of-the-art approaches for (embedding based) unsupervised semantic
search exploit either compositional similarity (of a query and a passage) or
pair-wise word (or term) similarity (from the query and the passage). By
design, word based approaches do not incorporate similarity in the larger
context (query/passage), while compositional similarity based approaches are
usually unable to take advantage of the most important cues in the context. In
this paper we propose a new compositional similarity based approach, called
variable centroid vector (VCVB), that tries to address both of these
limitations. We also present results using a different type of compositional
similarity based approach by exploiting universal sentence embedding. We
provide empirical evaluation on two different benchmarks.
| null |
http://arxiv.org/abs/1804.08057v4
|
http://arxiv.org/pdf/1804.08057v4.pdf
| null |
[
"Md. Faisal Mahbub Chowdhury",
"Vijil Chenthamarakshan",
"Rishav Chakravarti",
"Alfio M. Gliozzo"
] |
[
"Passage Re-Ranking",
"Re-Ranking",
"Sentence",
"Sentence Embedding",
"Sentence-Embedding"
] | 2018-04-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/deep-generative-models-for-distribution
|
1805.11057
| null | null |
Deep Generative Models for Distribution-Preserving Lossy Compression
|
We propose and study the problem of distribution-preserving lossy
compression. Motivated by recent advances in extreme image compression which
allow to maintain artifact-free reconstructions even at very low bitrates, we
propose to optimize the rate-distortion tradeoff under the constraint that the
reconstructed samples follow the distribution of the training data. The
resulting compression system recovers both ends of the spectrum: On one hand,
at zero bitrate it learns a generative model of the data, and at high enough
bitrates it achieves perfect reconstruction. Furthermore, for intermediate
bitrates it smoothly interpolates between learning a generative model of the
training data and perfectly reconstructing the training samples. We study
several methods to approximately solve the proposed optimization problem,
including a novel combination of Wasserstein GAN and Wasserstein Autoencoder,
and present an extensive theoretical and empirical characterization of the
proposed compression systems.
|
We propose and study the problem of distribution-preserving lossy compression.
|
http://arxiv.org/abs/1805.11057v2
|
http://arxiv.org/pdf/1805.11057v2.pdf
|
NeurIPS 2018 12
|
[
"Michael Tschannen",
"Eirikur Agustsson",
"Mario Lucic"
] |
[
"Image Compression",
"Image Generation"
] | 2018-05-28T00:00:00 |
http://papers.nips.cc/paper/7833-deep-generative-models-for-distribution-preserving-lossy-compression
|
http://papers.nips.cc/paper/7833-deep-generative-models-for-distribution-preserving-lossy-compression.pdf
|
deep-generative-models-for-distribution-1
| null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "In today’s digital age, Dogecoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're trying to recover a lost Dogecoin wallet, knowing where to get help is essential. That’s why the Dogecoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Dogecoin Customer Support Number +1-833-534-1729\r\nDogecoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Dogecoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Dogecoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Dogecoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Dogecoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Dogecoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. 
Knowing how to recover a lost Dogecoin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Dogecoin Deposit Not Received\r\nIf someone has sent you Dogecoin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Dogecoin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Dogecoin Transaction Stuck or Pending\r\nSometimes your Dogecoin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Dogecoin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Dogecoin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Dogecoin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Dogecoin tech.\r\n\r\n24/7 Availability: Dogecoin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Dogecoin Support and Wallet Issues\r\nQ1: Can Dogecoin support help me recover stolen BTC?\r\nA: While Dogecoin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Dogecoin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Dogecoin’s official number (Dogecoin is decentralized), it connects you to trained professionals experienced in resolving all major Dogecoin issues.\r\n\r\nFinal Thoughts\r\nDogecoin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. 
Whether it's a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Dogecoin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Dogecoin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Dogecoin Customer Service Number +1-833-534-1729",
"source_title": "Generative Adversarial Networks",
"source_url": "https://arxiv.org/abs/1406.2661v1"
}
] |
https://paperswithcode.com/paper/flexible-and-accurate-inference-and-learning
|
1805.11051
| null | null |
Flexible and accurate inference and learning for deep generative models
|
We introduce a new approach to learning in hierarchical latent-variable
generative models called the "distributed distributional code Helmholtz
machine", which emphasises flexibility and accuracy in the inferential process.
In common with the original Helmholtz machine and later variational autoencoder
algorithms (but unlike adversarial methods) our approach learns an explicit
inference or "recognition" model to approximate the posterior distribution over
the latent variables. Unlike in these earlier methods, the posterior
representation is not limited to a narrow tractable parameterised form (nor is
it represented by samples). To train the generative and recognition models we
develop an extended wake-sleep algorithm inspired by the original Helmholtz
Machine. This makes it possible to learn hierarchical latent models with both
discrete and continuous variables, where an accurate posterior representation
is essential. We demonstrate that the new algorithm outperforms current
state-of-the-art methods on synthetic, natural image patch and the MNIST data
sets.
| null |
http://arxiv.org/abs/1805.11051v1
|
http://arxiv.org/pdf/1805.11051v1.pdf
|
NeurIPS 2018 12
|
[
"Eszter Vertes",
"Maneesh Sahani"
] |
[] | 2018-05-28T00:00:00 |
http://papers.nips.cc/paper/7671-flexible-and-accurate-inference-and-learning-for-deep-generative-models
|
http://papers.nips.cc/paper/7671-flexible-and-accurate-inference-and-learning-for-deep-generative-models.pdf
|
flexible-and-accurate-inference-and-learning-1
| null |
[] |
https://paperswithcode.com/paper/thermostat-assisted-continuously-tempered
|
1711.11511
| null | null |
Thermostat-assisted continuously-tempered Hamiltonian Monte Carlo for Bayesian learning
|
We propose a new sampling method, the thermostat-assisted
continuously-tempered Hamiltonian Monte Carlo, for Bayesian learning on large
datasets and multimodal distributions. It simulates the Nosé-Hoover dynamics
of a continuously-tempered Hamiltonian system built on the distribution of
interest. A significant advantage of this method is that it is not only able to
efficiently draw representative i.i.d. samples when the distribution contains
multiple isolated modes, but also capable of adaptively neutralising the noise
arising from mini-batches and maintaining accurate sampling. While the
properties of this method have been studied using synthetic distributions,
experiments on three real datasets also demonstrated the gain of performance
over several strong baselines with various types of neural networks plugged in.
|
We propose a new sampling method, the thermostat-assisted continuously-tempered Hamiltonian Monte Carlo, for Bayesian learning on large datasets and multimodal distributions.
|
http://arxiv.org/abs/1711.11511v5
|
http://arxiv.org/pdf/1711.11511v5.pdf
|
NeurIPS 2018 12
|
[
"Rui Luo",
"Jianhong Wang",
"Yaodong Yang",
"Zhanxing Zhu",
"Jun Wang"
] |
[] | 2017-11-30T00:00:00 |
http://papers.nips.cc/paper/8266-thermostat-assisted-continuously-tempered-hamiltonian-monte-carlo-for-bayesian-learning
|
http://papers.nips.cc/paper/8266-thermostat-assisted-continuously-tempered-hamiltonian-monte-carlo-for-bayesian-learning.pdf
|
thermostat-assisted-continuously-tempered-1
| null |
[] |
https://paperswithcode.com/paper/robust-unsupervised-domain-adaptation-for
|
1711.06114
| null | null |
Robust Unsupervised Domain Adaptation for Neural Networks via Moment Alignment
|
A novel approach for unsupervised domain adaptation for neural networks is proposed. It relies on metric-based regularization of the learning process. The metric-based regularization aims at domain-invariant latent feature representations by means of maximizing the similarity between domain-specific activation distributions. The proposed metric results from modifying an integral probability metric such that it becomes less translation-sensitive on a polynomial function space. The metric has an intuitive interpretation in the dual space as the sum of differences of higher order central moments of the corresponding activation distributions. Under appropriate assumptions on the input distributions, error minimization is proven for the continuous case. As demonstrated by an analysis of standard benchmark experiments for sentiment analysis, object recognition and digit recognition, the outlined approach is robust regarding parameter changes and achieves higher classification accuracies than comparable approaches. The source code is available at https://github.com/wzell/mann.
|
A novel approach for unsupervised domain adaptation for neural networks is proposed.
|
https://arxiv.org/abs/1711.06114v4
|
https://arxiv.org/pdf/1711.06114v4.pdf
| null |
[
"Werner Zellinger",
"Bernhard A. Moser",
"Thomas Grubinger",
"Edwin Lughofer",
"Thomas Natschläger",
"Susanne Saminger-Platz"
] |
[
"Domain Adaptation",
"Object Recognition",
"Sentiment Analysis",
"Translation",
"Unsupervised Domain Adaptation"
] | 2017-11-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/autoencoding-any-data-through-kernel
|
1805.11028
| null | null |
Autoencoding any Data through Kernel Autoencoders
|
This paper investigates a novel algorithmic approach to data representation based on kernel methods. Assuming that the observations lie in a Hilbert space X, the introduced Kernel Autoencoder (KAE) is the composition of mappings from vector-valued Reproducing Kernel Hilbert Spaces (vv-RKHSs) that minimizes the expected reconstruction error. Beyond a first extension of the autoencoding scheme to possibly infinite dimensional Hilbert spaces, KAE further makes it possible to autoencode any kind of data by choosing X to be itself a RKHS. A theoretical analysis of the model is carried out, providing a generalization bound, and shedding light on its connection with Kernel Principal Component Analysis. The proposed algorithms are then detailed at length: they crucially rely on the form taken by the minimizers, revealed by a dedicated Representer Theorem. Finally, numerical experiments on both simulated data and real labeled graphs (molecules) provide empirical evidence of the KAE performance.
| null |
https://arxiv.org/abs/1805.11028v3
|
https://arxiv.org/pdf/1805.11028v3.pdf
| null |
[
"Pierre Laforgue",
"Stephan Clémençon",
"Florence d'Alché-Buc"
] |
[] | 2018-05-28T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "In today’s digital age, Solana has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Solana transaction not confirmed, your Solana wallet not showing balance, or you're trying to recover a lost Solana wallet, knowing where to get help is essential. That’s why the Solana customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Solana Customer Support Number +1-833-534-1729\r\nSolana operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Solana Transaction Not Confirmed\r\nOne of the most common concerns is when a Solana transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Solana Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Solana wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Solana Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. 
Knowing how to recover a lost Solana wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Solana Deposit Not Received\r\nIf someone has sent you Solana but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Solana deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Solana Transaction Stuck or Pending\r\nSometimes your Solana transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Solana Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Solana wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Solana Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Solana tech.\r\n\r\n24/7 Availability: Solana doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Solana Support and Wallet Issues\r\nQ1: Can Solana support help me recover stolen BTC?\r\nA: While Solana transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Solana transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Solana’s official number (Solana is decentralized), it connects you to trained professionals experienced in resolving all major Solana issues.\r\n\r\nFinal Thoughts\r\nSolana is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Solana transaction not confirmed, your Solana wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Solana customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. 
Expert help is just a call away—+1-833-534-1729.",
"full_name": "Solana Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Solana Customer Service Number +1-833-534-1729",
"source_title": "Reducing the Dimensionality of Data with Neural Networks",
"source_url": "https://science.sciencemag.org/content/313/5786/504"
}
] |
https://paperswithcode.com/paper/online-influence-maximization-with-local
|
1805.11022
| null | null |
Online Influence Maximization with Local Observations
|
We consider an online influence maximization problem in which a decision
maker selects a node among a large number of possibilities and places a piece
of information at the node. The node transmits the information to some others
that are in the same connected component in a random graph. The goal of the
decision maker is to reach as many nodes as possible, with the added
complication that feedback is only available about the degree of the selected
node. Our main result shows that such local observations can be sufficient for
maximizing global influence in two broadly studied families of random graph
models: stochastic block models and Chung--Lu models. With this insight, we
propose a bandit algorithm that aims at maximizing local (and thus global)
influence, and provide its theoretical analysis in both the subcritical and
supercritical regimes of both considered models. Notably, our performance
guarantees show no explicit dependence on the total number of nodes in the
network, making our approach well-suited for large-scale applications.
| null |
http://arxiv.org/abs/1805.11022v1
|
http://arxiv.org/pdf/1805.11022v1.pdf
| null |
[
"Julia Olkhovskaya",
"Gergely Neu",
"Gábor Lugosi"
] |
[] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/syntactic-dependency-representations-in
|
1805.11461
| null | null |
Syntactic Dependency Representations in Neural Relation Classification
|
We investigate the use of different syntactic dependency representations in a
neural relation classification task and compare the CoNLL, Stanford Basic and
Universal Dependencies schemes. We further compare with a syntax-agnostic
approach and perform an error analysis in order to gain a better understanding
of the results.
| null |
http://arxiv.org/abs/1805.11461v1
|
http://arxiv.org/pdf/1805.11461v1.pdf
|
WS 2018 7
|
[
"Farhad Nooralahzadeh",
"Lilja Øvrelid"
] |
[
"Classification",
"General Classification",
"Relation",
"Relation Classification"
] | 2018-05-28T00:00:00 |
https://aclanthology.org/W18-2907
|
https://aclanthology.org/W18-2907.pdf
|
syntactic-dependency-representations-in-1
| null |
[] |
https://paperswithcode.com/paper/evolutionary-algorithms
|
1805.11014
| null | null |
Evolutionary Algorithms
|
Evolutionary algorithms (EAs) are population-based metaheuristics, originally
inspired by aspects of natural evolution. Modern varieties incorporate a broad
mixture of search mechanisms, and tend to blend inspiration from nature with
pragmatic engineering concerns; however, all EAs essentially operate by
maintaining a population of potential solutions and in some way artificially
'evolving' that population over time. Particularly well-known categories of EAs
include genetic algorithms (GAs), Genetic Programming (GP), and Evolution
Strategies (ES). EAs have proven very successful in practical applications,
particularly those requiring solutions to combinatorial problems. EAs are
highly flexible and can be configured to address any optimization task, without
the requirements for reformulation and/or simplification that would be needed
for other techniques. However, this flexibility goes hand in hand with a cost:
the tailoring of an EA's configuration and parameters, so as to provide robust
performance for a given class of tasks, is often a complex and time-consuming
process. This tailoring process is one of the many ongoing research areas
associated with EAs.
| null |
http://arxiv.org/abs/1805.11014v1
|
http://arxiv.org/pdf/1805.11014v1.pdf
| null |
[
"David W. Corne",
"Michael A. Lones"
] |
[
"Evolutionary Algorithms"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-sequential-embedding-approach-for-item
|
1805.11008
| null | null |
A Sequential Embedding Approach for Item Recommendation with Heterogeneous Attributes
|
Attributes, such as metadata and profile, carry useful information which in
principle can help improve accuracy in recommender systems. However, existing
approaches have difficulty in fully leveraging attribute information due to
practical challenges such as heterogeneity and sparseness. These approaches
also fail to combine recurrent neural networks which have recently shown
effectiveness in item recommendations in applications such as video and music
browsing. To overcome the challenges and to harvest the advantages of sequence
models, we present a novel approach, Heterogeneous Attribute Recurrent Neural
Networks (HA-RNN), which incorporates heterogeneous attributes and captures
sequential dependencies in \textit{both} items and attributes. HA-RNN extends
recurrent neural networks with 1) a hierarchical attribute combination input
layer and 2) an output attribute embedding layer. We conduct extensive
experiments on two large-scale datasets. The new approach shows significant
improvements over the state-of-the-art models. Our ablation experiments
demonstrate the effectiveness of the two components to address heterogeneous
attribute challenges including variable lengths and attribute sparseness. We
further investigate why sequence modeling works well by conducting exploratory
studies and show sequence models are more effective when data scale increases.
| null |
http://arxiv.org/abs/1805.11008v1
|
http://arxiv.org/pdf/1805.11008v1.pdf
| null |
[
"Kuan Liu",
"Xing Shi",
"Prem Natarajan"
] |
[
"Attribute",
"Recommendation Systems"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
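A hedged sketch of the idea behind HA-RNN's hierarchical attribute combination input layer, as described in the abstract: each attribute's (possibly variable-length) set of token embeddings is pooled, and the pooled attribute vectors are combined with the item embedding into a single per-step input. The mean-pooling, concatenation, dimensions, and example attributes below are illustrative assumptions; the paper's exact layer may differ.

```python
# Sketch: combining heterogeneous, variable-length attributes with an
# item embedding. Mean-pooling and concatenation are assumptions.
import numpy as np

rng = np.random.default_rng(0)
dim = 8

item_emb = rng.normal(size=dim)               # embedding of the item itself
attributes = {                                # heterogeneous attributes
    "genre":  rng.normal(size=(3, dim)),      # 3 genre tokens (variable length)
    "artist": rng.normal(size=(1, dim)),      # 1 artist token
}

# Level 1: pool within each attribute, which handles variable lengths
# and missing/sparse values uniformly.
pooled = [tokens.mean(axis=0) for tokens in attributes.values()]

# Level 2: combine the pooled attribute vectors with the item embedding
# to form one fixed-size input vector for the recurrent step.
step_input = np.concatenate([item_emb] + pooled)

print(step_input.shape)
```

Pooling within each attribute first is what keeps the recurrent input fixed-size regardless of how many tokens each attribute carries.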
https://paperswithcode.com/paper/soft-layer-specific-multi-task-summarization
|
1805.11004
| null | null |
Soft Layer-Specific Multi-Task Summarization with Entailment and Question Generation
|
An accurate abstractive summary of a document should contain all its salient
information and should be logically entailed by the input document. We improve
these important aspects of abstractive summarization via multi-task learning
with the auxiliary tasks of question generation and entailment generation,
where the former teaches the summarization model how to look for salient
questioning-worthy details, and the latter teaches the model how to rewrite a
summary which is a directed-logical subset of the input document. We also
propose novel multi-task architectures with high-level (semantic)
layer-specific sharing across multiple encoder and decoder layers of the three
tasks, as well as soft-sharing mechanisms (and show performance ablations and
analysis examples of each contribution). Overall, we achieve statistically
significant improvements over the state-of-the-art on both the CNN/DailyMail
and Gigaword datasets, as well as on the DUC-2002 transfer setup. We also
present several quantitative and qualitative analysis studies of our model's
learned saliency and entailment skills.
| null |
http://arxiv.org/abs/1805.11004v1
|
http://arxiv.org/pdf/1805.11004v1.pdf
|
ACL 2018 7
|
[
"Han Guo",
"Ramakanth Pasunuru",
"Mohit Bansal"
] |
[
"Abstractive Text Summarization",
"Decoder",
"Multi-Task Learning",
"Question Generation",
"Question-Generation"
] | 2018-05-28T00:00:00 |
https://aclanthology.org/P18-1064
|
https://aclanthology.org/P18-1064.pdf
|
soft-layer-specific-multi-task-summarization-1
| null |
[] |
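The soft-sharing mechanism named in the abstract can be illustrated with a minimal sketch: rather than hard-tying layer parameters across tasks, corresponding parameters are kept close by adding a distance penalty to the training loss. The L2 form, the matrix shapes, and the penalty weight below are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of soft parameter sharing: penalise the distance between
# corresponding layer parameters of two tasks instead of tying them.
import numpy as np

def soft_sharing_penalty(theta_main, theta_aux, weight=0.1):
    """Weighted squared L2 distance between corresponding parameters."""
    return weight * float(np.sum((theta_main - theta_aux) ** 2))

theta_summ = np.ones((4, 4))       # e.g. one summarisation encoder layer
theta_qg = np.full((4, 4), 1.5)    # corresponding question-generation layer

penalty = soft_sharing_penalty(theta_summ, theta_qg)
print(penalty)   # 0.1 * 16 * 0.25 = 0.4
```

Applying such a penalty only to selected high-level layers, as the abstract's layer-specific sharing suggests, lets low-level representations stay task-specific while semantic layers are encouraged to agree.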