The dataset's column schema (ranges are the min–max observed in the data; ⌀ marks a nullable column):

| column | dtype | observed range | nullable |
|---|---|---|---|
| paper_url | string | 35–81 chars | |
| arxiv_id | string | 6–35 chars | ⌀ |
| nips_id | float64 | | |
| openreview_id | string | 9–93 chars | ⌀ |
| title | string | 1–1.02k chars | ⌀ |
| abstract | string | 0–56.5k chars | ⌀ |
| short_abstract | string | 0–1.95k chars | ⌀ |
| url_abs | string | 16–996 chars | |
| url_pdf | string | 16–996 chars | ⌀ |
| proceeding | string | 7–1.03k chars | ⌀ |
| authors | list | 0–3.31k items | |
| tasks | list | 0–147 items | |
| date | timestamp[ns] | 1951-09-01 – 2222-12-22 | ⌀ |
| conference_url_abs | string | 16–199 chars | ⌀ |
| conference_url_pdf | string | 21–200 chars | ⌀ |
| conference | string | 2–47 chars | ⌀ |
| reproduces_paper | string (22 classes) | | |
| methods | list | 0–7.5k items | |
https://paperswithcode.com/paper/adversarial-examples-in-remote-sensing
|
1805.10997
| null | null |
Adversarial Examples in Remote Sensing
|
This paper considers attacks against machine learning algorithms used in
remote sensing applications, a domain that presents a suite of challenges that
are not fully addressed by current research focused on natural image data such
as ImageNet. In particular, we present a new study of adversarial examples in
the context of satellite image classification problems. Using a recently
curated data set and associated classifier, we provide a preliminary analysis
of adversarial examples in settings where the targeted classifier is permitted
multiple observations of the same location over time. While our experiments to
date are purely digital, our problem setup explicitly incorporates a number of
practical considerations that a real-world attacker would need to take into
account when mounting a physical attack. We hope this work provides a useful
starting point for future studies of potential vulnerabilities in this setting.
| null |
http://arxiv.org/abs/1805.10997v1
|
http://arxiv.org/pdf/1805.10997v1.pdf
| null |
[
"Wojciech Czaja",
"Neil Fendley",
"Michael Pekala",
"Christopher Ratto",
"I-Jeng Wang"
] |
[
"image-classification",
"Image Classification",
"Satellite Image Classification"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
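The flattened records above and below follow the column schema in the header. As a minimal sketch, the first record can be rebuilt as a plain Python dict (values copied from the row above; `None` stands for the null/⌀ cells, and the abstract is truncated here for brevity), together with a small case-insensitive task filter:

```python
# First record of the dump as a plain dict, per the header schema.
# None marks the null (⌀) cells; the abstract is truncated for brevity.
row = {
    "paper_url": "https://paperswithcode.com/paper/adversarial-examples-in-remote-sensing",
    "arxiv_id": "1805.10997",
    "nips_id": None,
    "openreview_id": None,
    "title": "Adversarial Examples in Remote Sensing",
    "abstract": "This paper considers attacks against machine learning "
                "algorithms used in remote sensing applications...",  # truncated
    "short_abstract": None,
    "url_abs": "http://arxiv.org/abs/1805.10997v1",
    "url_pdf": "http://arxiv.org/pdf/1805.10997v1.pdf",
    "proceeding": None,
    "authors": ["Wojciech Czaja", "Neil Fendley", "Michael Pekala",
                "Christopher Ratto", "I-Jeng Wang"],
    "tasks": ["image-classification", "Image Classification",
              "Satellite Image Classification"],
    "date": "2018-05-28T00:00:00",
    "conference_url_abs": None,
    "conference_url_pdf": None,
    "conference": None,
    "reproduces_paper": None,
    "methods": [],
}

def has_task(record, task):
    """Case-insensitive membership test on a record's `tasks` list."""
    return task.lower() in (t.lower() for t in record["tasks"])

print(has_task(row, "satellite image classification"))  # prints True
```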
https://paperswithcode.com/paper/superpixel-based-segmentation-and
|
1710.07390
| null | null |
Superpixel Based Segmentation and Classification of Polyps in Wireless Capsule Endoscopy
|
Wireless Capsule Endoscopy (WCE) is a relatively new technology to record the
entire GI trace, in vivo. The large amounts of frames captured during an
examination cause difficulties for physicians to review all these frames. The
need for reducing the reviewing time using some intelligent methods has been a
challenge. Polyps are considered as growing tissues on the surface of
intestinal tract not inside of an organ. Most polyps are not cancerous, but if
one becomes larger than a centimeter, it can turn into cancer by great chance.
The WCE frames provide the early stage possibility for detection of polyps.
Here, the application of simple linear iterative clustering (SLIC) superpixel
for segmentation of polyps in WCE frames is evaluated. Different SLIC
superpixel numbers are examined to find the highest sensitivity for detection
of polyps. The SLIC superpixel segmentation is promising to improve the results
of previous studies. Finally, the superpixels were classified using a support
vector machine (SVM) by extracting some texture and color features. The
classification results showed a sensitivity of 91%.
| null |
http://arxiv.org/abs/1710.07390v2
|
http://arxiv.org/pdf/1710.07390v2.pdf
| null |
[
"Omid Haji Maghsoudi"
] |
[
"Clustering",
"General Classification",
"Segmentation",
"Sensitivity",
"Superpixels"
] | 2017-10-20T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/long-term-large-scale-mapping-and
|
1805.10994
| null | null |
Long-term Large-scale Mapping and Localization Using maplab
|
This paper discusses a large-scale and long-term mapping and localization
scenario using the maplab open-source framework. We present a brief overview of
the specific algorithms in the system that enable building a consistent map
from multiple sessions. We then demonstrate that such a map can be reused even
a few months later for efficient 6-DoF localization and also new trajectories
can be registered within the existing 3D model. The datasets presented in this
paper are made publicly available.
|
This paper discusses a large-scale and long-term mapping and localization scenario using the maplab open-source framework.
|
http://arxiv.org/abs/1805.10994v1
|
http://arxiv.org/pdf/1805.10994v1.pdf
| null |
[
"Marcin Dymczyk",
"Marius Fehr",
"Thomas Schneider",
"Roland Siegwart"
] |
[] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/resolving-event-coreference-with-supervised
|
1805.10985
| null | null |
Resolving Event Coreference with Supervised Representation Learning and Clustering-Oriented Regularization
|
We present an approach to event coreference resolution by developing a
general framework for clustering that uses supervised representation learning.
We propose a neural network architecture with novel Clustering-Oriented
Regularization (CORE) terms in the objective function. These terms encourage
the model to create embeddings of event mentions that are amenable to
clustering. We then use agglomerative clustering on these embeddings to build
event coreference chains. For both within- and cross-document coreference on
the ECB+ corpus, our model obtains better results than models that require
significantly more pre-annotated information. This work provides insight and
motivating results for a new general approach to solving coreference and
clustering problems with representation learning.
|
This work provides insight and motivating results for a new general approach to solving coreference and clustering problems with representation learning.
|
http://arxiv.org/abs/1805.10985v1
|
http://arxiv.org/pdf/1805.10985v1.pdf
|
SEMEVAL 2018 6
|
[
"Kian Kenyon-Dean",
"Jackie Chi Kit Cheung",
"Doina Precup"
] |
[
"Clustering",
"coreference-resolution",
"Coreference Resolution",
"Event Coreference Resolution",
"Representation Learning"
] | 2018-05-28T00:00:00 |
https://aclanthology.org/S18-2001
|
https://aclanthology.org/S18-2001.pdf
|
resolving-event-coreference-with-supervised-1
| null |
[] |
https://paperswithcode.com/paper/superpixels-based-marker-tracking-vs-hue
|
1710.06473
| null | null |
Superpixels Based Marker Tracking Vs. Hue Thresholding In Rodent Biomechanics Application
|
Examining locomotion has improved our basic understanding of motor control
and aided in treating motor impairment. Mice and rats are premier models of
human disease and increasingly the model systems of choice for basic
neuroscience. High frame rates (250 Hz) are needed to quantify the kinematics
of these running rodents. Manual tracking, especially for multiple markers,
becomes time-consuming and impossible for large sample sizes. Therefore, the
need for automatic segmentation of these markers has grown in recent years. We
propose two methods to segment and track these markers: first, using SLIC
superpixels segmentation with a tracker based on position, speed, shape, and
color information of the segmented region in the previous frame; second, using
a thresholding on hue channel following up with the same tracker. The
comparison showed that the SLIC superpixels method was superior because the
segmentation was more reliable and based on both color and spatial information.
| null |
http://arxiv.org/abs/1710.06473v4
|
http://arxiv.org/pdf/1710.06473v4.pdf
| null |
[
"Omid Haji Maghsoudi",
"Annie Vahedipour Tabrizi",
"Benjamin Robertson",
"Andrew Spence"
] |
[
"Position",
"Segmentation",
"Superpixels"
] | 2017-10-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/sacrificing-accuracy-for-reduced-computation
|
1805.10982
| null | null |
Dynamically Sacrificing Accuracy for Reduced Computation: Cascaded Inference Based on Softmax Confidence
|
We study the tradeoff between computational effort and classification accuracy in a cascade of deep neural networks. During inference, the user sets the acceptable accuracy degradation which then automatically determines confidence thresholds for the intermediate classifiers. As soon as the confidence threshold is met, inference terminates immediately without having to compute the output of the complete network. Confidence levels are derived directly from the softmax outputs of intermediate classifiers, as we do not train special decision functions. We show that using a softmax output as a confidence measure in a cascade of deep neural networks leads to a reduction of 15%-50% in the number of MAC operations while degrading the classification accuracy by roughly 1%. Our method can be easily incorporated into pre-trained non-cascaded architectures, as we exemplify on ResNet. Our main contribution is a method that dynamically adjusts the tradeoff between accuracy and computation without retraining the model.
|
We study the tradeoff between computational effort and classification accuracy in a cascade of deep neural networks.
|
https://arxiv.org/abs/1805.10982v2
|
https://arxiv.org/pdf/1805.10982v2.pdf
| null |
[
"Konstantin Berestizshevsky",
"Guy Even"
] |
[] | 2018-05-28T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)",
"full_name": "Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Average Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116",
"description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.",
"full_name": "Batch Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Batch Normalization",
"source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"source_url": "http://arxiv.org/abs/1502.03167v3"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L75",
"description": "A **Bottleneck Residual Block** is a variant of the [residual block](https://paperswithcode.com/method/residual-block) that utilises 1x1 convolutions to create a bottleneck. The use of a bottleneck reduces the number of parameters and matrix multiplications. The idea is to make residual blocks as thin as possible to increase depth and have less parameters. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture, and are used as part of deeper ResNets such as ResNet-50 and ResNet-101.",
"full_name": "Bottleneck Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Bottleneck Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/baa592b215804927e28638f6a7f3318cbc411d49/torchvision/models/resnet.py#L157",
"description": "**Global Average Pooling** is a pooling operation designed to replace fully connected layers in classical CNNs. The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers on top of the feature maps, we take the average of each feature map, and the resulting vector is fed directly into the [softmax](https://paperswithcode.com/method/softmax) layer. \r\n\r\nOne advantage of global [average pooling](https://paperswithcode.com/method/average-pooling) over the fully connected layers is that it is more native to the [convolution](https://paperswithcode.com/method/convolution) structure by enforcing correspondences between feature maps and categories. Thus the feature maps can be easily interpreted as categories confidence maps. Another advantage is that there is no parameter to optimize in the global average pooling thus overfitting is avoided at this layer. Furthermore, global average pooling sums out the spatial information, thus it is more robust to spatial translations of the input.",
"full_name": "Global Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Global Average Pooling",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L35",
"description": "**Residual Blocks** are skip-connection blocks that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture.\r\n \r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$. The $\\mathcal{F}({x})$ acts like a residual, hence the name 'residual block'.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers. Having skip connections allows the network to more easily learn identity-like mappings.\r\n\r\nNote that in practice, [Bottleneck Residual Blocks](https://paperswithcode.com/method/bottleneck-residual-block) are used for deeper ResNets, such as ResNet-50 and ResNet-101, as these bottleneck blocks are less computationally intensive.",
"full_name": "Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/0adb5843766092fba584791af76383125fd0d01c/torch/nn/init.py#L389",
"description": "**Kaiming Initialization**, or **He Initialization**, is an initialization method for neural networks that takes into account the non-linearity of activation functions, such as [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nA proper initialization method should avoid reducing or magnifying the magnitudes of input signals exponentially. Using a derivation they work out that the condition to stop this happening is:\r\n\r\n$$\\frac{1}{2}n\\_{l}\\text{Var}\\left[w\\_{l}\\right] = 1 $$\r\n\r\nThis implies an initialization scheme of:\r\n\r\n$$ w\\_{l} \\sim \\mathcal{N}\\left(0, 2/n\\_{l}\\right)$$\r\n\r\nThat is, a zero-centered Gaussian with standard deviation of $\\sqrt{2/{n}\\_{l}}$ (variance shown in equation above). Biases are initialized at $0$.",
"full_name": "Kaiming Initialization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Initialization** methods are used to initialize the weights in a neural network. Below can you find a continuously updating list of initialization methods.",
"name": "Initialization",
"parent": null
},
"name": "Kaiming Initialization",
"source_title": "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification",
"source_url": "http://arxiv.org/abs/1502.01852v1"
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
    {
        "code_snippet_url": "",
        "description": "**Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [Residual Blocks](https://paperswithcode.com/method/residual-block) on top of each other to form a network: e.g. a ResNet-50 has fifty layers using these blocks.",
        "full_name": "Residual Network",
        "introduced_year": 2000,
        "main_collection": {
            "area": "Computer Vision",
            "description": "**Convolutional Neural Networks** are used to extract features from images (and videos), employing convolutions as their primary operator. Below you can find a continuously updating list of convolutional neural networks.",
            "name": "Convolutional Neural Networks",
            "parent": "Image Models"
        },
        "name": "ResNet",
        "source_title": "Deep Residual Learning for Image Recognition",
        "source_url": "http://arxiv.org/abs/1512.03385v1"
    }
] |
https://paperswithcode.com/paper/adaptive-neural-network-classifier-for
|
1805.10981
| null | null |
Adaptive neural network classifier for decoding MEG signals
|
Convolutional Neural Networks (CNN) outperform traditional classification
methods in many domains. Recently these methods have gained attention in
neuroscience and particularly in brain-computer interface (BCI) community.
Here, we introduce a CNN optimized for classification of brain states from
magnetoencephalographic (MEG) measurements. Our CNN design is based on a
generative model of the electromagnetic (EEG and MEG) brain signals and is
readily interpretable in neurophysiological terms. We show here that the
proposed network is able to decode event-related responses as well as
modulations of oscillatory brain activity and that it outperforms more complex
neural networks and traditional classifiers used in the field. Importantly, the
model is robust to inter-individual differences and can successfully generalize
to new subjects in offline and online classification.
| null |
http://arxiv.org/abs/1805.10981v2
|
http://arxiv.org/pdf/1805.10981v2.pdf
| null |
[
"Ivan Zubarev",
"Rasmus Zetter",
"Hanna-Leena Halme",
"Lauri Parkkonen"
] |
[
"Brain Computer Interface",
"Classification",
"EEG",
"Electroencephalogram (EEG)",
"General Classification"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/nonlinear-supervised-dimensionality-reduction
|
1710.07120
| null | null |
Nonlinear Supervised Dimensionality Reduction via Smooth Regular Embeddings
|
The recovery of the intrinsic geometric structures of data collections is an
important problem in data analysis. Supervised extensions of several manifold
learning approaches have been proposed in the recent years. Meanwhile, existing
methods primarily focus on the embedding of the training data, and the
generalization of the embedding to initially unseen test data is rather
ignored. In this work, we build on recent theoretical results on the
generalization performance of supervised manifold learning algorithms.
Motivated by these performance bounds, we propose a supervised manifold
learning method that computes a nonlinear embedding while constructing a smooth
and regular interpolation function that extends the embedding to the whole data
space in order to achieve satisfactory generalization. The embedding and the
interpolator are jointly learnt such that the Lipschitz regularity of the
interpolator is imposed while ensuring the separation between different
classes. Experimental results on several image data sets show that the proposed
method outperforms traditional classifiers and the supervised dimensionality
reduction algorithms in comparison in terms of classification accuracy in most
settings.
| null |
http://arxiv.org/abs/1710.07120v2
|
http://arxiv.org/pdf/1710.07120v2.pdf
| null |
[
"Cem Ornek",
"Elif Vural"
] |
[
"Dimensionality Reduction",
"Supervised dimensionality reduction"
] | 2017-10-19T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/glac-net-glocal-attention-cascading-networks
|
1805.10973
| null | null |
GLAC Net: GLocal Attention Cascading Networks for Multi-image Cued Story Generation
|
The task of multi-image cued story generation, such as visual storytelling
dataset (VIST) challenge, is to compose multiple coherent sentences from a
given sequence of images. The main difficulty is how to generate image-specific
sentences within the context of overall images. Here we propose a deep learning
network model, GLAC Net, that generates visual stories by combining
global-local (glocal) attention and context cascading mechanisms. The model
incorporates two levels of attention, i.e., overall encoding level and image
feature level, to construct image-dependent sentences. While standard attention
configuration needs a large number of parameters, the GLAC Net implements them
in a very simple way via hard connections from the outputs of encoders or image
features onto the sentence generators. The coherency of the generated story is
further improved by conveying (cascading) the information of the previous
sentence to the next sentence serially. We evaluate the performance of the GLAC
Net on the visual storytelling dataset (VIST) and achieve very competitive
results compared to the state-of-the-art techniques. Our code and pre-trained
models are available here.
|
The task of multi-image cued story generation, such as visual storytelling dataset (VIST) challenge, is to compose multiple coherent sentences from a given sequence of images.
|
http://arxiv.org/abs/1805.10973v3
|
http://arxiv.org/pdf/1805.10973v3.pdf
| null |
[
"Taehyeong Kim",
"Min-Oh Heo",
"Seonil Son",
"Kyoung-Wha Park",
"Byoung-Tak Zhang"
] |
[
"Sentence",
"Story Generation",
"Visual Storytelling"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/near-lossless-binarization-of-word-embeddings
|
1803.09065
| null | null |
Near-lossless Binarization of Word Embeddings
|
Word embeddings are commonly used as a starting point in many NLP models to
achieve state-of-the-art performances. However, with a large vocabulary and
many dimensions, these floating-point representations are expensive both in
terms of memory and calculations which makes them unsuitable for use on
low-resource devices. The method proposed in this paper transforms real-valued
embeddings into binary embeddings while preserving semantic information,
requiring only 128 or 256 bits for each vector. This leads to a small memory
footprint and fast vector operations. The model is based on an autoencoder
architecture, which also allows to reconstruct original vectors from the binary
ones. Experimental results on semantic similarity, text classification and
sentiment analysis tasks show that the binarization of word embeddings only
leads to a loss of ~2% in accuracy while vector size is reduced by 97%.
Furthermore, a top-k benchmark demonstrates that using these binary vectors is
30 times faster than using real-valued vectors.
|
Word embeddings are commonly used as a starting point in many NLP models to achieve state-of-the-art performances.
|
http://arxiv.org/abs/1803.09065v3
|
http://arxiv.org/pdf/1803.09065v3.pdf
| null |
[
"Julien Tissier",
"Christophe Gravier",
"Amaury Habrard"
] |
[
"Binarization",
"Semantic Similarity",
"Semantic Textual Similarity",
"Sentiment Analysis",
"text-classification",
"Text Classification",
"Word Embeddings"
] | 2018-03-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/lifelong-learning-of-spatiotemporal
|
1805.10966
| null | null |
Lifelong Learning of Spatiotemporal Representations with Dual-Memory Recurrent Self-Organization
|
Artificial autonomous agents and robots interacting in complex environments
are required to continually acquire and fine-tune knowledge over sustained
periods of time. The ability to learn from continuous streams of information is
referred to as lifelong learning and represents a long-standing challenge for
neural network models due to catastrophic forgetting. Computational models of
lifelong learning typically alleviate catastrophic forgetting in experimental
scenarios with given datasets of static images and limited complexity, thereby
differing significantly from the conditions artificial agents are exposed to.
In more natural settings, sequential information may become progressively
available over time and access to previous experience may be restricted. In
this paper, we propose a dual-memory self-organizing architecture for lifelong
learning scenarios. The architecture comprises two growing recurrent networks
with the complementary tasks of learning object instances (episodic memory) and
categories (semantic memory). Both growing networks can expand in response to
novel sensory experience: the episodic memory learns fine-grained
spatiotemporal representations of object instances in an unsupervised fashion
while the semantic memory uses task-relevant signals to regulate structural
plasticity levels and develop more compact representations from episodic
experience. For the consolidation of knowledge in the absence of external
sensory input, the episodic memory periodically replays trajectories of neural
reactivations. We evaluate the proposed model on the CORe50 benchmark dataset
for continuous object recognition, showing that we significantly outperform
current methods of lifelong learning in three different incremental learning
scenarios
|
Both growing networks can expand in response to novel sensory experience: the episodic memory learns fine-grained spatiotemporal representations of object instances in an unsupervised fashion while the semantic memory uses task-relevant signals to regulate structural plasticity levels and develop more compact representations from episodic experience.
|
http://arxiv.org/abs/1805.10966v4
|
http://arxiv.org/pdf/1805.10966v4.pdf
| null |
[
"German I. Parisi",
"Jun Tani",
"Cornelius Weber",
"Stefan Wermter"
] |
[
"Active Learning",
"Continuous Object Recognition",
"Incremental Learning",
"Lifelong learning",
"Object Recognition"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/lipschitz-regularity-of-deep-neural-networks
|
1805.10965
| null | null |
Lipschitz regularity of deep neural networks: analysis and efficient estimation
|
Deep neural networks are notorious for being sensitive to small well-chosen perturbations, and estimating the regularity of such architectures is of utmost importance for safe and robust practical applications. In this paper, we investigate one of the key characteristics to assess the regularity of such methods: the Lipschitz constant of deep learning architectures. First, we show that, even for two layer neural networks, the exact computation of this quantity is NP-hard and state-of-art methods may significantly overestimate it. Then, we both extend and improve previous estimation methods by providing AutoLip, the first generic algorithm for upper bounding the Lipschitz constant of any automatically differentiable function. We provide a power method algorithm working with automatic differentiation, allowing efficient computations even on large convolutions. Second, for sequential neural networks, we propose an improved algorithm named SeqLip that takes advantage of the linear computation graph to split the computation per pair of consecutive layers. Third we propose heuristics on SeqLip in order to tackle very large networks. Our experiments show that SeqLip can significantly improve on the existing upper bounds. Finally, we provide an implementation of AutoLip in the PyTorch environment that may be used to better estimate the robustness of a given neural network to small perturbations or regularize it using more precise Lipschitz estimations.
|
First, we show that, even for two layer neural networks, the exact computation of this quantity is NP-hard and state-of-art methods may significantly overestimate it.
|
https://arxiv.org/abs/1805.10965v2
|
https://arxiv.org/pdf/1805.10965v2.pdf
|
NeurIPS 2018 12
|
[
"Kevin Scaman",
"Aladin Virmaux"
] |
[] | 2018-05-28T00:00:00 |
http://papers.nips.cc/paper/7640-lipschitz-regularity-of-deep-neural-networks-analysis-and-efficient-estimation
|
http://papers.nips.cc/paper/7640-lipschitz-regularity-of-deep-neural-networks-analysis-and-efficient-estimation.pdf
|
lipschitz-regularity-of-deep-neural-networks-1
| null |
[] |
https://paperswithcode.com/paper/denoising-distant-supervision-for-relation
|
1805.10959
| null | null |
Denoising Distant Supervision for Relation Extraction via Instance-Level Adversarial Training
|
Existing neural relation extraction (NRE) models rely on distant supervision
and suffer from wrong labeling problems. In this paper, we propose a novel
adversarial training mechanism over instances for relation extraction to
alleviate the noise issue. As compared with previous denoising methods, our
proposed method can better discriminate those informative instances from noisy
ones. Our method is also efficient and flexible to be applied to various NRE
architectures. As shown in the experiments on a large-scale benchmark dataset
in relation extraction, our denoising method can effectively filter out noisy
instances and achieve significant improvements as compared with the
state-of-the-art models.
| null |
http://arxiv.org/abs/1805.10959v1
|
http://arxiv.org/pdf/1805.10959v1.pdf
| null |
[
"Xu Han",
"Zhiyuan Liu",
"Maosong Sun"
] |
[
"Denoising",
"Relation",
"Relation Extraction"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/discrete-flow-posteriors-for-variational
|
1805.10958
| null |
HyxOIoRqFQ
|
Discrete flow posteriors for variational inference in discrete dynamical systems
|
Each training step for a variational autoencoder (VAE) requires us to sample
from the approximate posterior, so we usually choose simple (e.g. factorised)
approximate posteriors in which sampling is an efficient computation that fully
exploits GPU parallelism. However, such simple approximate posteriors are often
insufficient, as they eliminate statistical dependencies in the posterior.
While it is possible to use normalizing flow approximate posteriors for
continuous latents, some problems have discrete latents and strong statistical
dependencies. The most natural approach to model these dependencies is an
autoregressive distribution, but sampling from such distributions is inherently
sequential and thus slow. We develop a fast, parallel sampling procedure for
autoregressive distributions based on fixed-point iterations which enables
efficient and accurate variational inference in discrete state-space latent
variable dynamical systems. To optimize the variational bound, we considered
two ways to evaluate probabilities: inserting the relaxed samples directly into
the pmf for the discrete distribution, or converting to continuous logistic
latent variables and interpreting the K-step fixed-point iterations as a
normalizing flow. We found that converting to continuous latent variables gave
considerable additional scope for mismatch between the true and approximate
posteriors, which resulted in biased inferences, we thus used the former
approach. Using our fast sampling procedure, we were able to realize the
benefits of correlated posteriors, including accurate uncertainty estimates for
one cell, and accurate connectivity estimates for multiple cells, in an order
of magnitude less time.
| null |
http://arxiv.org/abs/1805.10958v1
|
http://arxiv.org/pdf/1805.10958v1.pdf
|
ICLR 2019 5
|
[
"Laurence Aitchison",
"Vincent Adam",
"Srinivas C. Turaga"
] |
[
"GPU",
"Variational Inference"
] | 2018-05-28T00:00:00 |
https://openreview.net/forum?id=HyxOIoRqFQ
|
https://openreview.net/pdf?id=HyxOIoRqFQ
|
discrete-flow-posteriors-for-variational-1
| null |
[
{
"code_snippet_url": "",
"description": "In today’s digital age, Solana has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Solana transaction not confirmed, your Solana wallet not showing balance, or you're trying to recover a lost Solana wallet, knowing where to get help is essential. That’s why the Solana customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Solana Customer Support Number +1-833-534-1729\r\nSolana operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Solana Transaction Not Confirmed\r\nOne of the most common concerns is when a Solana transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Solana Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Solana wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Solana Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. 
Knowing how to recover a lost Solana wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Solana Deposit Not Received\r\nIf someone has sent you Solana but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Solana deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Solana Transaction Stuck or Pending\r\nSometimes your Solana transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Solana Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Solana wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Solana Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Solana tech.\r\n\r\n24/7 Availability: Solana doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Solana Support and Wallet Issues\r\nQ1: Can Solana support help me recover stolen BTC?\r\nA: While Solana transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Solana transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Solana’s official number (Solana is decentralized), it connects you to trained professionals experienced in resolving all major Solana issues.\r\n\r\nFinal Thoughts\r\nSolana is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a Solana transaction not confirmed, your Solana wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Solana customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. 
Expert help is just a call away—+1-833-534-1729.",
"full_name": "Solana Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Solana Customer Service Number +1-833-534-1729",
"source_title": "Reducing the Dimensionality of Data with Neural Networks",
"source_url": "https://science.sciencemag.org/content/313/5786/504"
}
] |
https://paperswithcode.com/paper/temporal-event-knowledge-acquisition-via
|
1805.10956
| null | null |
Temporal Event Knowledge Acquisition via Identifying Narratives
|
Inspired by the double temporality characteristic of narrative texts, we
propose a novel approach for acquiring rich temporal "before/after" event
knowledge across sentences in narrative stories. The double temporality states
that a narrative story often describes a sequence of events following the
chronological order and therefore, the temporal order of events matches with
their textual order. We explored narratology principles and built a weakly
supervised approach that identifies 287k narrative paragraphs from three large
text corpora. We then extracted rich temporal event knowledge from these
narrative paragraphs. Such event knowledge is shown useful to improve temporal
relation classification and outperform several recent neural network models on
the narrative cloze task.
| null |
http://arxiv.org/abs/1805.10956v1
|
http://arxiv.org/pdf/1805.10956v1.pdf
|
ACL 2018 7
|
[
"Wenlin Yao",
"Ruihong Huang"
] |
[
"General Classification",
"Relation Classification",
"Temporal Relation Classification"
] | 2018-05-28T00:00:00 |
https://aclanthology.org/P18-1050
|
https://aclanthology.org/P18-1050.pdf
|
temporal-event-knowledge-acquisition-via-1
| null |
[] |
https://paperswithcode.com/paper/fusion-of-methods-based-on-minutiae-ridges
|
1805.10949
| null | null |
Fusion of Methods Based on Minutiae, Ridges and Pores for Robust Fingerprint Recognition
|
The use of physical and behavioral characteristics for human identification
is known as biometrics. Among the many biometrics traits available, the
fingerprint is the most widely used. The fingerprint identification is based on
the impression patterns, as the pattern of ridges and minutiae, characteristics
of first and second levels respectively. The current identification systems use
these two levels of fingerprint features due to the low cost of the sensors.
However, due the recent advances in sensor technology, it is possible to use
third level features present within the ridges, such as the perspiration pores.
Recent studies have shown that the use of third-level features can increase
security and fraud protection in biometric systems, since they are difficult to
reproduce. In addition, recent researches have also focused on multibiometrics
recognition due to its many advantages. The goal of this work was to apply
fusion techniques for fingerprint recognition in order to combine minutiae,
ridges and pore-based methods and, thus, provide more robust biometrics
recognition systems. We evaluated isotropic-based and adaptive-based automatic
pore extraction methods and the fusion of pore-based method with the
identification methods based on minutiae and ridges. The experiments were
performed on the public database PolyU HRF and showed a reduction of
approximately 16% in the Equal Error Rate compared to the best results obtained
by the methods individually.
| null |
http://arxiv.org/abs/1805.10949v1
|
http://arxiv.org/pdf/1805.10949v1.pdf
| null |
[
"Lucas Alexandre Ramos",
"Aparecido Nilceu Marana"
] |
[] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/implicit-ridge-regularization-provided-by-the
|
1805.10939
| null | null |
Optimal ridge penalty for real-world high-dimensional data can be zero or negative due to the implicit ridge regularization
|
A conventional wisdom in statistical learning is that large models require strong regularization to prevent overfitting. Here we show that this rule can be violated by linear regression in the underdetermined $n\ll p$ situation under realistic conditions. Using simulations and real-life high-dimensional data sets, we demonstrate that an explicit positive ridge penalty can fail to provide any improvement over the minimum-norm least squares estimator. Moreover, the optimal value of ridge penalty in this situation can be negative. This happens when the high-variance directions in the predictor space can predict the response variable, which is often the case in the real-world high-dimensional data. In this regime, low-variance directions provide an implicit ridge regularization and can make any further positive ridge penalty detrimental. We prove that augmenting any linear model with random covariates and using minimum-norm estimator is asymptotically equivalent to adding the ridge penalty. We use a spiked covariance model as an analytically tractable example and prove that the optimal ridge penalty in this case is negative when $n\ll p$.
|
We use a spiked covariance model as an analytically tractable example and prove that the optimal ridge penalty in this case is negative when $n\ll p$.
|
https://arxiv.org/abs/1805.10939v4
|
https://arxiv.org/pdf/1805.10939v4.pdf
| null |
[
"Dmitry Kobak",
"Jonathan Lomond",
"Benoit Sanchez"
] |
[] | 2018-05-28T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Linear Regression** is a method for modelling a relationship between a dependent variable and independent variables. These models can be fit with numerous approaches. The most common is *least squares*, where we minimize the mean square error between the predicted values $\\hat{y} = \\textbf{X}\\hat{\\beta}$ and actual values $y$: $\\left(y-\\textbf{X}\\beta\\right)^{2}$.\r\n\r\nWe can also define the problem in probabilistic terms as a generalized linear model (GLM) where the pdf is a Gaussian distribution, and then perform maximum likelihood estimation to estimate $\\hat{\\beta}$.\r\n\r\nImage Source: [Wikipedia](https://en.wikipedia.org/wiki/Linear_regression)",
"full_name": "Linear Regression",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Generalized Linear Models (GLMs)** are a class of models that generalize upon linear regression by allowing many more distributions to be modeled for the response variable via a link function. Below you can find a continuously updating list of GLMs.",
"name": "Generalized Linear Models",
"parent": null
},
"name": "Linear Regression",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/face-hallucination-using-cascaded-super
|
1805.10938
| null | null |
Face hallucination using cascaded super-resolution and identity priors
|
In this paper we address the problem of hallucinating high-resolution facial
images from unaligned low-resolution inputs at high magnification factors. We
approach the problem with convolutional neural networks (CNNs) and propose a
novel (deep) face hallucination model that incorporates identity priors into
the learning procedure. The model consists of two main parts: i) a cascaded
super-resolution network that upscales the low-resolution images, and ii) an
ensemble of face recognition models that act as identity priors for the
super-resolution network during training. Different from competing
super-resolution approaches that typically rely on a single model for upscaling
(even with large magnification factors), our network uses a cascade of multiple
SR models that progressively upscale the low-resolution images using steps of
$2\times$. This characteristic allows us to apply supervision signals (target
appearances) at different resolutions and incorporate identity constraints at
multiple-scales. Our model is able to upscale (very) low-resolution images
captured in unconstrained conditions and produce visually convincing results.
We rigorously evaluate the proposed model on a large datasets of facial images
and report superior performance compared to the state-of-the-art.
| null |
http://arxiv.org/abs/1805.10938v2
|
http://arxiv.org/pdf/1805.10938v2.pdf
| null |
[
"Klemen Grm",
"Simon Dobrišek",
"Walter J. Scheirer",
"Vitomir Štruc"
] |
[
"Face Hallucination",
"Face Recognition",
"Hallucination",
"Super-Resolution"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/inductive-framework-for-multi-aspect
|
1802.06371
| null | null |
Inductive Framework for Multi-Aspect Streaming Tensor Completion with Side Information
|
Low rank tensor completion is a well studied problem and has applications in
various fields. However, in many real world applications the data is dynamic,
i.e., new data arrives at different time intervals. As a result, the tensors
used to represent the data grow in size. Besides the tensors, in many real
world scenarios, side information is also available in the form of matrices
which also grow in size with time. The problem of predicting missing values in
the dynamically growing tensor is called dynamic tensor completion. Most of the
previous work in dynamic tensor completion make an assumption that the tensor
grows only in one mode. To the best of our knowledge, there is no previous work
which incorporates side information with dynamic tensor completion. We bridge
this gap in this paper by proposing a dynamic tensor completion framework
called Side Information infused Incremental Tensor Analysis (SIITA), which
incorporates side information and works for general incremental tensors. We
also show how non-negative constraints can be incorporated with SIITA, which is
essential for mining interpretable latent clusters. We carry out extensive
experiments on multiple real world datasets to demonstrate the effectiveness of
SIITA in various different settings.
| null |
http://arxiv.org/abs/1802.06371v3
|
http://arxiv.org/pdf/1802.06371v3.pdf
| null |
[
"Madhav Nimishakavi",
"Bamdev Mishra",
"Manish Gupta",
"Partha Talukdar"
] |
[
"Missing Values"
] | 2018-02-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/deep-anomaly-detection-using-geometric
|
1805.10917
| null | null |
Deep Anomaly Detection Using Geometric Transformations
|
We consider the problem of anomaly detection in images, and present a new
detection technique. Given a sample of images, all known to belong to a
"normal" class (e.g., dogs), we show how to train a deep neural model that can
detect out-of-distribution images (i.e., non-dog objects). The main idea behind
our scheme is to train a multi-class model to discriminate between dozens of
geometric transformations applied on all the given images. The auxiliary
expertise learned by the model generates feature detectors that effectively
identify, at test time, anomalous images based on the softmax activation
statistics of the model when applied on transformed images. We present
extensive experiments using the proposed detector, which indicate that our
algorithm improves state-of-the-art methods by a wide margin.
|
We consider the problem of anomaly detection in images, and present a new detection technique.
|
http://arxiv.org/abs/1805.10917v2
|
http://arxiv.org/pdf/1805.10917v2.pdf
|
NeurIPS 2018 12
|
[
"Izhak Golan",
"Ran El-Yaniv"
] |
[
"Anomaly Detection"
] | 2018-05-28T00:00:00 |
http://papers.nips.cc/paper/8183-deep-anomaly-detection-using-geometric-transformations
|
http://papers.nips.cc/paper/8183-deep-anomaly-detection-using-geometric-transformations.pdf
|
deep-anomaly-detection-using-geometric-1
| null |
[
{
"code_snippet_url": null,
        "description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}w_{k}}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/dirichlet-based-gaussian-processes-for-large
|
1805.10915
| null | null |
Dirichlet-based Gaussian Processes for Large-scale Calibrated Classification
|
In this paper, we study the problem of deriving fast and accurate
classification algorithms with uncertainty quantification. Gaussian process
classification provides a principled approach, but the corresponding
computational burden is hardly sustainable in large-scale problems and devising
efficient alternatives is a challenge. In this work, we investigate if and how
Gaussian process regression directly applied to the classification labels can
be used to tackle this question. While in this case training time is remarkably
faster, predictions need to be calibrated for classification and uncertainty
estimation. To this aim, we propose a novel approach based on interpreting the
labels as the output of a Dirichlet distribution. Extensive experimental
results show that the proposed approach provides essentially the same accuracy
and uncertainty quantification of Gaussian process classification while
requiring only a fraction of computational resources.
|
In this paper, we study the problem of deriving fast and accurate classification algorithms with uncertainty quantification.
|
http://arxiv.org/abs/1805.10915v1
|
http://arxiv.org/pdf/1805.10915v1.pdf
|
NeurIPS 2018 12
|
[
"Dimitrios Milios",
"Raffaello Camoriano",
"Pietro Michiardi",
"Lorenzo Rosasco",
"Maurizio Filippone"
] |
[
"Classification",
"Gaussian Processes",
"General Classification",
"Uncertainty Quantification"
] | 2018-05-28T00:00:00 |
http://papers.nips.cc/paper/7840-dirichlet-based-gaussian-processes-for-large-scale-calibrated-classification
|
http://papers.nips.cc/paper/7840-dirichlet-based-gaussian-processes-for-large-scale-calibrated-classification.pdf
|
dirichlet-based-gaussian-processes-for-large-1
| null |
[
{
"code_snippet_url": null,
"description": "**Gaussian Processes** are non-parametric models for approximating functions. They rely upon a measure of similarity between points (the kernel function) to predict the value for an unseen point from training data. The models are fully probabilistic so uncertainty bounds are baked in with the model.\r\n\r\nImage Source: Gaussian Processes for Machine Learning, C. E. Rasmussen & C. K. I. Williams",
"full_name": "Gaussian Process",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "Gaussian Process",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/hierarchical-clustering-with-deep-q-learning
|
1805.10900
| null | null |
Hierarchical clustering with deep Q-learning
|
The reconstruction and analysis of high energy particle physics data is
just as important as the analysis of the structure in real world networks.
In a previous study it was explored how hierarchical clustering algorithms can
be combined with kt cluster algorithms to provide a more generic clusterization
method. Building on that, this paper explores the possibilities to involve deep
learning in the process of cluster computation, by applying reinforcement
learning techniques. The result is a model that, by learning on a modest
dataset of 10,000 nodes during 70 epochs, can reach 83.77% precision in
predicting the appropriate clusters.
| null |
http://arxiv.org/abs/1805.10900v1
|
http://arxiv.org/pdf/1805.10900v1.pdf
| null |
[
"Richard Forster",
"Agnes Fulop"
] |
[
"Clustering",
"Q-Learning",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/machine-learning-for-prediction-of-extreme
|
1806.06121
| null | null |
Machine learning for prediction of extreme statistics in modulation instability
|
A central area of research in nonlinear science is the study of instabilities
that drive the emergence of extreme events. Unfortunately, experimental
techniques for measuring such phenomena often provide only partial
characterization. For example, real-time studies of instabilities in nonlinear
fibre optics frequently use only spectral data, precluding detailed predictions
about the associated temporal properties. Here, we show how Machine Learning
can overcome this limitation by predicting statistics for the maximum intensity
of temporal peaks in modulation instability based only on spectral
measurements. Specifically, we train a neural network based Machine Learning
model to correlate spectral and temporal properties of optical fibre modulation
instability using data from numerical simulations, and we then use this model
to predict the temporal probability distribution based on high-dynamic range
spectral data from experiments. These results open novel perspectives in all
systems exhibiting chaos and instability where direct time-domain observations
are difficult.
| null |
http://arxiv.org/abs/1806.06121v1
|
http://arxiv.org/pdf/1806.06121v1.pdf
| null |
[
"Mikko Närhi",
"Lauri Salmela",
"Juha Toivonen",
"Cyril Billet",
"John M. Dudley",
"Goëry Genty"
] |
[
"BIG-bench Machine Learning"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/hyper-hue-and-emap-on-hyperspectral-images
|
1801.09472
| null | null |
Hyper-Hue and EMAP on Hyperspectral Images for Supervised Layer Decomposition of Old Master Drawings
|
Old master drawings were mostly created step by step in several layers using
different materials. To art historians and restorers, examination of these
layers brings various insights into the artistic work process and helps to
answer questions about the object, its attribution and its authenticity.
However, these layers typically overlap and are oftentimes difficult to
differentiate with the unaided eye. For example, a common layer combination is
red chalk under ink.
In this work, we propose an image processing pipeline that operates on
hyperspectral images to separate such layers. Using this pipeline, we show that
hyperspectral images enable better layer separation than RGB images, and that
spectral focus stacking aids the layer separation. In particular, we propose to
use two descriptors in hyperspectral historical document analysis, namely
hyper-hue and extended multi-attribute profile (EMAP). Our comparative results
with other features underline the efficacy of the three proposed improvements.
| null |
http://arxiv.org/abs/1801.09472v2
|
http://arxiv.org/pdf/1801.09472v2.pdf
| null |
[
"AmirAbbas Davari",
"Nikolaos Sakaltras",
"Armin Haeberle",
"Sulaiman Vesal",
"Vincent Christlein",
"Andreas Maier",
"Christian Riess"
] |
[
"Attribute"
] | 2018-01-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/mapping-the-americanization-of-english-in
|
1707.00781
| null | null |
Mapping the Americanization of English in Space and Time
|
As global political preeminence gradually shifted from the United Kingdom to
the United States, so did the capacity to culturally influence the rest of the
world. In this work, we analyze how the world-wide varieties of written English
are evolving. We study both the spatial and temporal variations of vocabulary
and spelling of English using a large corpus of geolocated tweets and the
Google Books datasets corresponding to books published in the US and the UK.
The advantage of our approach is that we can address both standard written
language (Google Books) and the more colloquial forms of microblogging messages
(Twitter). We find that American English is the dominant form of English
outside the UK and that its influence is felt even within the UK borders.
Finally, we analyze how this trend has evolved over time and the impact that
some cultural events have had in shaping it.
| null |
http://arxiv.org/abs/1707.00781v2
|
http://arxiv.org/pdf/1707.00781v2.pdf
| null |
[
"Bruno Gonçalves",
"Lucía Loureiro-Porto",
"José J. Ramasco",
"David Sánchez"
] |
[] | 2017-07-03T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/glassy-nature-of-the-hard-phase-in-inference
|
1805.05857
| null | null |
Glassy nature of the hard phase in inference problems
|
An algorithmically hard phase was described in a range of inference problems:
even if the signal can be reconstructed with a small error from an information
theoretic point of view, known algorithms fail unless the noise-to-signal ratio
is sufficiently small. This hard phase is typically understood as a metastable
branch of the dynamical evolution of message passing algorithms. In this work
we study the metastable branch for a prototypical inference problem, the
low-rank matrix factorization, that presents a hard phase. We show that for
noise-to-signal ratios that are below the information theoretic threshold, the
posterior measure is composed of an exponential number of metastable glassy
states and we compute their entropy, called the complexity. We show that this
glassiness extends even slightly below the algorithmic threshold below which
the well-known approximate message passing (AMP) algorithm is able to closely
reconstruct the signal. Counter-intuitively, we find that the performance of
the AMP algorithm is not improved by taking into account the glassy nature of
the hard phase. This result provides further evidence that the hard phase in
inference problems is algorithmically impenetrable for some deep computational
reasons that remain to be uncovered.
| null |
http://arxiv.org/abs/1805.05857v4
|
http://arxiv.org/pdf/1805.05857v4.pdf
| null |
[
"Fabrizio Antenucci",
"Silvio Franz",
"Pierfrancesco Urbani",
"Lenka Zdeborová"
] |
[] | 2018-05-15T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/block-optimized-variable-bit-rate-neural
|
1805.10887
| null | null |
Block-optimized Variable Bit Rate Neural Image Compression
|
In this work, we propose an end-to-end block-based auto-encoder system for
image compression. We introduce novel contributions to neural-network based
image compression, mainly in achieving binarization simulation, variable bit
rates with multiple networks, entropy-friendly representations, inference-stage
code optimization and performance-improving normalization layers in the
auto-encoder. We evaluate and show the incremental performance increase of each
of our contributions.
| null |
http://arxiv.org/abs/1805.10887v1
|
http://arxiv.org/pdf/1805.10887v1.pdf
| null |
[
"Caglar Aytekin",
"Xingyang Ni",
"Francesco Cricri",
"Jani Lainema",
"Emre Aksu",
"Miska Hannuksela"
] |
[
"Binarization",
"Image Compression"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/importance-weighted-transfer-of-samples-in
|
1805.10886
| null | null |
Importance Weighted Transfer of Samples in Reinforcement Learning
|
We consider the transfer of experience samples (i.e., tuples < s, a, s', r >)
in reinforcement learning (RL), collected from a set of source tasks to improve
the learning process in a given target task. Most of the related approaches
focus on selecting the most relevant source samples for solving the target
task, but then all the transferred samples are used without considering anymore
the discrepancies between the task models. In this paper, we propose a
model-based technique that automatically estimates the relevance (importance
weight) of each source sample for solving the target task. In the proposed
approach, all the samples are transferred and used by a batch RL algorithm to
solve the target task, but their contribution to the learning process is
proportional to their importance weight. By extending the results for
importance weighting provided in supervised learning literature, we develop a
finite-sample analysis of the proposed batch RL algorithm. Furthermore, we
empirically compare the proposed algorithm to state-of-the-art approaches,
showing that it achieves better learning performance and is very robust to
negative transfer, even when some source tasks are significantly different from
the target task.
| null |
http://arxiv.org/abs/1805.10886v1
|
http://arxiv.org/pdf/1805.10886v1.pdf
|
ICML 2018 7
|
[
"Andrea Tirinzoni",
"Andrea Sessa",
"Matteo Pirotta",
"Marcello Restelli"
] |
[
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-05-28T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2008
|
http://proceedings.mlr.press/v80/tirinzoni18a/tirinzoni18a.pdf
|
importance-weighted-transfer-of-samples-in-1
| null |
[] |
https://paperswithcode.com/paper/image-distortion-detection-using
|
1805.10881
| null | null |
Image Distortion Detection using Convolutional Neural Network
|
Image distortion classification and detection is an important task in many
applications. For example when compressing images, if we know the exact
location of the distortion, then it is possible to re-compress images by
adjusting the local compression level dynamically. In this paper, we address
the problem of detecting the distortion region and classifying the distortion
type of a given image. We show that our model significantly outperforms the
state-of-the-art distortion classifier, and report accurate detection results
for the first time. We expect that such results prove the usefulness of our
approach in many potential applications such as image compression or distortion
restoration.
| null |
http://arxiv.org/abs/1805.10881v1
|
http://arxiv.org/pdf/1805.10881v1.pdf
| null |
[
"Namhyuk Ahn",
"Byungkon Kang",
"Kyung-Ah Sohn"
] |
[
"General Classification",
"Image Compression"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/investigating-label-noise-sensitivity-of
|
1805.10880
| null | null |
Investigating Label Noise Sensitivity of Convolutional Neural Networks for Fine Grained Audio Signal Labelling
|
We measure the effect of small amounts of systematic and random label noise
caused by slightly misaligned ground truth labels in a fine grained audio
signal labeling task. The task we choose to demonstrate these effects on is
also known as framewise polyphonic transcription or note quantized multi-f0
estimation, and transforms a monaural audio signal into a sequence of note
indicator labels. It will be shown that even slight misalignments have clearly
apparent effects, demonstrating a great sensitivity of convolutional neural
networks to label noise. The implications are clear: when using convolutional
neural networks for fine grained audio signal labeling tasks, great care has to
be taken to ensure that the annotations have precise timing, and are free from
systematic or random error as much as possible - even small misalignments will
have a noticeable impact.
|
We measure the effect of small amounts of systematic and random label noise caused by slightly misaligned ground truth labels in a fine grained audio signal labeling task.
|
http://arxiv.org/abs/1805.10880v1
|
http://arxiv.org/pdf/1805.10880v1.pdf
| null |
[
"Rainer Kelz",
"Gerhard Widmer"
] |
[
"Sensitivity"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/towards-a-glaucoma-risk-index-based-on
|
1805.10273
| null | null |
Towards a glaucoma risk index based on simulated hemodynamics from fundus images
|
Glaucoma is the leading cause of irreversible but preventable blindness in
the world. Its major treatable risk factor is the intra-ocular pressure,
although other biomarkers are being explored to improve the understanding of
the pathophysiology of the disease. It has been recently observed that glaucoma
induces changes in the ocular hemodynamics. However, its effects on the
functional behavior of the retinal arterioles have not been studied yet. In
this paper we propose a first approach for characterizing those changes using
computational hemodynamics. The retinal blood flow is simulated using a 0D
model for a steady, incompressible non-Newtonian fluid in rigid domains. The
simulation is performed on patient-specific arterial trees extracted from
fundus images. We also propose a novel feature representation technique to
comprise the outcomes of the simulation stage into a fixed length feature
vector that can be used for classification studies. Our experiments on a new
database of fundus images show that our approach is able to capture
representative changes in the hemodynamics of glaucomatous patients. Code and
data are publicly available in https://ignaciorlando.github.io.
| null |
http://arxiv.org/abs/1805.10273v4
|
http://arxiv.org/pdf/1805.10273v4.pdf
| null |
[
"José Ignacio Orlando",
"João Barbosa Breda",
"Karel van Keer",
"Matthew B. Blaschko",
"Pablo J. Blanco",
"Carlos A. Bulant"
] |
[] | 2018-05-25T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/recurrent-relational-networks
|
1711.08028
| null | null |
Recurrent Relational Networks
|
This paper is concerned with learning to solve tasks that require a chain of
interdependent steps of relational inference, like answering complex questions
about the relationships between objects, or solving puzzles where the smaller
elements of a solution mutually constrain each other. We introduce the
recurrent relational network, a general purpose module that operates on a graph
representation of objects. As a generalization of Santoro et al. [2017]'s
relational network, it can augment any neural network model with the capacity
to do many-step relational reasoning. We achieve state of the art results on
the bAbI textual question-answering dataset with the recurrent relational
network, consistently solving 20/20 tasks. As bAbI is not particularly
challenging from a relational reasoning point of view, we introduce
Pretty-CLEVR, a new diagnostic dataset for relational reasoning. In the
Pretty-CLEVR set-up, we can vary the question to control for the number of
relational reasoning steps that are required to obtain the answer. Using
Pretty-CLEVR, we probe the limitations of multi-layer perceptrons, relational
and recurrent relational networks. Finally, we show how recurrent relational
networks can learn to solve Sudoku puzzles from supervised training data, a
challenging task requiring upwards of 64 steps of relational reasoning. We
achieve state-of-the-art results amongst comparable methods by solving 96.6% of
the hardest Sudoku puzzles.
|
We achieve state of the art results on the bAbI textual question-answering dataset with the recurrent relational network, consistently solving 20/20 tasks.
|
http://arxiv.org/abs/1711.08028v4
|
http://arxiv.org/pdf/1711.08028v4.pdf
|
NeurIPS 2018 12
|
[
"Rasmus Berg Palm",
"Ulrich Paquet",
"Ole Winther"
] |
[
"Diagnostic",
"Question Answering",
"Relational Reasoning"
] | 2017-11-21T00:00:00 |
http://papers.nips.cc/paper/7597-recurrent-relational-networks
|
http://papers.nips.cc/paper/7597-recurrent-relational-networks.pdf
|
recurrent-relational-networks-1
| null |
[] |
https://paperswithcode.com/paper/deepproblog-neural-probabilistic-logic
|
1805.10872
| null | null |
DeepProbLog: Neural Probabilistic Logic Programming
|
We introduce DeepProbLog, a probabilistic logic programming language that
incorporates deep learning by means of neural predicates. We show how existing
inference and learning techniques can be adapted for the new language. Our
experiments demonstrate that DeepProbLog supports 1) both symbolic and
subsymbolic representations and inference, 2) program induction, 3)
probabilistic (logic) programming, and 4) (deep) learning from examples. To the best of our
knowledge, this work is the first to propose a framework where general-purpose
neural networks and expressive probabilistic-logical modeling and reasoning are
integrated in a way that exploits the full expressiveness and strengths of both
worlds and can be trained end-to-end based on examples.
|
We introduce DeepProbLog, a probabilistic logic programming language that incorporates deep learning by means of neural predicates.
|
http://arxiv.org/abs/1805.10872v2
|
http://arxiv.org/pdf/1805.10872v2.pdf
|
NeurIPS 2018 12
|
[
"Robin Manhaeve",
"Sebastijan Dumančić",
"Angelika Kimmig",
"Thomas Demeester",
"Luc De Raedt"
] |
[
"Deep Learning",
"Program induction"
] | 2018-05-28T00:00:00 |
http://papers.nips.cc/paper/7632-deepproblog-neural-probabilistic-logic-programming
|
http://papers.nips.cc/paper/7632-deepproblog-neural-probabilistic-logic-programming.pdf
|
deepproblog-neural-probabilistic-logic-1
| null |
[] |
https://paperswithcode.com/paper/cerfgan-a-compact-effective-robust-and-fast
|
1805.10871
| null | null |
CerfGAN: A Compact, Effective, Robust, and Fast Model for Unsupervised Multi-Domain Image-to-Image Translation
|
In this paper, we aim at solving the multi-domain image-to-image translation
problem with a unified model in an unsupervised manner. The most successful
work in this area refers to StarGAN, which works well in tasks like face
attribute modulation. However, StarGAN is unable to match multiple translation
mappings when encountering general translations with very diverse domain
shifts. On the other hand, StarGAN adopts an Encoder-Decoder-Discriminator
(EDD) architecture, where the model is time-consuming and unstable to train. To
this end, we propose a Compact, effective, robust, and fast GAN model, termed
CerfGAN, to solve the above problem. In principle, CerfGAN contains a novel
component, i.e., a multi-class discriminator (MCD), which gives the model an
extremely powerful ability to match multiple translation mappings. To stabilize
the training process, MCD also plays a role of the encoder in CerfGAN, which
saves a lot of computation and memory costs. We perform extensive experiments
to verify the effectiveness of the proposed method. Quantitatively, CerfGAN is
demonstrated to handle a series of image-to-image translation tasks including
style transfer, season transfer, face hallucination, etc, where the input
images are sampled from diverse domains. The comparisons to several recently
proposed approaches demonstrate the superiority and novelty of the proposed
method.
| null |
http://arxiv.org/abs/1805.10871v2
|
http://arxiv.org/pdf/1805.10871v2.pdf
| null |
[
"Xiao Liu",
"Shengchuan Zhang",
"Hong Liu",
"Xin Liu",
"Cheng Deng",
"Rongrong Ji"
] |
[
"Attribute",
"Decoder",
"Face Hallucination",
"Hallucination",
"Image-to-Image Translation",
"Style Transfer",
"Translation"
] | 2018-05-28T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
    "description": "A **GAN**, or **Generative Adversarial Network**, is a generative model that jointly trains two networks: a generator that produces samples and a discriminator that learns to distinguish generated samples from real data. The two networks are optimized in a minimax game, which pushes the generator's outputs towards the data distribution.",
    "full_name": "GAN",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
        "name": "GAN",
"source_title": "Generative Adversarial Networks",
"source_url": "https://arxiv.org/abs/1406.2661v1"
}
] |
https://paperswithcode.com/paper/learning-to-play-general-video-games-via-an
|
1803.05262
| null | null |
Learning to Play General Video-Games via an Object Embedding Network
|
Deep reinforcement learning (DRL) has proven to be an effective tool for
creating general video-game AI. However most current DRL video-game agents
learn end-to-end from the video-output of the game, which is superfluous for
many applications and creates a number of additional problems. More
importantly, directly working on pixel-based raw video data is substantially
distinct from what a human player does. In this paper, we present a novel method
which enables DRL agents to learn directly from object information. This is
obtained via use of an object embedding network (OEN) that compresses a set of
object feature vectors of different lengths into a single fixed-length unified
feature vector representing the current game-state and fulfills the DRL
simultaneously. We evaluate our OEN-based DRL agent by comparing to several
state-of-the-art approaches on a selection of games from the GVG-AI
Competition. Experimental results suggest that our object-based DRL agent
yields performance comparable to that of those approaches used in our
comparative study.
|
Deep reinforcement learning (DRL) has proven to be an effective tool for creating general video-game AI.
|
http://arxiv.org/abs/1803.05262v2
|
http://arxiv.org/pdf/1803.05262v2.pdf
| null |
[
"William Woof",
"Ke Chen"
] |
[
"Deep Reinforcement Learning",
"Object",
"Reinforcement Learning"
] | 2018-03-14T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/granger-causal-attentive-mixtures-of-experts
|
1802.02195
| null | null |
Granger-causal Attentive Mixtures of Experts: Learning Important Features with Neural Networks
|
Knowledge of the importance of input features towards decisions made by
machine-learning models is essential to increase our understanding of both the
models and the underlying data. Here, we present a new approach to estimating
feature importance with neural networks based on the idea of distributing the
features of interest among experts in an attentive mixture of experts (AME).
AMEs use attentive gating networks trained with a Granger-causal objective to
learn to jointly produce accurate predictions as well as estimates of feature
importance in a single model. Our experiments show (i) that the feature
importance estimates provided by AMEs compare favourably to those provided by
state-of-the-art methods, (ii) that AMEs are significantly faster at estimating
feature importance than existing methods, and (iii) that the associations
discovered by AMEs are consistent with those reported by domain experts.
|
Knowledge of the importance of input features towards decisions made by machine-learning models is essential to increase our understanding of both the models and the underlying data.
|
http://arxiv.org/abs/1802.02195v6
|
http://arxiv.org/pdf/1802.02195v6.pdf
| null |
[
"Patrick Schwab",
"Djordje Miladinovic",
"Walter Karlen"
] |
[
"Feature Importance",
"Mixture-of-Experts"
] | 2018-02-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/versatile-auxiliary-regressor-with-generative
|
1805.10864
| null | null |
Versatile Auxiliary Regressor with Generative Adversarial network (VAR+GAN)
|
Being able to generate constrained samples is one of the most appealing
applications of the deep generators. Conditional generators are one of the
successful implementations of such models wherein the created samples are
constrained to a specific class. In this work, the application of these
networks is extended to regression problems wherein the conditional generator
is restrained to any continuous aspect of the data. A new loss function is
presented for the regression network and also implementations for generating
faces with any particular set of landmarks is provided.
| null |
http://arxiv.org/abs/1805.10864v1
|
http://arxiv.org/pdf/1805.10864v1.pdf
| null |
[
"Shabab Bazrafkan",
"Peter Corcoran"
] |
[
"Face Generation",
"Generative Adversarial Network",
"regression"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/distributed-weight-consolidation-a-brain
|
1805.10863
| null | null |
Distributed Weight Consolidation: A Brain Segmentation Case Study
|
Collecting the large datasets needed to train deep neural networks can be
very difficult, particularly for the many applications for which sharing and
pooling data is complicated by practical, ethical, or legal concerns. However,
it may be the case that derivative datasets or predictive models developed
within individual sites can be shared and combined with fewer restrictions.
Training on distributed data and combining the resulting networks is often
viewed as continual learning, but these methods require networks to be trained
sequentially. In this paper, we introduce distributed weight consolidation
(DWC), a continual learning method to consolidate the weights of separate
neural networks, each trained on an independent dataset. We evaluated DWC with
a brain segmentation case study, where we consolidated dilated convolutional
neural networks trained on independent structural magnetic resonance imaging
(sMRI) datasets from different sites. We found that DWC led to increased
performance on test sets from the different sites, while maintaining
generalization performance for a very large and completely independent
multi-site dataset, compared to an ensemble baseline.
| null |
http://arxiv.org/abs/1805.10863v9
|
http://arxiv.org/pdf/1805.10863v9.pdf
|
NeurIPS 2018 12
|
[
"Patrick McClure",
"Charles Y. Zheng",
"Jakub R. Kaczmarzyk",
"John A. Lee",
"Satrajit S. Ghosh",
"Dylan Nielson",
"Peter Bandettini",
"Francisco Pereira"
] |
[
"Brain Segmentation",
"Continual Learning"
] | 2018-05-28T00:00:00 |
http://papers.nips.cc/paper/7664-distributed-weight-consolidation-a-brain-segmentation-case-study
|
http://papers.nips.cc/paper/7664-distributed-weight-consolidation-a-brain-segmentation-case-study.pdf
|
distributed-weight-consolidation-a-brain-1
| null |
[] |
https://paperswithcode.com/paper/a-non-invertible-cancelable-fingerprint
|
1805.10853
| null | null |
A non-invertible cancelable fingerprint template generation based on ridge feature transformation
|
In a biometric verification system, leakage of biometric data leads to
permanent identity loss since original biometric data is inherently linked to a
user. Further, various types of attacks on a biometric system may reveal the
original template and utility in other applications. To address these security
and privacy concerns cancelable biometric has been introduced. Cancelable
biometric constructs a protected template from the original biometric template
using transformation functions and performs the comparison between templates in
the transformed domain. Recent approaches towards cancelable fingerprint
generation either rely on aligning minutiae points with respect to singular
points (core/delta) or utilize the absolute coordinate positions of minutiae
points. In this paper, we propose a novel non-invertible ridge feature
transformation method to protect the original fingerprint template information.
The proposed method partitions the fingerprint region into a number of sectors
with reference to each minutia point employing a ridge-based co-ordinate
system. The nearest neighbor minutiae in each sector are identified, and
ridge-based features are computed. Further, a cancelable template is generated
by applying the Cantor pairing function followed by random projection. We have
evaluated our method with FVC2002, FVC2004 and FVC2006 databases. It is evident
from the experimental results that the proposed method outperforms existing
methods in the literature. Moreover, the security analysis demonstrates that
the proposed method fulfills the necessary requirements of non-invertibility,
revocability, and diversity with a minor performance degradation caused due to
cancelable transformation.
| null |
http://arxiv.org/abs/1805.10853v1
|
http://arxiv.org/pdf/1805.10853v1.pdf
| null |
[
"Rudresh Dwivedi",
"Somnath Dey"
] |
[] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-pragmatic-ai-approach-to-creating-artistic
|
1805.10852
| null | null |
A Pragmatic AI Approach to Creating Artistic Visual Variations by Neural Style Transfer
|
On a constant quest for inspiration, designers can become more effective with
tools that facilitate their creative process and let them overcome design
fixation. This paper explores the practicality of applying neural style
transfer as an emerging design tool for generating creative digital content. To
this aim, the present work explores a well-documented neural style transfer
algorithm (Johnson 2016) in four experiments on four relevant visual
parameters: number of iterations, learning rate, total variation, content vs.
style weight. The results allow a pragmatic recommendation of parameter
configuration (number of iterations: 200 to 300, learning rate: 2e-1 to 4e-1,
total variation: 1e-4 to 1e-8, content weights vs. style weights: 50:100 to
200:100) that saves extensive experimentation time and lowers the technical
entry barrier. With this rule-of-thumb insight, visual designers can
effectively apply deep learning to create artistic visual variations of digital
content. This could enable designers to leverage AI for creating design works
as state-of-the-art.
| null |
http://arxiv.org/abs/1805.10852v1
|
http://arxiv.org/pdf/1805.10852v1.pdf
| null |
[
"Chaehan So"
] |
[
"Style Transfer"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/inducing-grammars-with-and-for-neural-machine
|
1805.10850
| null |
Bkl1uWb0Z
|
Inducing Grammars with and for Neural Machine Translation
|
Machine translation systems require semantic knowledge and grammatical
understanding. Neural machine translation (NMT) systems often assume this
information is captured by an attention mechanism and a decoder that ensures
fluency. Recent work has shown that incorporating explicit syntax alleviates
the burden of modeling both types of knowledge. However, requiring parses is
expensive and does not explore the question of what syntax a model needs during
translation. To address both of these issues we introduce a model that
simultaneously translates while inducing dependency trees. In this way, we
leverage the benefits of structure while investigating what syntax NMT must
induce to maximize performance. We show that our dependency trees are 1.
language pair dependent and 2. improve translation quality.
| null |
http://arxiv.org/abs/1805.10850v1
|
http://arxiv.org/pdf/1805.10850v1.pdf
|
ACL 2018 7
|
[
"Ke Tran",
"Yonatan Bisk"
] |
[
"Decoder",
"Machine Translation",
"NMT",
"Translation"
] | 2018-05-28T00:00:00 |
https://aclanthology.org/W18-2704
|
https://aclanthology.org/W18-2704.pdf
|
inducing-grammars-with-and-for-neural-machine-2
| null |
[] |
https://paperswithcode.com/paper/a-stochastic-decoder-for-neural-machine
|
1805.10844
| null | null |
A Stochastic Decoder for Neural Machine Translation
|
The process of translation is ambiguous, in that there are typically many
valid translations for a given sentence. This gives rise to significant
variation in parallel corpora, however, most current models of machine
translation do not account for this variation, instead treating the problem
as a deterministic process. To this end, we present a deep generative model
of machine translation which incorporates a chain of latent variables, in
order to account for local lexical and syntactic variation in parallel
corpora. We provide an in-depth analysis of the pitfalls encountered in
variational inference for training deep generative models. Experiments on
several different language pairs demonstrate that the model consistently
improves over strong baselines.
|
The process of translation is ambiguous, in that there are typically many valid translations for a given sentence.
|
http://arxiv.org/abs/1805.10844v1
|
http://arxiv.org/pdf/1805.10844v1.pdf
|
ACL 2018 7
|
[
"Philip Schulz",
"Wilker Aziz",
"Trevor Cohn"
] |
[
"Decoder",
"Machine Translation",
"Sentence",
"Translation",
"valid",
"Variational Inference"
] | 2018-05-28T00:00:00 |
https://aclanthology.org/P18-1115
|
https://aclanthology.org/P18-1115.pdf
|
a-stochastic-decoder-for-neural-machine-1
| null |
[] |
https://paperswithcode.com/paper/object-recognition-from-very-few-training
|
1709.05910
| null | null |
Object Recognition from very few Training Examples for Enhancing Bicycle Maps
|
In recent years, data-driven methods have shown great success for extracting
information about the infrastructure in urban areas. These algorithms are
usually trained on large datasets consisting of thousands or millions of
labeled training examples. While large datasets have been published regarding
cars, very little labeled data is available for cyclists, although the
appearance, point of view, and positioning of the relevant objects differ. Unfortunately,
labeling data is costly and requires a huge amount of work. In this paper, we
thus address the problem of learning with very few labels. The aim is to
recognize particular traffic signs in crowdsourced data to collect information
which is of interest to cyclists. We propose a system for object recognition
that is trained with only 15 examples per class on average. To achieve this, we
combine the advantages of convolutional neural networks and random forests to
learn a patch-wise classifier. In the next step, we map the random forest to a
neural network and transform the classifier to a fully convolutional network.
Thereby, the processing of full images is significantly accelerated and
bounding boxes can be predicted. Finally, we integrate data of the Global
Positioning System (GPS) to localize the predictions on the map. In comparison
to Faster R-CNN and other networks for object recognition or algorithms for
transfer learning, we considerably reduce the required amount of labeled data.
We demonstrate good performance on the recognition of traffic signs for
cyclists as well as their localization in maps.
| null |
http://arxiv.org/abs/1709.05910v4
|
http://arxiv.org/pdf/1709.05910v4.pdf
| null |
[
"Christoph Reinders",
"Hanno Ackermann",
"Michael Ying Yang",
"Bodo Rosenhahn"
] |
[
"Object Recognition",
"Transfer Learning"
] | 2017-09-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "A **Region Proposal Network**, or **RPN**, is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals. RPN and algorithms like [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) can be merged into a single network by sharing their convolutional features - using the recently popular terminology of neural networks with attention mechanisms, the RPN component tells the unified network where to look.\r\n\r\nRPNs are designed to efficiently predict region proposals with a wide range of scales and aspect ratios. RPNs use anchor boxes that serve as references at multiple scales and aspect ratios. The scheme can be thought of as a pyramid of regression references, which avoids enumerating images or filters of multiple scales or aspect ratios.",
"full_name": "Region Proposal Network",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Region Proposal",
"parent": null
},
"name": "RPN",
"source_title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks",
"source_url": "http://arxiv.org/abs/1506.01497v3"
},
{
"code_snippet_url": null,
        "description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}w_{k}}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/5e9ebe8dadc0ea2841a46cfcd82a93b4ce0d4519/torchvision/ops/roi_pool.py#L10",
"description": "**Region of Interest Pooling**, or **RoIPool**, is an operation for extracting a small feature map (e.g., $7×7$) from each RoI in detection and segmentation based tasks. Features are extracted from each candidate box, and thereafter in models like [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn), are then classified and bounding box regression performed.\r\n\r\nThe actual scaling to, e.g., $7×7$, occurs by dividing the region proposal into equally sized sections, finding the largest value in each section, and then copying these max values to the output buffer. In essence, **RoIPool** is [max pooling](https://paperswithcode.com/method/max-pooling) on a discrete grid based on a box.\r\n\r\nImage Source: [Joyce Xu](https://towardsdatascience.com/deep-learning-for-object-detection-a-comprehensive-review-73930816d8d9)",
"full_name": "RoIPool",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**RoI Feature Extractors** are used to extract regions of interest features for tasks such as object detection. Below you can find a continuously updating list of RoI Feature Extractors.",
"name": "RoI Feature Extractors",
"parent": null
},
"name": "RoIPool",
"source_title": "Rich feature hierarchies for accurate object detection and semantic segmentation",
"source_url": "http://arxiv.org/abs/1311.2524v5"
},
{
"code_snippet_url": "https://github.com/chenyuntc/simple-faster-rcnn-pytorch/blob/367db367834efd8a2bc58ee0023b2b628a0e474d/model/faster_rcnn.py#L22",
"description": "**Faster R-CNN** is an object detection model that improves on [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) by utilising a region proposal network ([RPN](https://paperswithcode.com/method/rpn)) with the CNN model. The RPN shares full-image convolutional features with the detection network, enabling nearly cost-free region proposals. It is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by [Fast R-CNN](https://paperswithcode.com/method/fast-r-cnn) for detection. RPN and Fast [R-CNN](https://paperswithcode.com/method/r-cnn) are merged into a single network by sharing their convolutional features: the RPN component tells the unified network where to look.\r\n\r\nAs a whole, Faster R-CNN consists of two modules. The first module is a deep fully convolutional network that proposes regions, and the second module is the Fast R-CNN detector that uses the proposed regions.",
"full_name": "Faster R-CNN",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Object Detection Models** are architectures used to perform the task of object detection. Below you can find a continuously updating list of object detection models.",
"name": "Object Detection Models",
"parent": null
},
"name": "Faster R-CNN",
"source_title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks",
"source_url": "http://arxiv.org/abs/1506.01497v3"
}
] |
https://paperswithcode.com/paper/approximating-real-time-recurrent-learning
|
1805.10842
| null | null |
Approximating Real-Time Recurrent Learning with Random Kronecker Factors
|
Despite all the impressive advances of recurrent neural networks, sequential
data is still in need of better modelling. Truncated backpropagation through
time (TBPTT), the learning algorithm most widely used in practice, suffers from
the truncation bias, which drastically limits its ability to learn long-term
dependencies. The Real-Time Recurrent Learning algorithm (RTRL) addresses this
issue, but its high computational requirements make it infeasible in practice.
The Unbiased Online Recurrent Optimization algorithm (UORO) approximates RTRL
with a smaller runtime and memory cost, but with the disadvantage of obtaining
noisy gradients that also limit its practical applicability. In this paper we
propose the Kronecker Factored RTRL (KF-RTRL) algorithm that uses a Kronecker
product decomposition to approximate the gradients for a large class of RNNs.
We show that KF-RTRL is an unbiased and memory efficient online learning
algorithm. Our theoretical analysis shows that, under reasonable assumptions,
the noise introduced by our algorithm is not only stable over time but also
asymptotically much smaller than the one of the UORO algorithm. We also confirm
these theoretical results experimentally. Further, we show empirically that the
KF-RTRL algorithm captures long-term dependencies and almost matches the
performance of TBPTT on real world tasks by training Recurrent Highway Networks
on a synthetic string memorization task and on the Penn TreeBank task,
respectively. These results indicate that RTRL based approaches might be a
promising future alternative to TBPTT.
| null |
http://arxiv.org/abs/1805.10842v2
|
http://arxiv.org/pdf/1805.10842v2.pdf
|
NeurIPS 2018 12
|
[
"Asier Mujika",
"Florian Meier",
"Angelika Steger"
] |
[
"Memorization"
] | 2018-05-28T00:00:00 |
http://papers.nips.cc/paper/7894-approximating-real-time-recurrent-learning-with-random-kronecker-factors
|
http://papers.nips.cc/paper/7894-approximating-real-time-recurrent-learning-with-random-kronecker-factors.pdf
|
approximating-real-time-recurrent-learning-1
| null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Unbiased Online Recurrent Optimization",
"introduced_year": 2000,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "UORO",
"source_title": "Unbiased Online Recurrent Optimization",
"source_url": "http://arxiv.org/abs/1702.05043v3"
}
] |
https://paperswithcode.com/paper/bayesian-learning-with-wasserstein
|
1805.10833
| null | null |
Bayesian Learning with Wasserstein Barycenters
|
We introduce and study a novel model-selection strategy for Bayesian learning, based on optimal transport, along with its associated predictive posterior law: the Wasserstein population barycenter of the posterior law over models. We first show how this estimator, termed Bayesian Wasserstein barycenter (BWB), arises naturally in a general, parameter-free Bayesian model-selection framework, when the considered Bayesian risk is the Wasserstein distance. Examples are given, illustrating how the BWB extends some classic parametric and non-parametric selection strategies. Furthermore, we also provide explicit conditions granting the existence and statistical consistency of the BWB, and discuss some of its general and specific properties, providing insights into its advantages compared to usual choices, such as the model average estimator. Finally, we illustrate how this estimator can be computed using the stochastic gradient descent (SGD) algorithm in Wasserstein space introduced in a companion paper arXiv:2201.04232v2 [math.OC], and provide a numerical example for experimental validation of the proposed method.
| null |
https://arxiv.org/abs/1805.10833v5
|
https://arxiv.org/pdf/1805.10833v5.pdf
| null |
[
"Julio Backhoff-Veraguas",
"Joaquin Fontbona",
"Gonzalo Rios",
"Felipe Tobar"
] |
[
"Model Selection"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/sigsoftmax-reanalysis-of-the-softmax
|
1805.10829
| null | null |
Sigsoftmax: Reanalysis of the Softmax Bottleneck
|
Softmax is an output activation function for modeling categorical probability
distributions in many applications of deep learning. However, a recent study
revealed that softmax can be a bottleneck of representational capacity of
neural networks in language modeling (the softmax bottleneck). In this paper,
we propose an output activation function for breaking the softmax bottleneck
without additional parameters. We re-analyze the softmax bottleneck from the
perspective of the output set of log-softmax and identify the cause of the
softmax bottleneck. On the basis of this analysis, we propose sigsoftmax, which
is composed of a multiplication of an exponential function and sigmoid
function. Sigsoftmax can break the softmax bottleneck. The experiments on
language modeling demonstrate that sigsoftmax and mixture of sigsoftmax
outperform softmax and mixture of softmax, respectively.
| null |
http://arxiv.org/abs/1805.10829v1
|
http://arxiv.org/pdf/1805.10829v1.pdf
|
NeurIPS 2018 12
|
[
"Sekitoshi Kanai",
"Yasuhiro Fujiwara",
"Yuki Yamanaka",
"Shuichi Adachi"
] |
[
"Language Modeling",
"Language Modelling"
] | 2018-05-28T00:00:00 |
http://papers.nips.cc/paper/7312-sigsoftmax-reanalysis-of-the-softmax-bottleneck
|
http://papers.nips.cc/paper/7312-sigsoftmax-reanalysis-of-the-softmax-bottleneck.pdf
|
sigsoftmax-reanalysis-of-the-softmax-1
| null |
[
{
"code_snippet_url": null,
        "description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}w_{k}}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/ug18-at-semeval-2018-task-1-generating
|
1805.10824
| null | null |
UG18 at SemEval-2018 Task 1: Generating Additional Training Data for Predicting Emotion Intensity in Spanish
|
The present study describes our submission to SemEval 2018 Task 1: Affect in
Tweets. Our Spanish-only approach aimed to demonstrate that it is beneficial to
automatically generate additional training data by (i) translating training
data from other languages and (ii) applying a semi-supervised learning method.
We find strong support for both approaches, with those models outperforming our
regular models in all subtasks. However, creating a stepwise ensemble of
different models as opposed to simply averaging did not result in an increase
in performance. We placed second (EI-Reg), second (EI-Oc), fourth (V-Reg) and
fifth (V-Oc) in the four Spanish subtasks we participated in.
| null |
http://arxiv.org/abs/1805.10824v1
|
http://arxiv.org/pdf/1805.10824v1.pdf
|
SEMEVAL 2018 6
|
[
"Marloes Kuijper",
"Mike van Lenthe",
"Rik van Noord"
] |
[] | 2018-05-28T00:00:00 |
https://aclanthology.org/S18-1041
|
https://aclanthology.org/S18-1041.pdf
|
ug18-at-semeval-2018-task-1-generating-1
| null |
[] |
https://paperswithcode.com/paper/quadrature-based-features-for-kernel
|
1802.03832
| null |
H1U_af-0-
|
Quadrature-based features for kernel approximation
|
We consider the problem of improving kernel approximation via randomized
feature maps. These maps arise as Monte Carlo approximation to integral
representations of kernel functions and scale up kernel methods for larger
datasets. Based on an efficient numerical integration technique, we propose a
unifying approach that reinterprets the previous random features methods and
extends to better estimates of the kernel approximation. We derive the
convergence behaviour and conduct an extensive empirical study that supports
our hypothesis.
|
We consider the problem of improving kernel approximation via randomized feature maps.
|
http://arxiv.org/abs/1802.03832v4
|
http://arxiv.org/pdf/1802.03832v4.pdf
|
ICLR 2018 1
|
[
"Marina Munkhoeva",
"Yermek Kapushev",
"Evgeny Burnaev",
"Ivan Oseledets"
] |
[
"Numerical Integration"
] | 2018-02-11T00:00:00 |
http://papers.nips.cc/paper/8128-quadrature-based-features-for-kernel-approximation
|
http://papers.nips.cc/paper/8128-quadrature-based-features-for-kernel-approximation.pdf
|
quadrature-based-features-for-kernel-1
| null |
[] |
https://paperswithcode.com/paper/local-rule-based-explanations-of-black-box
|
1805.10820
| null | null |
Local Rule-Based Explanations of Black Box Decision Systems
|
The recent years have witnessed the rise of accurate but obscure decision
systems which hide the logic of their internal decision processes to the users.
The lack of explanations for the decisions of black box systems is a key
ethical issue, and a limitation to the adoption of machine learning components
in socially sensitive and safety-critical contexts.
In this paper we focus on the problem of black box outcome explanation, i.e.,
explaining the reasons of the decision taken on a specific instance. We propose
LORE, an agnostic method able to provide interpretable and faithful
explanations. LORE first leans a local interpretable predictor on a synthetic
neighborhood generated by a genetic algorithm. Then it derives from the logic
of the local interpretable predictor a meaningful explanation consisting of: a
decision rule, which explains the reasons of the decision; and a set of
counterfactual rules, suggesting the changes in the instance's features that
lead to a different outcome. Wide experiments show that LORE outperforms
existing methods and baselines both in the quality of explanations and in the
accuracy in mimicking the black box.
|
Then it derives from the logic of the local interpretable predictor a meaningful explanation consisting of: a decision rule, which explains the reasons of the decision; and a set of counterfactual rules, suggesting the changes in the instance's features that lead to a different outcome.
|
http://arxiv.org/abs/1805.10820v1
|
http://arxiv.org/pdf/1805.10820v1.pdf
| null |
[
"Riccardo Guidotti",
"Anna Monreale",
"Salvatore Ruggieri",
"Dino Pedreschi",
"Franco Turini",
"Fosca Giannotti"
] |
[
"counterfactual"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/tempogan-a-temporally-coherent-volumetric-gan
|
1801.09710
| null | null |
tempoGAN: A Temporally Coherent, Volumetric GAN for Super-resolution Fluid Flow
|
We propose a temporally coherent generative model addressing the
super-resolution problem for fluid flows. Our work represents a first approach
to synthesize four-dimensional physics fields with neural networks. Based on a
conditional generative adversarial network that is designed for the inference
of three-dimensional volumetric data, our model generates consistent and
detailed results by using a novel temporal discriminator, in addition to the
commonly used spatial one. Our experiments show that the generator is able to
infer more realistic high-resolution details by using additional physical
quantities, such as low-resolution velocities or vorticities. Besides
improvements in the training process and in the generated outputs, these inputs
offer means for artistic control as well. We additionally employ a
physics-aware data augmentation step, which is crucial to avoid overfitting and
to reduce memory requirements. In this way, our network learns to generate
advected quantities with highly detailed, realistic, and temporally coherent
features. Our method works instantaneously, using only a single time-step of
low-resolution fluid data. We demonstrate the abilities of our method using a
variety of complex inputs and applications in two and three dimensions.
|
We propose a temporally coherent generative model addressing the super-resolution problem for fluid flows.
|
http://arxiv.org/abs/1801.09710v2
|
http://arxiv.org/pdf/1801.09710v2.pdf
| null |
[
"You Xie",
"Erik Franz",
"Mengyu Chu",
"Nils Thuerey"
] |
[
"Data Augmentation",
"Generative Adversarial Network",
"Super-Resolution"
] | 2018-01-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/linear-tsne-optimization-for-the-web
|
1805.10817
| null | null |
GPGPU Linear Complexity t-SNE Optimization
|
The t-distributed Stochastic Neighbor Embedding (tSNE) algorithm has become in recent years one of the most used and insightful techniques for the exploratory data analysis of high-dimensional data. tSNE reveals clusters of high-dimensional data points at different scales while it requires only minimal tuning of its parameters. Despite these advantages, the computational complexity of the algorithm limits its application to relatively small datasets. To address this problem, several evolutions of tSNE have been developed in recent years, mainly focusing on the scalability of the similarity computations between data points. However, these contributions are insufficient to achieve interactive rates when visualizing the evolution of the tSNE embedding for large datasets. In this work, we present a novel approach to the minimization of the tSNE objective function that heavily relies on modern graphics hardware and has linear computational complexity. Our technique not only beats the state of the art, but can even be executed on the client side in a browser. We propose to approximate the repulsion forces between data points using adaptive-resolution textures that are drawn at every iteration with WebGL. This approximation allows us to reformulate the tSNE minimization problem as a series of tensor operations that are computed with TensorFlow.js, a JavaScript library for scalable tensor computations.
|
The t-distributed Stochastic Neighbor Embedding (tSNE) algorithm has become in recent years one of the most used and insightful techniques for the exploratory data analysis of high-dimensional data.
|
https://arxiv.org/abs/1805.10817v2
|
https://arxiv.org/pdf/1805.10817v2.pdf
| null |
[
"Nicola Pezzotti",
"Julian Thijssen",
"Alexander Mordvintsev",
"Thomas Hollt",
"Baldur van Lew",
"Boudewijn P. F. Lelieveldt",
"Elmar Eisemann",
"Anna Vilanova"
] |
[] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-renewal-model-of-intrusion
|
1709.08163
| null | null |
A Renewal Model of Intrusion
|
We present a probabilistic model of an intrusion in a renewal process. Given
a process and a sequence of events, an intrusion is a subsequence of events
that is not produced by the process. Applications of the model are, for
example, online payment fraud with the fraudster taking over a user's account
and performing payments on the user's behalf, or unexpected equipment failures
due to unintended use.
We adopt a Bayesian approach to infer the probability of an intrusion in a
sequence of events, a MAP subsequence of events constituting the intrusion, and
the marginal probability of each event in a sequence to belong to the
intrusion. We evaluate the model for intrusion detection on synthetic data and
on anonymized data from an online payment system.
|
We present a probabilistic model of an intrusion in a renewal process.
|
http://arxiv.org/abs/1709.08163v5
|
http://arxiv.org/pdf/1709.08163v5.pdf
| null |
[
"David Tolpin"
] |
[
"Intrusion Detection",
"model"
] | 2017-09-24T00:00:00 | null | null | null | null |
[] |
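The Bayesian inference described in the abstract above can be illustrated with a deliberately simplified two-hypothesis sketch. Exponential inter-arrival times and a single known intrusion rate are assumptions made here for illustration only; the paper's renewal model and MAP subsequence inference are more general.

```python
import math

def exp_loglik(gaps, rate):
    """Log-likelihood of inter-arrival gaps under an exponential
    renewal process with the given rate."""
    return sum(math.log(rate) - rate * g for g in gaps)

def posterior_intrusion(gaps, normal_rate, intrusion_rate, prior=0.1):
    """P(intrusion | gaps) for two simple hypotheses via Bayes' rule:
    the whole window is either normal traffic or an intrusion."""
    l_norm = exp_loglik(gaps, normal_rate)
    l_intr = exp_loglik(gaps, intrusion_rate)
    # Unnormalized posterior weights, then normalize.
    p_intr = prior * math.exp(l_intr)
    p_norm = (1 - prior) * math.exp(l_norm)
    return p_intr / (p_intr + p_norm)

# Rapid-fire events (small gaps) point strongly to the intrusion rate.
fast = posterior_intrusion([0.1] * 10, normal_rate=1.0, intrusion_rate=10.0)
slow = posterior_intrusion([1.0] * 10, normal_rate=1.0, intrusion_rate=10.0)
```

With these toy numbers, a burst of ten events spaced 0.1 apart yields a posterior intrusion probability near 1, while events spaced 1.0 apart yield a probability near 0.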
https://paperswithcode.com/paper/on-formalizing-fairness-in-prediction-with
|
1710.03184
| null | null |
On Formalizing Fairness in Prediction with Machine Learning
|
Machine learning algorithms for prediction are increasingly being used in
critical decisions affecting human lives. Various fairness formalizations, with
no firm consensus yet, are employed to prevent such algorithms from
systematically discriminating against people based on certain attributes
protected by law. The aim of this article is to survey how fairness is
formalized in the machine learning literature for the task of prediction and
present these formalizations with their corresponding notions of distributive
justice from the social sciences literature. We provide theoretical as well as
empirical critiques of these notions from the social sciences literature and
explain how these critiques limit the suitability of the corresponding fairness
formalizations to certain domains. We also suggest two notions of distributive
justice which address some of these critiques and discuss avenues for
prospective fairness formalizations.
| null |
http://arxiv.org/abs/1710.03184v3
|
http://arxiv.org/pdf/1710.03184v3.pdf
| null |
[
"Pratik Gajane",
"Mykola Pechenizkiy"
] |
[
"BIG-bench Machine Learning",
"Fairness",
"Prediction"
] | 2017-10-09T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/fast-dynamic-routing-based-on-weighted-kernel
|
1805.10807
| null | null |
Fast Dynamic Routing Based on Weighted Kernel Density Estimation
|
Capsules as well as dynamic routing between them are most recently proposed
structures for deep neural networks. A capsule groups data into vectors or
matrices as poses rather than conventional scalars to represent specific
properties of a target instance. Besides pose, a capsule should be attached
with a probability (often denoted as activation) for its presence. The dynamic
routing helps capsules achieve more generalization capacity with many fewer
model parameters. However, the bottleneck that prevents widespread applications
of capsule is the expense of computation during routing. To address this
problem, we generalize existing routing methods within the framework of
weighted kernel density estimation, and propose two fast routing methods with
different optimization strategies. Our methods improve the time efficiency of
routing by nearly 40% with negligible performance degradation. By stacking a
hybrid of convolutional layers and capsule layers, we construct a network
architecture to handle inputs at a resolution of $64\times{64}$ pixels. The
proposed models achieve performance on par with other leading methods in
multiple benchmarks.
|
Capsules as well as dynamic routing between them are most recently proposed structures for deep neural networks.
|
http://arxiv.org/abs/1805.10807v2
|
http://arxiv.org/pdf/1805.10807v2.pdf
| null |
[
"Suofei Zhang",
"Wei Zhao",
"Xiaofu Wu",
"Quan Zhou"
] |
[
"Density Estimation",
"Image Classification"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
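The weighted kernel density estimation framework that the routing methods above are generalized into can be sketched in its plain one-dimensional form. The Gaussian kernel, bandwidth, and toy weights here are illustrative assumptions, not the paper's specific routing procedures.

```python
import math

def weighted_kde(x, samples, weights, bandwidth=1.0):
    """Weighted Gaussian kernel density estimate at point x:

    f(x) = sum_i w_i * K((x - x_i)/h) / (h * sum_i w_i),

    with K the standard Gaussian kernel and h the bandwidth."""
    def gaussian(u):
        return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

    total_w = sum(weights)
    return sum(
        w * gaussian((x - xi) / bandwidth)
        for xi, w in zip(samples, weights)
    ) / (bandwidth * total_w)

# The estimated density should peak near the heavily weighted samples.
samples = [0.0, 0.0, 5.0]
weights = [1.0, 1.0, 0.2]
near = weighted_kde(0.0, samples, weights)
far = weighted_kde(5.0, samples, weights)
```

In a routing-style use, the weights would be the (iteratively updated) coupling coefficients and the samples the votes of lower-level capsules.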
https://paperswithcode.com/paper/visual-relationship-detection-based-on-guided
|
1805.10802
| null | null |
Visual Relationship Detection Based on Guided Proposals and Semantic Knowledge Distillation
|
A thorough comprehension of image content demands a complex grasp of the
interactions that may occur in the natural world. One of the key issues is to
describe the visual relationships between objects. When dealing with real world
data, capturing these very diverse interactions is a difficult problem. It can
be alleviated by incorporating common sense in a network. For this, we propose
a framework that makes use of semantic knowledge and estimates the relevance of
object pairs during both training and test phases. Extracted from precomputed
models and training annotations, this information is distilled into the neural
network dedicated to this task. Using this approach, we observe a significant
improvement on all classes of Visual Genome, a challenging visual relationship
dataset. A 68.5% relative gain on the recall at 100 is directly related to the
relevance estimate and a 32.7% gain to the knowledge distillation.
| null |
http://arxiv.org/abs/1805.10802v1
|
http://arxiv.org/pdf/1805.10802v1.pdf
| null |
[
"François Plesse",
"Alexandru Ginsca",
"Bertrand Delezoide",
"Françoise Prêteux"
] |
[
"Common Sense Reasoning",
"Knowledge Distillation",
"Relationship Detection",
"Visual Relationship Detection"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/opennmt-neural-machine-translation-toolkit
|
1805.11462
| null | null |
OpenNMT: Neural Machine Translation Toolkit
|
OpenNMT is an open-source toolkit for neural machine translation (NMT). The
system prioritizes efficiency, modularity, and extensibility with the goal of
supporting NMT research into model architectures, feature representations, and
source modalities, while maintaining competitive performance and reasonable
training requirements. The toolkit consists of modeling and translation
support, as well as detailed pedagogical documentation about the underlying
techniques. OpenNMT has been used in several production MT systems, modified
for numerous research papers, and is implemented across several deep learning
frameworks.
|
OpenNMT is an open-source toolkit for neural machine translation (NMT).
|
http://arxiv.org/abs/1805.11462v1
|
http://arxiv.org/pdf/1805.11462v1.pdf
|
WS 2018 3
|
[
"Guillaume Klein",
"Yoon Kim",
"Yuntian Deng",
"Vincent Nguyen",
"Jean Senellart",
"Alexander M. Rush"
] |
[
"Machine Translation",
"NMT",
"Translation"
] | 2018-05-28T00:00:00 |
https://aclanthology.org/W18-1817
|
https://aclanthology.org/W18-1817.pdf
|
opennmt-neural-machine-translation-toolkit-1
| null |
[] |
https://paperswithcode.com/paper/interactive-text2pickup-network-for-natural
|
1805.10799
| null | null |
Interactive Text2Pickup Network for Natural Language based Human-Robot Collaboration
|
In this paper, we propose the Interactive Text2Pickup (IT2P) network for
human-robot collaboration which enables an effective interaction with a human
user despite the ambiguity in user's commands. We focus on the task where a
robot is expected to pick up an object instructed by a human, and to interact
with the human when the given instruction is vague. The proposed network
understands the command from the human user and estimates the position of the
desired object first. To handle the inherent ambiguity in human language
commands, a suitable question which can resolve the ambiguity is generated. The
user's answer to the question is combined with the initial command and given
back to the network, resulting in more accurate estimation. The experiment
results show that given unambiguous commands, the proposed method can estimate
the position of the requested object with an accuracy of 98.49% based on our
test dataset. Given ambiguous language commands, we show that the accuracy of
the pick-up task increases by a factor of 1.94 after incorporating the information
obtained from the interaction.
|
To handle the inherent ambiguity in human language commands, a suitable question which can resolve the ambiguity is generated.
|
http://arxiv.org/abs/1805.10799v1
|
http://arxiv.org/pdf/1805.10799v1.pdf
| null |
[
"Hyemin Ahn",
"Sungjoon Choi",
"Nuri Kim",
"Geonho Cha",
"Songhwai Oh"
] |
[
"Object",
"Position"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/convolutional-neural-network-compression-for
|
1805.10796
| null | null |
Convolutional neural network compression for natural language processing
|
Convolutional neural networks are modern models that are very efficient in
many classification tasks. They were originally created for image processing
purposes. Later, attempts were made to use them in other domains such as
natural language processing. Artificial intelligence systems (like humanoid
robots) are very often based on embedded systems with constraints on memory,
power consumption, etc. Therefore, a convolutional neural network should be
reduced in size, because of its memory footprint, to be mapped to the given
hardware. In this paper, we present results of compressing efficient
convolutional neural networks for sentiment analysis. The main steps are
quantization and pruning processes. The method for mapping the compressed
network to an FPGA, and the results of this implementation, are presented.
The described simulations showed that a 5-bit width is enough to avoid any
drop in accuracy relative to the floating-point version of the network.
Additionally, a significant memory footprint reduction
was achieved (from 85% up to 93%).
| null |
http://arxiv.org/abs/1805.10796v1
|
http://arxiv.org/pdf/1805.10796v1.pdf
| null |
[
"Krzysztof Wróbel",
"Marcin Pietroń",
"Maciej Wielgosz",
"Michał Karwatowski",
"Kazimierz Wiatr"
] |
[
"Neural Network Compression",
"Quantization",
"Sentiment Analysis"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/deep-discriminative-latent-space-for
|
1805.10795
| null | null |
Deep Discriminative Latent Space for Clustering
|
Clustering is one of the most fundamental tasks in data analysis and machine
learning. It is central to many data-driven applications that aim to separate
the data into groups with similar patterns. Moreover, clustering is a complex
procedure that is affected significantly by the choice of the data
representation method. Recent research has demonstrated encouraging clustering
results by learning effectively these representations. In most of these works a
deep auto-encoder is initially pre-trained to minimize a reconstruction loss,
and then jointly optimized with clustering centroids in order to improve the
clustering objective. Those works focus mainly on the clustering phase of the
procedure, without exploiting the potential benefit of the initial phase.
In this paper we propose to optimize an auto-encoder with respect to a
discriminative pairwise loss function during the auto-encoder pre-training
phase. We demonstrate the high accuracy obtained by the proposed method as well
as its rapid convergence (e.g. reaching above 92% accuracy on MNIST during the
pre-training phase, in less than 50 epochs), even with small networks.
|
In most of these works a deep auto-encoder is initially pre-trained to minimize a reconstruction loss, and then jointly optimized with clustering centroids in order to improve the clustering objective.
|
http://arxiv.org/abs/1805.10795v1
|
http://arxiv.org/pdf/1805.10795v1.pdf
| null |
[
"Elad Tzoreff",
"Olga Kogan",
"Yoni Choukroun"
] |
[
"Clustering"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/deep-ct-to-mr-synthesis-using-paired-and
|
1805.10790
| null | null |
Deep CT to MR Synthesis using Paired and Unpaired Data
|
MR imaging will play a very important role in radiotherapy treatment planning
for segmentation of tumor volumes and organs. However, the use of MR-based
radiotherapy is limited because of the high cost and the increased use of metal
implants such as cardiac pacemakers and artificial joints in aging society. To
improve the accuracy of CT-based radiotherapy planning, we propose a synthetic
approach that translates a CT image into an MR image using paired and unpaired
training data. In contrast to the current synthetic methods for medical images,
which depend on sparse pairwise-aligned data or plentiful unpaired data, the
proposed approach alleviates the rigid registration challenge of paired
training and overcomes the context-misalignment problem of the unpaired
training. A generative adversarial network was trained to transform 2D brain CT
image slices into 2D brain MR image slices, combining adversarial loss, dual
cycle-consistent loss, and voxel-wise loss. The experiments were analyzed using
CT and MR images of 202 patients. Qualitative and quantitative comparisons
against independent paired training and unpaired training methods demonstrate
the superiority of our approach.
|
To improve the accuracy of CT-based radiotherapy planning, we propose a synthetic approach that translates a CT image into an MR image using paired and unpaired training data.
|
http://arxiv.org/abs/1805.10790v2
|
http://arxiv.org/pdf/1805.10790v2.pdf
| null |
[
"Cheng-Bin Jin",
"Hakil Kim",
"Wonmo Jung",
"Seongsu Joo",
"Ensik Park",
"Ahn Young Saem",
"In Ho Han",
"Jae Il Lee",
"Xuenan Cui"
] |
[
"Generative Adversarial Network"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/dnn-or-k-nn-that-is-the-generalize-vs
|
1805.06822
| null | null |
DNN or k-NN: That is the Generalize vs. Memorize Question
|
This paper studies the relationship between the classification performed by
deep neural networks (DNNs) and the decision of various classical classifiers,
namely k-nearest neighbours (k-NN), support vector machines (SVM) and logistic
regression (LR), at various layers of the network. This comparison provides us
with new insights as to the ability of neural networks to both memorize the
training data and generalize to new data at the same time, where k-NN serves as
the ideal estimator that perfectly memorizes the data. We show that
memorization of non-generalizing networks happens only at the last layers.
Moreover, the behavior of DNNs relative to the linear classifiers SVM and LR is
much the same on the training and test data, regardless of whether the network
generalizes. On the other hand, the similarity to k-NN holds only in the
absence of overfitting. Our results suggest that k-NN-like behavior of the network
on new data is a sign of generalization. Moreover, they show that memorization
and generalization, which are traditionally considered to contradict
each other, are compatible and complementary.
| null |
http://arxiv.org/abs/1805.06822v6
|
http://arxiv.org/pdf/1805.06822v6.pdf
| null |
[
"Gilad Cohen",
"Guillermo Sapiro",
"Raja Giryes"
] |
[
"Memorization"
] | 2018-05-17T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **Support Vector Machine**, or **SVM**, is a non-parametric supervised learning model. For non-linear classification and regression, they utilise the kernel trick to map inputs to high-dimensional feature spaces. SVMs construct a hyper-plane or set of hyper-planes in a high or infinite dimensional space, which can be used for classification, regression or other tasks. Intuitively, a good separation is achieved by the hyper-plane that has the largest distance to the nearest training data points of any class (so-called functional margin), since in general the larger the margin the lower the generalization error of the classifier. The figure to the right shows the decision function for a linearly separable problem, with three samples on the margin boundaries, called “support vectors”. \r\n\r\nSource: [scikit-learn](https://scikit-learn.org/stable/modules/svm.html)",
"full_name": "Support Vector Machine",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "SVM",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**$k$-Nearest Neighbors** is a clustering-based algorithm for classification and regression. It is a a type of instance-based learning as it does not attempt to construct a general internal model, but simply stores instances of the training data. Prediction is computed from a simple majority vote of the nearest neighbors of each point: a query point is assigned the data class which has the most representatives within the nearest neighbors of the point.\r\n\r\nSource of Description and Image: [scikit-learn](https://scikit-learn.org/stable/modules/neighbors.html#classification)",
"full_name": "k-Nearest Neighbors",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "k-NN",
"source_title": null,
"source_url": null
}
] |
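The k-NN method described above, the "ideal estimator that perfectly memorizes the data" in the paper's framing, reduces to a short majority-vote sketch. The toy data and choices of k here are illustrative, not the paper's experimental setup.

```python
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points (squared Euclidean distance)."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    # Sort training points by distance to the query, keep the k closest.
    neighbors = sorted(
        zip(train_X, train_y), key=lambda p: sq_dist(p[0], query)
    )[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Two tiny clusters; with k=1 the classifier memorizes training
# points exactly, which is the "perfect memorization" property
# the paper uses k-NN to represent.
train_X = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0), (0.9, 1.1)]
train_y = ["a", "a", "b", "b"]
```

Querying at a training point with k=1 returns that point's own label; a nearby novel query with k=3 is decided by its local neighborhood.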
https://paperswithcode.com/paper/genattack-practical-black-box-attacks-with
|
1805.11090
| null | null |
GenAttack: Practical Black-box Attacks with Gradient-Free Optimization
|
Deep neural networks are vulnerable to adversarial examples, even in the black-box setting, where the attacker is restricted solely to query access. Existing black-box approaches to generating adversarial examples typically require a significant number of queries, either for training a substitute network or performing gradient estimation. We introduce GenAttack, a gradient-free optimization technique that uses genetic algorithms for synthesizing adversarial examples in the black-box setting. Our experiments on different datasets (MNIST, CIFAR-10, and ImageNet) show that GenAttack can successfully generate visually imperceptible adversarial examples against state-of-the-art image recognition models with orders of magnitude fewer queries than previous approaches. Against MNIST and CIFAR-10 models, GenAttack required roughly 2,126 and 2,568 times fewer queries, respectively, than ZOO, the prior state-of-the-art black-box attack. In order to scale up the attack to large-scale high-dimensional ImageNet models, we perform a series of optimizations that further improve the query efficiency of our attack, leading to 237 times fewer queries against the Inception-v3 model than ZOO. Furthermore, we show that GenAttack can successfully attack some state-of-the-art ImageNet defenses, including ensemble adversarial training and non-differentiable or randomized input transformations. Our results suggest that evolutionary algorithms open up a promising area of research into effective black-box attacks.
|
Our experiments on different datasets (MNIST, CIFAR-10, and ImageNet) show that GenAttack can successfully generate visually imperceptible adversarial examples against state-of-the-art image recognition models with orders of magnitude fewer queries than previous approaches.
|
https://arxiv.org/abs/1805.11090v3
|
https://arxiv.org/pdf/1805.11090v3.pdf
| null |
[
"Moustafa Alzantot",
"Yash Sharma",
"Supriyo Chakraborty",
"huan zhang",
"Cho-Jui Hsieh",
"Mani Srivastava"
] |
[
"Adversarial Attack",
"Adversarial Robustness",
"Evolutionary Algorithms"
] | 2018-05-28T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)",
"full_name": "Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Average Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Auxiliary Classifiers** are type of architectural component that seek to improve the convergence of very deep networks. They are classifier heads we attach to layers before the end of the network. The motivation is to push useful gradients to the lower layers to make them immediately useful and improve the convergence during training by combatting the vanishing gradient problem. They are notably used in the Inception family of convolutional neural networks.",
"full_name": "Auxiliary Classifier",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "The following is a list of miscellaneous components used in neural networks.",
"name": "Miscellaneous Components",
"parent": null
},
"name": "Auxiliary Classifier",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/fd8e2064e094f301d910b91a757b860aae3e3116/torch/optim/rmsprop.py#L69-L108",
"description": "**RMSProp** is an unpublished adaptive learning rate optimizer [proposed by Geoff Hinton](http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf). The motivation is that the magnitude of gradients can differ for different weights, and can change during learning, making it hard to choose a single global learning rate. RMSProp tackles this by keeping a moving average of the squared gradient and adjusting the weight updates by this magnitude. The gradient updates are performed as:\r\n\r\n$$E\\left[g^{2}\\right]\\_{t} = \\gamma E\\left[g^{2}\\right]\\_{t-1} + \\left(1 - \\gamma\\right) g^{2}\\_{t}$$\r\n\r\n$$\\theta\\_{t+1} = \\theta\\_{t} - \\frac{\\eta}{\\sqrt{E\\left[g^{2}\\right]\\_{t} + \\epsilon}}g\\_{t}$$\r\n\r\nHinton suggests $\\gamma=0.9$, with a good default for $\\eta$ as $0.001$.\r\n\r\nImage: [Alec Radford](https://twitter.com/alecrad)",
"full_name": "RMSProp",
"introduced_year": 2013,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "RMSProp",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/6db1569c89094cf23f3bc41f79275c45e9fcb3f3/torchvision/models/inception.py#L210",
"description": "**Inception-v3 Module** is an image block used in the [Inception-v3](https://paperswithcode.com/method/inception-v3) architecture. This architecture is used on the coarsest (8 × 8) grids to promote high dimensional representations.",
"full_name": "Inception-v3 Module",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Model Blocks** are building blocks used in image models such as convolutional neural networks. Below you can find a continuously updating list of image model blocks.",
"name": "Image Model Blocks",
"parent": null
},
"name": "Inception-v3 Module",
"source_title": "Rethinking the Inception Architecture for Computer Vision",
"source_url": "http://arxiv.org/abs/1512.00567v3"
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)",
"full_name": "Label Smoothing",
"introduced_year": 1985,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Label Smoothing",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/6db1569c89094cf23f3bc41f79275c45e9fcb3f3/torchvision/models/inception.py#L64",
"description": "**Inception-v3** is a convolutional neural network architecture from the Inception family that makes several improvements including using [Label Smoothing](https://paperswithcode.com/method/label-smoothing), Factorized 7 x 7 convolutions, and the use of an auxiliary classifer to propagate label information lower down the network (along with the use of [batch normalization](https://paperswithcode.com/method/batch-normalization) for layers in the sidehead).",
"full_name": "Inception-v3",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "If you have questions or want to make special travel arrangements, you can make them online or call ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. For hearing or speech impaired assistance dial 711 to be connected through the National Relay Service.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "Inception-v3",
"source_title": "Rethinking the Inception Architecture for Computer Vision",
"source_url": "http://arxiv.org/abs/1512.00567v3"
}
] |
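The Label Smoothing entry above gives the smoothed targets $\frac{\epsilon}{k}$ for wrong classes and $1-\frac{k-1}{k}\epsilon$ for the true class. A minimal NumPy sketch of that replacement (the function name `smooth_labels` is illustrative, not from any cited paper):

```python
import numpy as np

def smooth_labels(y, num_classes, eps=0.1):
    """Convert integer class labels to smoothed one-hot targets.

    The true class gets 1 - eps + eps/num_classes and every other
    class gets eps/num_classes, matching the entry's description.
    """
    targets = np.full((len(y), num_classes), eps / num_classes)
    targets[np.arange(len(y)), y] = 1.0 - eps + eps / num_classes
    return targets

# Example: 3 classes, eps = 0.1 -> each row still sums to 1
t = smooth_labels(np.array([0, 2]), num_classes=3, eps=0.1)
```

Note that each row remains a valid probability distribution: $(k-1)\frac{\epsilon}{k} + 1 - \epsilon + \frac{\epsilon}{k} = 1$.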
https://paperswithcode.com/paper/adaptive-scaling-for-sparse-detection-in
|
1805.00250
| null | null |
Adaptive Scaling for Sparse Detection in Information Extraction
|
This paper focuses on detection tasks in information extraction, where
positive instances are sparsely distributed and models are usually evaluated
using F-measure on positive classes. These characteristics often result in
deficient performance of neural network based detection models. In this paper,
we propose adaptive scaling, an algorithm which can handle the positive
sparsity problem and directly optimize over F-measure via dynamic
cost-sensitive learning. To this end, we borrow the idea of marginal utility
from economics and propose a theoretical framework for instance importance
measuring without introducing any additional hyper-parameters. Experiments show
that our algorithm leads to a more effective and stable training of neural
network based detection models.
|
This paper focuses on detection tasks in information extraction, where positive instances are sparsely distributed and models are usually evaluated using F-measure on positive classes.
|
http://arxiv.org/abs/1805.00250v2
|
http://arxiv.org/pdf/1805.00250v2.pdf
|
ACL 2018 7
|
[
"Hongyu Lin",
"Yaojie Lu",
"Xianpei Han",
"Le Sun"
] |
[] | 2018-05-01T00:00:00 |
https://aclanthology.org/P18-1095
|
https://aclanthology.org/P18-1095.pdf
|
adaptive-scaling-for-sparse-detection-in-1
| null |
[] |
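The adaptive-scaling abstract above frames F-measure optimization as dynamic cost-sensitive learning over sparse positives. As a hedged sketch of the general mechanism only (a fixed negative-class weight here, whereas the paper derives the weighting dynamically from marginal utility; all names are illustrative):

```python
import numpy as np

def weighted_log_loss(p, y, w_neg):
    """Binary cross-entropy where negative examples are scaled by w_neg.

    p: predicted positive-class probabilities; y: 0/1 labels.
    Down-weighting the abundant negative class is the generic
    cost-sensitive idea the paper builds on.
    """
    eps = 1e-12
    pos = -np.log(p + eps) * y
    neg = -np.log(1.0 - p + eps) * (1 - y) * w_neg
    return float(np.mean(pos + neg))

y = np.array([1, 0, 0, 0])           # sparse positives
p = np.array([0.7, 0.2, 0.1, 0.3])
loss_equal = weighted_log_loss(p, y, w_neg=1.0)
loss_scaled = weighted_log_loss(p, y, w_neg=0.25)  # negatives matter less
```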
https://paperswithcode.com/paper/keep-and-learn-continual-learning-by
|
1805.10784
| null | null |
Keep and Learn: Continual Learning by Constraining the Latent Space for Knowledge Preservation in Neural Networks
|
Data is one of the most important factors in machine learning. However, even
if we have high-quality data, there is a situation in which access to the data
is restricted. For example, access to the medical data from outside is strictly
limited due to the privacy issues. In this case, we have to learn a model
sequentially only with the data accessible in the corresponding stage. In this
work, we propose a new method for preserving learned knowledge by modeling the
high-level feature space and the output space to be mutually informative, and
constraining feature vectors to lie in the modeled space during training. The
proposed method is easy to implement as it can be applied by simply adding a
reconstruction loss to an objective function. We evaluate the proposed method
on CIFAR-10/100 and a chest X-ray dataset, and show benefits in terms of
knowledge preservation compared to previous approaches.
| null |
http://arxiv.org/abs/1805.10784v1
|
http://arxiv.org/pdf/1805.10784v1.pdf
| null |
[
"Hyo-Eun Kim",
"SeungWook Kim",
"Jaehwan Lee"
] |
[
"Continual Learning"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
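The Keep and Learn abstract notes the method "can be applied by simply adding a reconstruction loss to an objective function." A minimal sketch of that combination, with toy feature arrays and an illustrative weight `lam` (not a value from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def combined_loss(task_loss, features, reconstructed, lam=0.1):
    """Total objective = task loss + lam * reconstruction loss on the
    high-level feature space, per the idea of constraining features to
    stay in a modeled (mutually informative) space.
    """
    rec = float(np.mean((features - reconstructed) ** 2))
    return task_loss + lam * rec, rec

feats = rng.standard_normal((4, 8))
recon = feats + 0.01 * rng.standard_normal((4, 8))  # nearly faithful decoder
total, rec = combined_loss(task_loss=0.5, features=feats, reconstructed=recon)
```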
https://paperswithcode.com/paper/strength-factors-an-uncertainty-system-for-a
|
1705.10726
| null | null |
Strength Factors: An Uncertainty System for a Quantified Modal Logic
|
We present a new system S for handling uncertainty in a quantified modal
logic (first-order modal logic). The system is based on both probability theory
and proof theory. The system is derived from Chisholm's epistemology. We
concretize Chisholm's system by grounding his undefined and primitive (i.e.
foundational) concept of reasonableness in probability and proof theory. S can
be useful in systems that have to interact with humans and provide
justifications for their uncertainty. As a demonstration of the system, we
apply the system to provide a solution to the lottery paradox. Another
advantage of the system is that it can be used to provide uncertainty values
for counterfactual statements. Counterfactuals are statements that an agent
knows for sure are false. Among other cases, counterfactuals are useful when
systems have to explain their actions to users. Uncertainties for
counterfactuals fall out naturally from our system.
Efficient reasoning in just simple first-order logic is a hard problem.
Resolution-based first-order reasoning systems have made significant progress
over the last several decades in building systems that have solved non-trivial
tasks (even unsolved conjectures in mathematics). We present a sketch of a
novel algorithm for reasoning that extends first-order resolution.
Finally, while there have been many systems of uncertainty for propositional
logics, first-order logics and propositional modal logics, there has been very
little work in building systems of uncertainty for first-order modal logics.
The work described below is in progress; once finished, it will address this
lack.
| null |
http://arxiv.org/abs/1705.10726v2
|
http://arxiv.org/pdf/1705.10726v2.pdf
| null |
[
"Naveen Sundar Govindarajulu",
"Selmer Bringsjord"
] |
[
"counterfactual"
] | 2017-05-30T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/object-level-representation-learning-for-few
|
1805.10777
| null | null |
Object-Level Representation Learning for Few-Shot Image Classification
|
Few-shot learning that trains image classifiers over few labeled examples per
category is a challenging task. In this paper, we propose to exploit an
additional big dataset with different categories to improve the accuracy of
few-shot learning over our target dataset. Our approach is based on the
observation that images can be decomposed into objects, which may appear in
images from both the additional dataset and our target dataset. We use the
object-level relation learned from the additional dataset to infer the
similarity of images in our target dataset with unseen categories. Nearest
neighbor search is applied to do image classification, which is a
non-parametric model and thus does not need fine-tuning. We evaluate our
algorithm on two popular datasets, namely Omniglot and MiniImagenet. We obtain
8.5% and 2.7% absolute improvements for 5-way 1-shot and 5-way 5-shot
experiments on MiniImagenet, respectively. Source code will be published upon
acceptance.
| null |
http://arxiv.org/abs/1805.10777v1
|
http://arxiv.org/pdf/1805.10777v1.pdf
| null |
[
"Liangqu Long",
"Wei Wang",
"Jun Wen",
"Meihui Zhang",
"Qian Lin",
"Beng Chin Ooi"
] |
[
"Classification",
"Few-Shot Image Classification",
"Few-Shot Learning",
"General Classification",
"image-classification",
"Image Classification",
"Representation Learning"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
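The classification step in the abstract above is nearest-neighbor search over learned representations, with no fine-tuning. A toy sketch with stand-in 2-D embeddings in place of the learned object-level representations (names illustrative):

```python
import numpy as np

def nearest_neighbor_classify(query, support, support_labels):
    """Label a query embedding with the class of its nearest support
    embedding under Euclidean distance. Non-parametric: no parameters
    are fit, matching the abstract's description.
    """
    dists = np.linalg.norm(support - query, axis=1)
    return support_labels[int(np.argmin(dists))]

support = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
labels = np.array([0, 0, 1])
pred = nearest_neighbor_classify(np.array([4.5, 5.2]), support, labels)
```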
https://paperswithcode.com/paper/universality-of-deep-convolutional-neural
|
1805.10769
| null | null |
Universality of Deep Convolutional Neural Networks
|
Deep learning has been widely applied and brought breakthroughs in speech
recognition, computer vision, and many other domains. The involved deep neural
network architectures and computational issues have been well studied in
machine learning. But a theoretical foundation is lacking for understanding
the approximation or generalization ability of deep learning methods generated
by the network architectures such as deep convolutional neural networks having
convolutional structures. Here we show that a deep convolutional neural network
(CNN) is universal, meaning that it can be used to approximate any continuous
function to an arbitrary accuracy when the depth of the neural network is large
enough. This answers an open question in learning theory. Our quantitative
estimate, given tightly in terms of the number of free parameters to be
computed, verifies the efficiency of deep CNNs in dealing with large
dimensional data. Our study also demonstrates the role of convolutions in deep
CNNs.
| null |
http://arxiv.org/abs/1805.10769v2
|
http://arxiv.org/pdf/1805.10769v2.pdf
| null |
[
"Ding-Xuan Zhou"
] |
[
"Learning Theory",
"Open-Ended Question Answering",
"speech-recognition",
"Speech Recognition"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/registration-and-fusion-of-multi-spectral
|
1711.01543
| null | null |
Registration and Fusion of Multi-Spectral Images Using a Novel Edge Descriptor
|
In this paper we introduce a fully end-to-end approach for multi-spectral
image registration and fusion. Our method for fusion combines images from
different spectral channels into a single fused image by different approaches
for low and high frequency signals. A prerequisite of fusion is a stage of
geometric alignment between the spectral bands, commonly referred to as
registration. Unfortunately, common methods for image registration of a single
spectral channel do not yield reasonable results on images from different
modalities. To that end, we introduce a new algorithm for multi-spectral image
registration, based on a novel edge descriptor of feature points. Our method
achieves an accurate alignment of a level that allows us to further fuse the
images. As our experiments show, we produce a high quality of multi-spectral
image registration and fusion under many challenging scenarios.
| null |
http://arxiv.org/abs/1711.01543v5
|
http://arxiv.org/pdf/1711.01543v5.pdf
| null |
[
"Nati Ofir",
"Shai Silberstein",
"Dani Rozenbaum",
"Yosi Keller",
"Sharon Duvdevani Bar"
] |
[
"Image Registration"
] | 2017-11-05T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/memory-augmented-neural-networks-for-1
|
1805.10768
| null | null |
Deep Trustworthy Knowledge Tracing
|
Knowledge tracing (KT), a key component of an intelligent tutoring system, is a machine learning technique that estimates the mastery level of a student based on his/her past performance. The objective of KT is to predict a student's response to the next question. Compared with traditional KT models, deep learning-based KT (DLKT) models show better predictive performance because of the representation power of deep neural networks. Various methods have been proposed to improve the performance of DLKT, but few studies have been conducted on the reliability of DLKT. In this work, we claim that the existing DLKTs are not reliable in real education environments. To substantiate the claim, we show limitations of DLKT from various perspectives such as knowledge state update failure, catastrophic forgetting, and non-interpretability. We then propose a novel regularization to address these problems. The proposed method allows us to achieve trustworthy DLKT. In addition, the proposed model which is trained on scenarios with forgetting can also be easily extended to scenarios without forgetting.
| null |
https://arxiv.org/abs/1805.10768v3
|
https://arxiv.org/pdf/1805.10768v3.pdf
| null |
[
"Heonseok Ha",
"Uiwon Hwang",
"Yongjun Hong",
"Jahee Jang",
"Sungroh Yoon"
] |
[
"Knowledge Tracing"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/understanding-generalization-and-optimization
|
1805.10767
| null | null |
Understanding Generalization and Optimization Performance of Deep CNNs
|
This work aims to provide understandings on the remarkable success of deep
convolutional neural networks (CNNs) by theoretically analyzing their
generalization performance and establishing optimization guarantees for
gradient descent based training algorithms. Specifically, for a CNN model
consisting of $l$ convolutional layers and one fully connected layer, we prove
that its generalization error is bounded by
$\mathcal{O}(\sqrt{\theta\widetilde{\varrho}/n})$ where $\theta$ denotes freedom
degree of the network parameters and
$\widetilde{\varrho}=\mathcal{O}(\log(\prod_{i=1}^{l}r_{i}(k_{i}-s_{i}+1)/p)+\log(r_{l+1}))$ encapsulates architecture parameters including
the kernel size $k_{i}$, stride $s_{i}$, pooling size $p$ and parameter
magnitude $r_{i}$. To our best knowledge, this is the first generalization
bound that only depends on $\mathcal{O}(\log(\prod_{i=1}^{l+1}r_{i}))$,
tighter than existing ones that all involve an exponential term like
$\mathcal{O}(\prod_{i=1}^{l+1}r_{i})$. Besides, we prove that for an
arbitrary gradient descent algorithm, the computed approximate stationary point
by minimizing empirical risk is also an approximate stationary point to the
population risk. This well explains why gradient descent training algorithms
usually perform sufficiently well in practice. Furthermore, we prove the
one-to-one correspondence and convergence guarantees for the non-degenerate
stationary points between the empirical and population risks. It implies that
the computed local minimum for the empirical risk is also close to a local
minimum for the population risk, thus ensuring the good generalization
performance of CNNs.
| null |
http://arxiv.org/abs/1805.10767v1
|
http://arxiv.org/pdf/1805.10767v1.pdf
|
ICML 2018 7
|
[
"Pan Zhou",
"Jiashi Feng"
] |
[] | 2018-05-28T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=1932
|
http://proceedings.mlr.press/v80/zhou18a/zhou18a.pdf
|
understanding-generalization-and-optimization-1
| null |
[] |
https://paperswithcode.com/paper/improving-the-resolution-of-cnn-feature-maps
|
1805.10766
| null | null |
Improving the Resolution of CNN Feature Maps Efficiently with Multisampling
|
We describe a new class of subsampling techniques for CNNs, termed multisampling, that significantly increases the amount of information kept by feature maps through subsampling layers. One version of our method, which we call checkered subsampling, significantly improves the accuracy of state-of-the-art architectures such as DenseNet and ResNet without any additional parameters and, remarkably, improves the accuracy of certain pretrained ImageNet models without any training or fine-tuning. We glean possible insight into the nature of data augmentations and demonstrate experimentally that coarse feature maps are bottlenecking the performance of neural networks in image classification.
|
We describe a new class of subsampling techniques for CNNs, termed multisampling, that significantly increases the amount of information kept by feature maps through subsampling layers.
|
https://arxiv.org/abs/1805.10766v2
|
https://arxiv.org/pdf/1805.10766v2.pdf
| null |
[
"Shayan Sadigh",
"Pradeep Sen"
] |
[
"General Classification",
"image-classification",
"Image Classification"
] | 2018-05-28T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)",
"full_name": "Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Average Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/densenet.py#L113",
"description": "A **Concatenated Skip Connection** is a type of skip connection that seeks to reuse features by concatenating them to new layers, allowing more information to be retained from previous layers of the network. This contrasts with say, residual connections, where element-wise summation is used instead to incorporate information from previous layers. This type of skip connection is prominently used in DenseNets (and also Inception networks), which the Figure to the right illustrates.",
"full_name": "Concatenated Skip Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Concatenated Skip Connection",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/densenet.py#L93",
"description": "A **Dense Block** is a module used in convolutional neural networks that connects *all layers* (with matching feature-map sizes) directly with each other. It was originally proposed as part of the [DenseNet](https://paperswithcode.com/method/densenet) architecture. To preserve the feed-forward nature, each layer obtains additional inputs from all preceding layers and passes on its own feature-maps to all subsequent layers. In contrast to [ResNets](https://paperswithcode.com/method/resnet), we never combine features through summation before they are passed into a layer; instead, we combine features by concatenating them. Hence, the $\\ell^{th}$ layer has $\\ell$ inputs, consisting of the feature-maps of all preceding convolutional blocks. Its own feature-maps are passed on to all $L-\\ell$ subsequent layers. This introduces $\\frac{L(L+1)}{2}$ connections in an $L$-layer network, instead of just $L$, as in traditional architectures: \"dense connectivity\".",
"full_name": "Dense Block",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Model Blocks** are building blocks used in image models such as convolutional neural networks. Below you can find a continuously updating list of image model blocks.",
"name": "Image Model Blocks",
"parent": null
},
"name": "Dense Block",
"source_title": "Densely Connected Convolutional Networks",
"source_url": "http://arxiv.org/abs/1608.06993v5"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **DenseNet** is a type of convolutional neural network that utilises dense connections between layers, through [Dense Blocks](https://paperswithcode.com/method/dense-block), where all layers (with matching feature-map sizes) are connected directly with each other. To preserve the feed-forward nature, each layer obtains additional inputs from all preceding layers and passes on its own feature-maps to all subsequent layers.",
"full_name": "DenseNet",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutional Neural Networks** are used to extract features from images (and videos), employing convolutions as their primary operator. Below you can find a continuously updating list of convolutional neural networks.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "DenseNet",
"source_title": "Densely Connected Convolutional Networks",
"source_url": "http://arxiv.org/abs/1608.06993v5"
},
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116",
"description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.",
"full_name": "Batch Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Batch Normalization",
"source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"source_url": "http://arxiv.org/abs/1502.03167v3"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L75",
"description": "A **Bottleneck Residual Block** is a variant of the [residual block](https://paperswithcode.com/method/residual-block) that utilises 1x1 convolutions to create a bottleneck. The use of a bottleneck reduces the number of parameters and matrix multiplications. The idea is to make residual blocks as thin as possible to increase depth and have less parameters. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture, and are used as part of deeper ResNets such as ResNet-50 and ResNet-101.",
"full_name": "Bottleneck Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Bottleneck Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/baa592b215804927e28638f6a7f3318cbc411d49/torchvision/models/resnet.py#L157",
"description": "**Global Average Pooling** is a pooling operation designed to replace fully connected layers in classical CNNs. The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers on top of the feature maps, we take the average of each feature map, and the resulting vector is fed directly into the [softmax](https://paperswithcode.com/method/softmax) layer. \r\n\r\nOne advantage of global [average pooling](https://paperswithcode.com/method/average-pooling) over the fully connected layers is that it is more native to the [convolution](https://paperswithcode.com/method/convolution) structure by enforcing correspondences between feature maps and categories. Thus the feature maps can be easily interpreted as categories confidence maps. Another advantage is that there is no parameter to optimize in the global average pooling thus overfitting is avoided at this layer. Furthermore, global average pooling sums out the spatial information, thus it is more robust to spatial translations of the input.",
"full_name": "Global Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Global Average Pooling",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L35",
"description": "**Residual Blocks** are skip-connection blocks that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture.\r\n \r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$. The $\\mathcal{F}({x})$ acts like a residual, hence the name 'residual block'.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers. Having skip connections allows the network to more easily learn identity-like mappings.\r\n\r\nNote that in practice, [Bottleneck Residual Blocks](https://paperswithcode.com/method/bottleneck-residual-block) are used for deeper ResNets, such as ResNet-50 and ResNet-101, as these bottleneck blocks are less computationally intensive.",
"full_name": "Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/0adb5843766092fba584791af76383125fd0d01c/torch/nn/init.py#L389",
"description": "**Kaiming Initialization**, or **He Initialization**, is an initialization method for neural networks that takes into account the non-linearity of activation functions, such as [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nA proper initialization method should avoid reducing or magnifying the magnitudes of input signals exponentially. Using a derivation they work out that the condition to stop this happening is:\r\n\r\n$$\\frac{1}{2}n\\_{l}\\text{Var}\\left[w\\_{l}\\right] = 1 $$\r\n\r\nThis implies an initialization scheme of:\r\n\r\n$$ w\\_{l} \\sim \\mathcal{N}\\left(0, 2/n\\_{l}\\right)$$\r\n\r\nThat is, a zero-centered Gaussian with standard deviation of $\\sqrt{2/{n}\\_{l}}$ (variance shown in equation above). Biases are initialized at $0$.",
"full_name": "Kaiming Initialization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Initialization** methods are used to initialize the weights in a neural network. Below can you find a continuously updating list of initialization methods.",
"name": "Initialization",
"parent": null
},
"name": "Kaiming Initialization",
"source_title": "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification",
"source_url": "http://arxiv.org/abs/1502.01852v1"
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "In today’s digital age, Bitcoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Bitcoin transaction not confirmed, your Bitcoin wallet not showing balance, or you're trying to recover a lost Bitcoin wallet, knowing where to get help is essential. That’s why the Bitcoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Bitcoin Customer Support Number +1-833-534-1729\r\nBitcoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Bitcoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Bitcoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Bitcoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Bitcoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Bitcoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. 
Knowing how to recover a lost Bitcoin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Bitcoin Deposit Not Received\r\nIf someone has sent you Bitcoin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Bitcoin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Bitcoin Transaction Stuck or Pending\r\nSometimes your Bitcoin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Bitcoin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Bitcoin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Bitcoin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Bitcoin tech.\r\n\r\n24/7 Availability: Bitcoin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Bitcoin Support and Wallet Issues\r\nQ1: Can Bitcoin support help me recover stolen BTC?\r\nA: While Bitcoin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Bitcoin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Bitcoin’s official number (Bitcoin is decentralized), it connects you to trained professionals experienced in resolving all major Bitcoin issues.\r\n\r\nFinal Thoughts\r\nBitcoin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. 
Whether it's a Bitcoin transaction not confirmed, your Bitcoin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Bitcoin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Bitcoin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "If you have questions or want to make special travel arrangements, you can make them online or call ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. For hearing or speech impaired assistance dial 711 to be connected through the National Relay Service.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "Bitcoin Customer Service Number +1-833-534-1729",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
}
] |
https://paperswithcode.com/paper/maximum-causal-tsallis-entropy-imitation
|
1805.08336
| null | null |
Maximum Causal Tsallis Entropy Imitation Learning
|
In this paper, we propose a novel maximum causal Tsallis entropy (MCTE)
framework for imitation learning which can efficiently learn a sparse
multi-modal policy distribution from demonstrations. We provide the full
mathematical analysis of the proposed framework. First, the optimal solution of
an MCTE problem is shown to be a sparsemax distribution, whose supporting set
can be adjusted. The proposed method has advantages over a softmax distribution
in that it can exclude unnecessary actions by assigning zero probability.
Second, we prove that an MCTE problem is equivalent to robust Bayes estimation
in the sense of the Brier score. Third, we propose a maximum causal Tsallis
entropy imitation learning (MCTEIL) algorithm with a sparse mixture density
network (sparse MDN) by modeling mixture weights using a sparsemax
distribution. In particular, we show that the causal Tsallis entropy of an MDN
encourages exploration and efficient mixture utilization while Boltzmann Gibbs
entropy is less effective. We validate the proposed method in two simulation
studies and MCTEIL outperforms existing imitation learning methods in terms of
average returns and learning multi-modal policies.
| null |
http://arxiv.org/abs/1805.08336v2
|
http://arxiv.org/pdf/1805.08336v2.pdf
|
NeurIPS 2018 12
|
[
"Kyungjae Lee",
"Sungjoon Choi",
"Songhwai Oh"
] |
[
"Imitation Learning"
] | 2018-05-22T00:00:00 |
http://papers.nips.cc/paper/7693-maximum-causal-tsallis-entropy-imitation-learning
|
http://papers.nips.cc/paper/7693-maximum-causal-tsallis-entropy-imitation-learning.pdf
|
maximum-causal-tsallis-entropy-imitation-1
| null |
[
{
"code_snippet_url": "https://github.com/vene/sparse-structured-attention/blob/e89a2162bdde3a86b7dfdba22e292ea3bd3880d3/pytorch/torchsparseattn/sparsemax.py#L47",
"description": "**Sparsemax** is a type of activation/output function similar to the traditional [softmax](https://paperswithcode.com/method/softmax), but able to output sparse probabilities. \r\n\r\n$$ \\text{sparsemax}\\left(z\\right) = \\arg\\_{p∈\\Delta^{K−1}}\\min||\\mathbf{p} - \\mathbf{z}||^{2} $$",
"full_name": "Sparsemax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Sparsemax",
"source_title": "From Softmax to Sparsemax: A Sparse Model of Attention and Multi-Label Classification",
"source_url": "http://arxiv.org/abs/1602.02068v2"
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/clustering-by-latent-dimensions
|
1805.10759
| null | null |
Clustering by latent dimensions
|
This paper introduces a new clustering technique, called {\em dimensional
clustering}, which clusters each data point by its latent {\em pointwise
dimension}, which is a measure of the dimensionality of the data set local to
that point. Pointwise dimension is invariant under a broad class of
transformations. As a result, dimensional clustering can be usefully applied to
a wide range of datasets. Concretely, we present a statistical model which
estimates the pointwise dimension of a dataset around the points in that
dataset using the distance of each point from its $n^{\text{th}}$ nearest
neighbor. We demonstrate the applicability of our technique to the analysis of
dynamical systems, images, and complex human movements.
| null |
http://arxiv.org/abs/1805.10759v1
|
http://arxiv.org/pdf/1805.10759v1.pdf
| null |
[
"Shohei Hidaka",
"Neeraj Kashyap"
] |
[
"Clustering"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/dual-policy-iteration
|
1805.10755
| null | null |
Dual Policy Iteration
|
Recently, a novel class of Approximate Policy Iteration (API) algorithms have
demonstrated impressive practical performance (e.g., ExIt from [2],
AlphaGo-Zero from [27]). This new family of algorithms maintains, and
alternately optimizes, two policies: a fast, reactive policy (e.g., a deep
neural network) deployed at test time, and a slow, non-reactive policy (e.g.,
Tree Search), that can plan multiple steps ahead. The reactive policy is
updated under supervision from the non-reactive policy, while the non-reactive
policy is improved with guidance from the reactive policy. In this work we
study this Dual Policy Iteration (DPI) strategy in an alternating optimization
framework and provide a convergence analysis that extends existing API theory.
We also develop a special instance of this framework which reduces the update
of non-reactive policies to model-based optimal control using learned local
models, and provides a theoretically sound way of unifying model-free and
model-based RL approaches with unknown dynamics. We demonstrate the efficacy of
our approach on various continuous control Markov Decision Processes.
| null |
http://arxiv.org/abs/1805.10755v2
|
http://arxiv.org/pdf/1805.10755v2.pdf
|
NeurIPS 2018 12
|
[
"Wen Sun",
"Geoffrey J. Gordon",
"Byron Boots",
"J. Andrew Bagnell"
] |
[
"continuous-control",
"Continuous Control"
] | 2018-05-28T00:00:00 |
http://papers.nips.cc/paper/7937-dual-policy-iteration
|
http://papers.nips.cc/paper/7937-dual-policy-iteration.pdf
|
dual-policy-iteration-1
| null |
[] |
https://paperswithcode.com/paper/low-rank-tensor-completion-by-truncated
|
1712.00704
| null | null |
Low-Rank Tensor Completion by Truncated Nuclear Norm Regularization
|
Currently, low-rank tensor completion has gained cumulative attention in
recovering incomplete visual data whose partial elements are missing. By taking
a color image or video as a three-dimensional (3D) tensor, previous studies
have suggested several definitions of tensor nuclear norm. However, they have
limitations and may not properly approximate the real rank of a tensor.
Besides, they do not explicitly use the low-rank property in optimization. It
is proved that the recently proposed truncated nuclear norm (TNN) can replace
the traditional nuclear norm, as a better estimation to the rank of a matrix.
Thus, this paper presents a new method called the tensor truncated nuclear norm
(T-TNN), which proposes a new definition of tensor nuclear norm and extends the
truncated nuclear norm from the matrix case to the tensor case. Beneficial from
the low rankness of TNN, our approach improves the efficacy of tensor
completion. We exploit the previously proposed tensor singular value
decomposition and the alternating direction method of multipliers in
optimization. Extensive experiments on real-world videos and images demonstrate
that the performance of our approach is superior to those of existing methods.
|
Currently, low-rank tensor completion has gained cumulative attention in recovering incomplete visual data whose partial elements are missing.
|
http://arxiv.org/abs/1712.00704v5
|
http://arxiv.org/pdf/1712.00704v5.pdf
| null |
[
"Shengke Xue",
"Wenyuan Qiu",
"Fan Liu",
"Xinyu Jin"
] |
[] | 2017-12-03T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/deep-adversarial-context-aware-landmark
|
1805.10737
| null | null |
Deep Adversarial Context-Aware Landmark Detection for Ultrasound Imaging
|
Real-time localization of prostate gland in trans-rectal ultrasound images is
a key technology that is required to automate the ultrasound guided prostate
biopsy procedures. In this paper, we propose a new deep learning based approach
which is aimed at localizing several prostate landmarks efficiently and
robustly. We propose a multitask learning approach primarily to make the
overall algorithm more contextually aware. In this approach, we not only
consider the explicit learning of landmark locations, but also build-in a
mechanism to learn the contour of the prostate. This multitask learning is
further coupled with an adversarial arm to promote the generation of feasible
structures. We have trained this network using ~4000 labeled trans-rectal
ultrasound images and tested on an independent set of images with ground truth
landmark locations. We have achieved an overall Dice score of 92.6% for the
adversarially trained multitask approach, which is significantly better than
the Dice score of 88.3% obtained by only learning of landmark locations. The
overall mean distance error using the adversarial multitask approach has also
improved by 20% while reducing the standard deviation of the error compared to
learning landmark locations only. In terms of computational complexity both
approaches can process the images in real-time using standard computer with a
standard CUDA enabled GPU.
| null |
http://arxiv.org/abs/1805.10737v1
|
http://arxiv.org/pdf/1805.10737v1.pdf
| null |
[
"Ahmet Tuysuzoglu",
"Jeremy Tan",
"Kareem Eissa",
"Atilla P. Kiraly",
"Mamadou Diallo",
"Ali Kamen"
] |
[
"GPU"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/doing-the-impossible-why-neural-networks-can
|
1805.04928
| null | null |
Doing the impossible: Why neural networks can be trained at all
|
As deep neural networks grow in size, from thousands to millions to billions
of weights, the performance of those networks becomes limited by our ability to
accurately train them. A common naive question arises: if we have a system with
billions of degrees of freedom, don't we also need billions of samples to train
it? Of course, the success of deep learning indicates that reliable models can
be learned with reasonable amounts of data. Similar questions arise in protein
folding, spin glasses and biological neural networks. With effectively infinite
potential folding/spin/wiring configurations, how does the system find the
precise arrangement that leads to useful and robust results? Simple sampling of
the possible configurations until an optimal one is reached is not a viable
option even if one waited for the age of the universe. On the contrary, there
appears to be a mechanism in the above phenomena that forces them to achieve
configurations that live on a low-dimensional manifold, avoiding the curse of
dimensionality. In the current work we use the concept of mutual information
between successive layers of a deep neural network to elucidate this mechanism
and suggest possible ways of exploiting it to accelerate training. We show that
adding structure to the neural network that enforces higher mutual information
between layers speeds training and leads to more accurate results. High mutual
information between layers implies that the effective number of free parameters
is exponentially smaller than the raw number of tunable weights.
| null |
http://arxiv.org/abs/1805.04928v2
|
http://arxiv.org/pdf/1805.04928v2.pdf
| null |
[
"Nathan O. Hodas",
"Panos Stinis"
] |
[
"All",
"Protein Folding"
] | 2018-05-13T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/perceive-your-users-in-depth-learning
|
1805.10727
| null | null |
Perceive Your Users in Depth: Learning Universal User Representations from Multiple E-commerce Tasks
|
Tasks such as search and recommendation have become increasingly important
for E-commerce to deal with the information overload problem. To meet the
diverse needs of different users, personalization plays an important role. In
many large portals such as Taobao and Amazon, there are a bunch of different
types of search and recommendation tasks operating simultaneously for
personalization. However, most current techniques address each task separately.
This is suboptimal as no information about users is shared across different tasks.
In this work, we propose to learn universal user representations across
multiple tasks for more effective personalization. In particular, user
behavior sequences (e.g., click, bookmark or purchase of products) are modeled
by LSTM and attention mechanism by integrating all the corresponding content,
behavior and temporal information. User representations are shared and learned
in an end-to-end setting across multiple tasks. Benefiting from better
information utilization of multiple tasks, the user representations are more
effective to reflect their interests and are more general to be transferred to new
tasks. We refer to this work as Deep User Perception Network (DUPN) and conduct an
extensive set of offline and online experiments. Across all tested five different
tasks, our DUPN consistently achieves better results by giving more effective
user representations. Moreover, we deploy DUPN in large-scale operational tasks
in Taobao. Detailed implementations, e.g., incremental model updating, are
also provided to address the practical issues for real-world applications.
| null |
http://arxiv.org/abs/1805.10727v1
|
http://arxiv.org/pdf/1805.10727v1.pdf
| null |
[
"Yabo Ni",
"Dan Ou",
"Shichen Liu",
"Xiang Li",
"Wenwu Ou",
"An-Xiang Zeng",
"Luo Si"
] |
[] | 2018-05-28T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] |
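The sigmoid and tanh formulas quoted in the method entries above can be checked directly; a minimal sketch in plain Python (no framework assumed), including the identity that makes tanh a rescaled sigmoid:

```python
import math

def sigmoid(x):
    # f(x) = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # f(x) = (e^x - e^-x) / (e^x + e^-x)
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

# tanh is a rescaled, recentred sigmoid: tanh(x) = 2*sigmoid(2x) - 1,
# which is why both saturate (and their gradients vanish) away from 0.
```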
https://paperswithcode.com/paper/network-modeling-of-short-over-dispersed
|
1605.02869
| null | null |
An Efficient and Flexible Spike Train Model via Empirical Bayes
|
Accurate statistical models of neural spike responses can characterize the information carried by neural populations. But the limited samples of spike counts during recording usually result in model overfitting. Besides, current models assume spike counts to be Poisson-distributed, which ignores the fact that many neurons demonstrate over-dispersed spiking behaviour. Although the Negative Binomial Generalized Linear Model (NB-GLM) provides a powerful tool for modeling over-dispersed spike counts, the maximum likelihood-based standard NB-GLM leads to highly variable and inaccurate parameter estimates. Thus, we propose a hierarchical parametric empirical Bayes method to estimate the neural spike responses among a neuronal population. Our method integrates both Generalized Linear Models (GLMs) and empirical Bayes theory, which aims to (1) improve the accuracy and reliability of parameter estimation, compared to the maximum likelihood-based method for NB-GLM and Poisson-GLM; (2) effectively capture the over-dispersion nature of spike counts from both simulated data and experimental data; and (3) provide insight into both neural interactions and spiking behaviours of the neuronal populations. We apply our approach to study both simulated data and experimental neural data. The estimation of simulation data indicates that the new framework can accurately predict mean spike counts simulated from different models and recover the connectivity weights among neural populations. The estimation based on retinal neurons demonstrates that the proposed method outperforms both NB-GLM and Poisson-GLM in terms of the predictive log-likelihood of held-out data. Code is available at https://doi.org/10.5281/zenodo.4704423
| null |
https://arxiv.org/abs/1605.02869v6
|
https://arxiv.org/pdf/1605.02869v6.pdf
| null |
[
"Qi She",
"Xiaoli Wu",
"Beth Jelfs",
"Adam S. Charles",
"Rosa H. M. Chan"
] |
[
"Bayesian Inference",
"parameter estimation"
] | 2016-05-10T00:00:00 | null | null | null | null |
[] |
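The over-dispersion that motivates the NB-GLM in the abstract above reduces to a simple moment comparison; a small sketch under the standard (r, p) parameterization of the negative binomial (the function name is illustrative, not from the paper):

```python
def nb_moments(r, p):
    # Negative binomial with dispersion r and success probability p:
    # mean = r(1-p)/p, variance = r(1-p)/p^2.
    mean = r * (1.0 - p) / p
    var = r * (1.0 - p) / p ** 2
    return mean, var

# A Poisson model forces variance == mean; here variance/mean = 1/p > 1
# for any p < 1, i.e. the counts are over-dispersed.
```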
https://paperswithcode.com/paper/significance-testing-in-non-sparse-high
|
1610.02122
| null | null |
Significance testing in non-sparse high-dimensional linear models
|
In high-dimensional linear models, the sparsity assumption is typically made,
stating that most of the parameters are equal to zero. Under the sparsity
assumption, estimation and, recently, inference have been well studied.
However, in practice, the sparsity assumption is not checkable and more importantly
is often violated; a large number of covariates might be expected to be
associated with the response, indicating that possibly all, rather than just a
few, parameters are non-zero. A natural example is a genome-wide gene
expression profiling, where all genes are believed to affect a common disease
marker. We show that existing inferential methods are sensitive to the sparsity
assumption, and may, in turn, result in the severe lack of control of Type-I
error. In this article, we propose a new inferential method, named CorrT, which
is robust to model misspecification such as heteroscedasticity and lack of
sparsity. CorrT is shown to have Type I error approaching the nominal level for
\textit{any} models and Type II error approaching zero for sparse and many
dense models.
In fact, CorrT is also shown to be optimal in a variety of frameworks:
sparse, non-sparse and hybrid models where sparse and dense signals are mixed.
Numerical experiments show a favorable performance of the CorrT test compared
to the state-of-the-art methods.
| null |
http://arxiv.org/abs/1610.02122v4
|
http://arxiv.org/pdf/1610.02122v4.pdf
| null |
[
"Yinchu Zhu",
"Jelena Bradic"
] |
[
"Vocal Bursts Intensity Prediction"
] | 2016-10-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/designing-for-democratization-introducing
|
1805.10723
| null | null |
Designing for Democratization: Introducing Novices to Artificial Intelligence Via Maker Kits
|
Existing research highlights the myriad of benefits realized when technology
is sufficiently democratized and made accessible to non-technical or novice
users. However, democratizing complex technologies such as artificial
intelligence (AI) remains hard. In this work, we draw on theoretical
underpinnings from the democratization of innovation, in exploring the design
of maker kits that help introduce novice users to complex technologies. We
report on our work designing TJBot: an open source cardboard robot that can be
programmed using pre-built AI services. We highlight principles we adopted in
this process (approachable design, simplicity, extensibility and
accessibility), insights we learned from showing the kit at workshops (66
participants) and how users interacted with the project on GitHub over a
12-month period (Nov 2016 - Nov 2017). We find that the project succeeds in
attracting novice users (40% of users who forked the project are new to GitHub)
and a variety of demographics are interested in prototyping use cases such as
home automation, task delegation, teaching and learning.
| null |
http://arxiv.org/abs/1805.10723v3
|
http://arxiv.org/pdf/1805.10723v3.pdf
| null |
[
"Victor Dibia",
"Aaron Cox",
"Justin Weisz"
] |
[] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/object-region-mining-with-adversarial-erasing
|
1703.08448
| null | null |
Object Region Mining with Adversarial Erasing: A Simple Classification to Semantic Segmentation Approach
|
We investigate a principled way to progressively mine discriminative object
regions using classification networks to address the weakly-supervised semantic
segmentation problems. Classification networks are only responsive to small and
sparse discriminative regions from the object of interest, which deviates from
the requirement of the segmentation task that needs to localize dense, interior
and integral regions for pixel-wise inference. To mitigate this gap, we propose
a new adversarial erasing approach for localizing and expanding object regions
progressively. Starting with a single small object region, our proposed
approach drives the classification network to sequentially discover new and
complement object regions by erasing the current mined regions in an
adversarial manner. These localized regions eventually constitute a dense and
complete object region for learning semantic segmentation. To further enhance
the quality of the discovered regions by adversarial erasing, an online
prohibitive segmentation learning approach is developed to collaborate with
adversarial erasing by providing auxiliary segmentation supervision modulated
by the more reliable classification scores. Despite its apparent simplicity,
the proposed approach achieves 55.0% and 55.7% mean Intersection-over-Union
(mIoU) scores on PASCAL VOC 2012 val and test sets, which are the new
state-of-the-arts.
| null |
http://arxiv.org/abs/1703.08448v3
|
http://arxiv.org/pdf/1703.08448v3.pdf
|
CVPR 2017
|
[
"Yunchao Wei",
"Jiashi Feng",
"Xiaodan Liang",
"Ming-Ming Cheng",
"Yao Zhao",
"Shuicheng Yan"
] |
[
"Classification",
"General Classification",
"Object",
"Segmentation",
"Semantic Segmentation",
"Weakly supervised Semantic Segmentation",
"Weakly-Supervised Semantic Segmentation"
] | 2017-03-24T00:00:00 |
http://openaccess.thecvf.com/content_cvpr_2017/html/Wei_Object_Region_Mining_CVPR_2017_paper.html
|
http://openaccess.thecvf.com/content_cvpr_2017/papers/Wei_Object_Region_Mining_CVPR_2017_paper.pdf
|
object-region-mining-with-adversarial-erasing-1
| null |
[] |
https://paperswithcode.com/paper/revisiting-dilated-convolution-a-simple-1
|
1805.04574
| null | null |
Revisiting Dilated Convolution: A Simple Approach for Weakly- and Semi- Supervised Semantic Segmentation
|
Despite the remarkable progress, weakly supervised segmentation approaches
are still inferior to their fully supervised counterparts. We observe that the
performance gap mainly comes from their limitation on learning to produce
high-quality dense object localization maps from image-level supervision. To
mitigate such a gap, we revisit the dilated convolution [1] and reveal how it
can be utilized in a novel way to effectively overcome this critical limitation
of weakly supervised segmentation approaches. Specifically, we find that
varying dilation rates can effectively enlarge the receptive fields of
convolutional kernels and more importantly transfer the surrounding
discriminative information to non-discriminative object regions, promoting the
emergence of these regions in the object localization maps. Then, we design a
generic classification network equipped with convolutional blocks of different
dilated rates. It can produce dense and reliable object localization maps and
effectively benefit both weakly- and semi- supervised semantic segmentation.
Despite the apparent simplicity, our proposed approach obtains superior
performance over state-of-the-arts. In particular, it achieves 60.8% and 67.6%
mIoU scores on Pascal VOC 2012 test set in weakly- (only image-level labels are
available) and semi- (1,464 segmentation masks are available) supervised
settings, which are the new state-of-the-arts.
| null |
http://arxiv.org/abs/1805.04574v2
|
http://arxiv.org/pdf/1805.04574v2.pdf
|
CVPR 2018
|
[
"Yunchao Wei",
"Huaxin Xiao",
"Honghui Shi",
"Zequn Jie",
"Jiashi Feng",
"Thomas S. Huang"
] |
[
"Object",
"Object Localization",
"Segmentation",
"Semantic Segmentation",
"Semi-Supervised Semantic Segmentation",
"Weakly supervised segmentation"
] | 2018-05-11T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/ecb88c5d11895a68e5f20917d27a0debbc0f0697/torch/nn/modules/conv.py#L260",
"description": "**Dilated Convolutions** are a type of [convolution](https://paperswithcode.com/method/convolution) that “inflate” the kernel by inserting holes between the kernel elements. An additional parameter $l$ (dilation rate) indicates how much the kernel is widened. There are usually $l-1$ spaces inserted between kernel elements. \r\n\r\nNote that concept has existed in past literature under different names, for instance the *algorithme a trous*, an algorithm for wavelet decomposition (Holschneider et al., 1987; Shensa, 1992).",
"full_name": "Dilated Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Dilated Convolution",
"source_title": "Multi-Scale Context Aggregation by Dilated Convolutions",
"source_url": "http://arxiv.org/abs/1511.07122v3"
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] |
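The receptive-field effect of dilation described in the method entry above reduces to simple arithmetic; a minimal sketch (the function name is illustrative):

```python
def effective_kernel_size(k, rate):
    # A k-tap kernel with dilation rate `rate` has rate-1 holes between
    # taps, so it spans k + (k-1)*(rate-1) input positions.
    return k + (k - 1) * (rate - 1)

# Progressive dilation within a block (e.g. rates 1, 2, 4 for 3-tap
# kernels) widens the receptive field without adding parameters.
```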
https://paperswithcode.com/paper/multi-region-segmentation-of-bladder-cancer
|
1805.10720
| null | null |
Multi-region segmentation of bladder cancer structures in MRI with progressive dilated convolutional networks
|
Precise segmentation of bladder walls and tumor regions is an essential step
towards non-invasive identification of tumor stage and grade, which is critical
for treatment decision and prognosis of patients with bladder cancer (BC).
However, the automatic delineation of bladder walls and tumor in magnetic
resonance images (MRI) is a challenging task, due to important bladder shape
variations, strong intensity inhomogeneity in urine and very high variability
across population, particularly on tumors appearance. To tackle these issues,
we propose to use a deep fully convolutional neural network. The proposed
network includes dilated convolutions to increase the receptive field without
incurring extra cost nor degrading its performance. Furthermore, we introduce
progressive dilations in each convolutional block, thereby enabling extensive
receptive fields without the need for large dilation rates. The proposed
network is evaluated on 3.0T T2-weighted MRI scans from 60 pathologically
confirmed patients with BC. Experiments show the proposed model to achieve
high accuracy, with a mean Dice similarity coefficient of 0.98, 0.84 and 0.69
for inner wall, outer wall and tumor region, respectively. These results
represent a very good agreement with reference contours and an increase in
performance compared to existing methods. In addition, inference times are less
than a second for a whole 3D volume, which is between 2-3 orders of magnitude
faster than related state-of-the-art methods for this application. We showed
that a CNN can yield precise segmentation of bladder walls and tumors in
bladder cancer patients on MRI. The whole segmentation process is
fully-automatic and yields results in very good agreement with the reference
standard, demonstrating the viability of deep learning models for the automatic
multi-region segmentation of bladder cancer MRI images.
| null |
http://arxiv.org/abs/1805.10720v4
|
http://arxiv.org/pdf/1805.10720v4.pdf
| null |
[
"Jose Dolz",
"Xiaopan Xu",
"Jerome Rony",
"Jing Yuan",
"Yang Liu",
"Eric Granger",
"Christian Desrosiers",
"Xi Zhang",
"Ismail Ben Ayed",
"Hongbing Lu"
] |
[
"Prognosis",
"Segmentation"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/high-quality-bidirectional-generative
|
1805.10717
| null | null |
Discriminator Feature-based Inference by Recycling the Discriminator of GANs
|
Generative adversarial networks (GANs) successfully generate high quality data by learning a mapping from a latent vector to the data. Various studies assert that the latent space of a GAN is semantically meaningful and can be utilized for advanced data analysis and manipulation. To analyze the real data in the latent space of a GAN, it is necessary to build an inference mapping from the data to the latent vector. This paper proposes an effective algorithm to accurately infer the latent vector by utilizing GAN discriminator features. Our primary goal is to increase inference mapping accuracy with minimal training overhead. Furthermore, using the proposed algorithm, we suggest a conditional image generation algorithm, namely a spatially conditioned GAN. Extensive evaluations confirmed that the proposed inference algorithm achieved more semantically accurate inference mapping than existing methods and can be successfully applied to advanced conditional image generation tasks.
| null |
https://arxiv.org/abs/1805.10717v2
|
https://arxiv.org/pdf/1805.10717v2.pdf
| null |
[
"Duhyeon Bang",
"Seoungyoon Kang",
"Hyunjung Shim"
] |
[] | 2018-05-28T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "In today’s digital age, Dogecoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're trying to recover a lost Dogecoin wallet, knowing where to get help is essential. That’s why the Dogecoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Dogecoin Customer Support Number +1-833-534-1729\r\nDogecoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Dogecoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Dogecoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Dogecoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Dogecoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Dogecoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. 
A **Generative Adversarial Network (GAN)** pits two neural networks against each other: a generator $G$ maps noise $z \sim p\_{z}$ to synthetic samples, while a discriminator $D$ estimates the probability that a given sample came from the training data rather than from $G$. The two networks are trained jointly on the minimax objective\r\n\r\n$$ \min\_{G}\max\_{D} \mathbb{E}\_{x \sim p\_{data}}\left[\log D(x)\right] + \mathbb{E}\_{z \sim p\_{z}}\left[\log\left(1 - D(G(z))\right)\right] $$\r\n\r\nAt the optimum, the generator's distribution matches the data distribution and the discriminator outputs $1/2$ everywhere.",
    "full_name": "Generative Adversarial Network",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Dogecoin Customer Service Number +1-833-534-1729",
"source_title": "Generative Adversarial Networks",
"source_url": "https://arxiv.org/abs/1406.2661v1"
}
] |
https://paperswithcode.com/paper/reinforcement-and-imitation-learning-for
|
1802.09564
| null |
HJWGdbbCW
|
Reinforcement and Imitation Learning for Diverse Visuomotor Skills
|
We propose a model-free deep reinforcement learning method that leverages a
small amount of demonstration data to assist a reinforcement learning agent. We
apply this approach to robotic manipulation tasks and train end-to-end
visuomotor policies that map directly from RGB camera inputs to joint
velocities. We demonstrate that our approach can solve a wide variety of
visuomotor tasks, for which engineering a scripted controller would be
laborious. In experiments, our reinforcement and imitation agent achieves
significantly better performances than agents trained with reinforcement
learning or imitation learning alone. We also illustrate that these policies,
trained with large visual and dynamics variations, can achieve preliminary
successes in zero-shot sim2real transfer. A brief visual description of this
work can be viewed in https://youtu.be/EDl8SQUNjj0
|
We propose a model-free deep reinforcement learning method that leverages a small amount of demonstration data to assist a reinforcement learning agent.
|
http://arxiv.org/abs/1802.09564v2
|
http://arxiv.org/pdf/1802.09564v2.pdf
|
ICLR 2018
|
[
"Yuke Zhu",
"Ziyu Wang",
"Josh Merel",
"Andrei Rusu",
"Tom Erez",
"Serkan Cabi",
"Saran Tunyasuvunakool",
"János Kramár",
"Raia Hadsell",
"Nando de Freitas",
"Nicolas Heess"
] |
[
"Deep Reinforcement Learning",
"Imitation Learning",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-02-26T00:00:00 |
https://openreview.net/forum?id=HJWGdbbCW
|
https://openreview.net/pdf?id=HJWGdbbCW
|
reinforcement-and-imitation-learning-for-1
| null |
[] |
https://paperswithcode.com/paper/synergistic-reconstruction-and-synthesis-via
|
1805.10704
| null | null |
Synergistic Reconstruction and Synthesis via Generative Adversarial Networks for Accelerated Multi-Contrast MRI
|
Multi-contrast MRI acquisitions of an anatomy enrich the magnitude of
information available for diagnosis. Yet, excessive scan times associated with
additional contrasts may be a limiting factor. Two mainstream approaches for
enhanced scan efficiency are reconstruction of undersampled acquisitions and
synthesis of missing acquisitions. In reconstruction, performance decreases
towards higher acceleration factors with diminished sampling density
particularly at high-spatial-frequencies. In synthesis, the absence of data
samples from the target contrast can lead to artefactual sensitivity or
insensitivity to image features. Here we propose a new approach for synergistic
reconstruction-synthesis of multi-contrast MRI based on conditional generative
adversarial networks. The proposed method preserves high-frequency details of
the target contrast by relying on the shared high-frequency information
available from the source contrast, and prevents feature leakage or loss by
relying on the undersampled acquisitions of the target contrast. Demonstrations
on brain MRI datasets from healthy subjects and patients indicate the superior
performance of the proposed method compared to previous state-of-the-art. The
proposed method can help improve the quality and scan efficiency of
multi-contrast MRI exams.
| null |
http://arxiv.org/abs/1805.10704v1
|
http://arxiv.org/pdf/1805.10704v1.pdf
| null |
[
"Salman Ul Hassan Dar",
"Mahmut Yurt",
"Mohammad Shahdloo",
"Muhammed Emrullah Ildız",
"Tolga Çukur"
] |
[
"Anatomy"
] | 2018-05-27T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/exponential-convergence-rates-for-batch
|
1805.10694
| null | null |
Exponential convergence rates for Batch Normalization: The power of length-direction decoupling in non-convex optimization
|
Normalization techniques such as Batch Normalization have been applied
successfully for training deep neural networks. Yet, despite its apparent
empirical benefits, the reasons behind the success of Batch Normalization are
mostly hypothetical. We here aim to provide a more thorough theoretical
understanding from a classical optimization perspective. Our main contribution
towards this goal is the identification of various problem instances in the
realm of machine learning where % -- under certain assumptions-- Batch
Normalization can provably accelerate optimization. We argue that this
acceleration is due to the fact that Batch Normalization splits the
optimization task into optimizing length and direction of the parameters
separately. This allows gradient-based methods to leverage a favourable global
structure in the loss landscape that we prove to exist in Learning Halfspace
problems and neural network training with Gaussian inputs. We thereby turn
Batch Normalization from an effective practical heuristic into a provably
converging algorithm for these settings. Furthermore, we substantiate our
analysis with empirical evidence that suggests the validity of our theoretical
results in a broader context.
| null |
http://arxiv.org/abs/1805.10694v3
|
http://arxiv.org/pdf/1805.10694v3.pdf
| null |
[
"Jonas Kohler",
"Hadi Daneshmand",
"Aurelien Lucchi",
"Ming Zhou",
"Klaus Neymeyr",
"Thomas Hofmann"
] |
[] | 2018-05-27T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116",
"description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.",
"full_name": "Batch Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Batch Normalization",
"source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"source_url": "http://arxiv.org/abs/1502.03167v3"
}
] |
https://paperswithcode.com/paper/strategyproof-linear-regression-in-high
|
1805.10693
| null | null |
Strategyproof Linear Regression in High Dimensions
|
This paper is part of an emerging line of work at the intersection of machine
learning and mechanism design, which aims to avoid noise in training data by
correctly aligning the incentives of data sources. Specifically, we focus on
the ubiquitous problem of linear regression, where strategyproof mechanisms
have previously been identified in two dimensions. In our setting, agents have
single-peaked preferences and can manipulate only their response variables. Our
main contribution is the discovery of a family of group strategyproof linear
regression mechanisms in any number of dimensions, which we call generalized
resistant hyperplane mechanisms. The game-theoretic properties of these
mechanisms -- and, in fact, their very existence -- are established through a
connection to a discrete version of the Ham Sandwich Theorem.
| null |
http://arxiv.org/abs/1805.10693v1
|
http://arxiv.org/pdf/1805.10693v1.pdf
| null |
[
"Yiling Chen",
"Chara Podimata",
"Ariel D. Procaccia",
"Nisarg Shah"
] |
[
"regression",
"Vocal Bursts Intensity Prediction"
] | 2018-05-27T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/compact-and-computationally-efficient
|
1805.10692
| null | null |
Compact and Computationally Efficient Representation of Deep Neural Networks
|
At the core of any inference procedure in deep neural networks are dot
product operations, which are the component that require the highest
computational resources. A common approach to reduce the cost of inference is
to reduce its memory complexity by lowering the entropy of the weight matrices
of the neural network, e.g., by pruning and quantizing their elements. However,
the quantized weight matrices are then usually represented either by a dense or
sparse matrix storage format, whose associated dot product complexity is not
bounded by the entropy of the matrix. This means that the associated inference
complexity ultimately depends on the implicit statistical assumptions that
these matrix representations make about the weight distribution, which can be
in many cases suboptimal. In this paper we address this issue and present new
efficient representations for matrices with low entropy statistics. These new
matrix formats have the novel property that their memory and algorithmic
complexity are implicitly bounded by the entropy of the matrix, consequently
implying that they are guaranteed to become more efficient as the entropy of
the matrix is being reduced. In our experiments we show that performing the dot
product under these new matrix formats can indeed be more energy and time
efficient under practically relevant assumptions. For instance, we are able to
attain up to x42 compression ratios, x5 speed ups and x90 energy savings when
we convert in a lossless manner the weight matrices of state-of-the-art
networks such as AlexNet, VGG-16, ResNet152 and DenseNet into the new matrix
formats and benchmark their respective dot product operation.
| null |
http://arxiv.org/abs/1805.10692v2
|
http://arxiv.org/pdf/1805.10692v2.pdf
| null |
[
"Simon Wiedemann",
"Klaus-Robert Müller",
"Wojciech Samek"
] |
[] | 2018-05-27T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Pruning",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Model Compression",
"parent": null
},
"name": "Pruning",
"source_title": "Pruning Filters for Efficient ConvNets",
"source_url": "http://arxiv.org/abs/1608.08710v3"
},
{
"code_snippet_url": "https://github.com/lorenzopapa5/SPEED",
"description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.",
"full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings",
"introduced_year": 2000,
"main_collection": null,
"name": "SPEED",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116",
"description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.",
"full_name": "Batch Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Batch Normalization",
"source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"source_url": "http://arxiv.org/abs/1502.03167v3"
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)",
"full_name": "Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Average Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/1c5c289b6218eb1026dcb5fd9738231401cfccea/torch/nn/modules/normalization.py#L13",
"description": "**Local Response Normalization** is a normalization layer that implements the idea of lateral inhibition. Lateral inhibition is a concept in neurobiology that refers to the phenomenon of an excited neuron inhibiting its neighbours: this leads to a peak in the form of a local maximum, creating contrast in that area and increasing sensory perception. In practice, we can either normalize within the same channel or normalize across channels when we apply LRN to convolutional neural networks.\r\n\r\n$$ b_{c} = a_{c}\\left(k + \\frac{\\alpha}{n}\\sum_{c'=\\max(0, c-n/2)}^{\\min(N-1,c+n/2)}a_{c'}^2\\right)^{-\\beta} $$\r\n\r\nWhere the size is the number of neighbouring channels used for normalization, $\\alpha$ is multiplicative factor, $\\beta$ an exponent and $k$ an additive factor",
"full_name": "Local Response Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Local Response Normalization",
"source_title": "ImageNet Classification with Deep Convolutional Neural Networks",
"source_url": "http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/densenet.py#L113",
"description": "A **Concatenated Skip Connection** is a type of skip connection that seeks to reuse features by concatenating them to new layers, allowing more information to be retained from previous layers of the network. This contrasts with say, residual connections, where element-wise summation is used instead to incorporate information from previous layers. This type of skip connection is prominently used in DenseNets (and also Inception networks), which the Figure to the right illustrates.",
"full_name": "Concatenated Skip Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Concatenated Skip Connection",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/baa592b215804927e28638f6a7f3318cbc411d49/torchvision/models/resnet.py#L157",
"description": "**Global Average Pooling** is a pooling operation designed to replace fully connected layers in classical CNNs. The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers on top of the feature maps, we take the average of each feature map, and the resulting vector is fed directly into the [softmax](https://paperswithcode.com/method/softmax) layer. \r\n\r\nOne advantage of global [average pooling](https://paperswithcode.com/method/average-pooling) over the fully connected layers is that it is more native to the [convolution](https://paperswithcode.com/method/convolution) structure by enforcing correspondences between feature maps and categories. Thus the feature maps can be easily interpreted as categories confidence maps. Another advantage is that there is no parameter to optimize in the global average pooling thus overfitting is avoided at this layer. Furthermore, global average pooling sums out the spatial information, thus it is more robust to spatial translations of the input.",
"full_name": "Global Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Global Average Pooling",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/densenet.py#L93",
"description": "A **Dense Block** is a module used in convolutional neural networks that connects *all layers* (with matching feature-map sizes) directly with each other. It was originally proposed as part of the [DenseNet](https://paperswithcode.com/method/densenet) architecture. To preserve the feed-forward nature, each layer obtains additional inputs from all preceding layers and passes on its own feature-maps to all subsequent layers. In contrast to [ResNets](https://paperswithcode.com/method/resnet), we never combine features through summation before they are passed into a layer; instead, we combine features by concatenating them. Hence, the $\\ell^{th}$ layer has $\\ell$ inputs, consisting of the feature-maps of all preceding convolutional blocks. Its own feature-maps are passed on to all $L-\\ell$ subsequent layers. This introduces $\\frac{L(L+1)}{2}$ connections in an $L$-layer network, instead of just $L$, as in traditional architectures: \"dense connectivity\".",
"full_name": "Dense Block",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Model Blocks** are building blocks used in image models such as convolutional neural networks. Below you can find a continuously updating list of image model blocks.",
"name": "Image Model Blocks",
"parent": null
},
"name": "Dense Block",
"source_title": "Densely Connected Convolutional Networks",
"source_url": "http://arxiv.org/abs/1608.06993v5"
},
{
"code_snippet_url": "https://github.com/prlz77/ResNeXt.pytorch/blob/39fb8d03847f26ec02fb9b880ecaaa88db7a7d16/models/model.py#L42",
"description": "A **Grouped Convolution** uses a group of convolutions - multiple kernels per layer - resulting in multiple channel outputs per layer. This leads to wider networks helping a network learn a varied set of low level and high level features. The original motivation of using Grouped Convolutions in [AlexNet](https://paperswithcode.com/method/alexnet) was to distribute the model over multiple GPUs as an engineering compromise. But later, with models such as [ResNeXt](https://paperswithcode.com/method/resnext), it was shown this module could be used to improve classification accuracy. Specifically by exposing a new dimension through grouped convolutions, *cardinality* (the size of set of transformations), we can increase accuracy by increasing it.",
"full_name": "Grouped Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Grouped Convolution",
"source_title": "ImageNet Classification with Deep Convolutional Neural Networks",
"source_url": "http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/0adb5843766092fba584791af76383125fd0d01c/torch/nn/init.py#L389",
"description": "**Kaiming Initialization**, or **He Initialization**, is an initialization method for neural networks that takes into account the non-linearity of activation functions, such as [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nA proper initialization method should avoid reducing or magnifying the magnitudes of input signals exponentially. Using a derivation they work out that the condition to stop this happening is:\r\n\r\n$$\\frac{1}{2}n\\_{l}\\text{Var}\\left[w\\_{l}\\right] = 1 $$\r\n\r\nThis implies an initialization scheme of:\r\n\r\n$$ w\\_{l} \\sim \\mathcal{N}\\left(0, 2/n\\_{l}\\right)$$\r\n\r\nThat is, a zero-centered Gaussian with standard deviation of $\\sqrt{2/{n}\\_{l}}$ (variance shown in equation above). Biases are initialized at $0$.",
"full_name": "Kaiming Initialization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Initialization** methods are used to initialize the weights in a neural network. Below can you find a continuously updating list of initialization methods.",
"name": "Initialization",
"parent": null
},
"name": "Kaiming Initialization",
"source_title": "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification",
"source_url": "http://arxiv.org/abs/1502.01852v1"
},
{
"code_snippet_url": "",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/dansuh17/alexnet-pytorch/blob/d0c1b1c52296ffcbecfbf5b17e1d1685b4ca6744/model.py#L40",
"description": "To make a reservation or communicate with Expedia, the quickest option is typically to call their customer service at +1-805-330-4056 or +1-805-330-4056. You can also use the live chat feature on their website or app, or contact them via social media.ggfdf\r\n\r\n\r\nHow do I speak to a person at Expedia?How do I speak to a person at Expedia?To make a reservation or communicate with Expedia, the quickest option is typically to call their customer service at +1-805-330-4056 or +1-805-330-4056. You can also use the live chat feature on their website or app, or contact them via social media.To make a reservation or communicate with Expedia, the quickest option is typically to call their customer service at +1-805-330-4056 or +1-805-330-4056. You can also use the live chat feature on their website or app, or contact them via social media.\r\n\r\n\r\n\r\nTo make a reservation or communicate with Expedia, the quickest option is typically to call their customer service at +1-805-330-4056 or +1-805-330-4056. You can also use the live chat feature on their website or app, or contact them via social media.To make a reservation or communicate with Expedia, the quickest option is typically to call their customer service at +1-805-330-4056 or +1-805-330-4056. You can also use the live chat feature on their website or app, or contact them via social media.To make a reservation or communicate with Expedia, the quickest option is typically to call their customer service at +1-805-330-4056 or +1-805-330-4056. You can also use the live chat feature on their website or app, or contact them via social media.chgd",
"full_name": "How do I speak to a person at Expedia?-/+/",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "If you have questions or want to make special travel arrangements, you can make them online or call ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. For hearing or speech impaired assistance dial 711 to be connected through the National Relay Service.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "How do I speak to a person at Expedia?-/+/",
"source_title": "ImageNet Classification with Deep Convolutional Neural Networks",
"source_url": "http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks"
},
{
"code_snippet_url": "",
"description": "In today’s digital age, XRP has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a XRP transaction not confirmed, your XRP wallet not showing balance, or you're trying to recover a lost XRP wallet, knowing where to get help is essential. That’s why the XRP customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the XRP Customer Support Number +1-833-534-1729\r\nXRP operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. XRP Transaction Not Confirmed\r\nOne of the most common concerns is when a XRP transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. XRP Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A XRP wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost XRP Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. Knowing how to recover a lost XRP wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. 
With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. XRP Deposit Not Received\r\nIf someone has sent you XRP but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A XRP deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. XRP Transaction Stuck or Pending\r\nSometimes your XRP transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. XRP Wallet Recovery Phrase Issue\r\nYour 12 or 24-word XRP wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the XRP Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and XRP tech.\r\n\r\n24/7 Availability: XRP doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About XRP Support and Wallet Issues\r\nQ1: Can XRP support help me recover stolen BTC?\r\nA: While XRP transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: XRP transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not XRP’s official number (XRP is decentralized), it connects you to trained professionals experienced in resolving all major XRP issues.\r\n\r\nFinal Thoughts\r\nXRP is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. Whether it's a XRP transaction not confirmed, your XRP wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the XRP customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. 
Expert help is just a call away—+1-833-534-1729.",
"full_name": "XRP Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "If you have questions or want to make special travel arrangements, you can make them online or call ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. For hearing or speech impaired assistance dial 711 to be connected through the National Relay Service.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "XRP Customer Service Number +1-833-534-1729",
"source_title": "Densely Connected Convolutional Networks",
"source_url": "http://arxiv.org/abs/1608.06993v5"
}
] |
https://paperswithcode.com/paper/identifying-object-states-in-cooking-related
|
1805.06956
| null | null |
Identifying Object States in Cooking-Related Images
|
Understanding object states is as important as object recognition for robotic
task planning and manipulation. To our knowledge, this paper explicitly
introduces and addresses the state identification problem in cooking related
images for the first time. In this paper, objects and ingredients in cooking
videos are explored and the most frequent objects are analyzed. Eleven states
from the most frequent cooking objects are examined and a dataset of images
containing those objects and their states is created. As a solution to the
state identification problem, a Resnet based deep model is proposed. The model
is initialized with Imagenet weights and trained on the dataset of eleven
classes. The trained state identification model is evaluated on a subset of the
Imagenet dataset and state labels are provided using a combination of the model
with manual checking. Moreover, an individual model is fine-tuned for each
object in the dataset using the weights from the initially trained model and
object-specific images, where significant improvement is demonstrated.
| null |
http://arxiv.org/abs/1805.06956v3
|
http://arxiv.org/pdf/1805.06956v3.pdf
| null |
[
"Ahmad Babaeian Jelodar",
"Md Sirajus Salekin",
"Yu Sun"
] |
[
"Object",
"Object Recognition",
"Task Planning"
] | 2018-05-17T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)",
"full_name": "Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Average Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116",
"description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.",
"full_name": "Batch Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Batch Normalization",
"source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"source_url": "http://arxiv.org/abs/1502.03167v3"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L75",
"description": "A **Bottleneck Residual Block** is a variant of the [residual block](https://paperswithcode.com/method/residual-block) that utilises 1x1 convolutions to create a bottleneck. The use of a bottleneck reduces the number of parameters and matrix multiplications. The idea is to make residual blocks as thin as possible to increase depth and have less parameters. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture, and are used as part of deeper ResNets such as ResNet-50 and ResNet-101.",
"full_name": "Bottleneck Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Bottleneck Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/baa592b215804927e28638f6a7f3318cbc411d49/torchvision/models/resnet.py#L157",
"description": "**Global Average Pooling** is a pooling operation designed to replace fully connected layers in classical CNNs. The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers on top of the feature maps, we take the average of each feature map, and the resulting vector is fed directly into the [softmax](https://paperswithcode.com/method/softmax) layer. \r\n\r\nOne advantage of global [average pooling](https://paperswithcode.com/method/average-pooling) over the fully connected layers is that it is more native to the [convolution](https://paperswithcode.com/method/convolution) structure by enforcing correspondences between feature maps and categories. Thus the feature maps can be easily interpreted as categories confidence maps. Another advantage is that there is no parameter to optimize in the global average pooling thus overfitting is avoided at this layer. Furthermore, global average pooling sums out the spatial information, thus it is more robust to spatial translations of the input.",
"full_name": "Global Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Global Average Pooling",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L35",
"description": "**Residual Blocks** are skip-connection blocks that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture.\r\n \r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$. The $\\mathcal{F}({x})$ acts like a residual, hence the name 'residual block'.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers. Having skip connections allows the network to more easily learn identity-like mappings.\r\n\r\nNote that in practice, [Bottleneck Residual Blocks](https://paperswithcode.com/method/bottleneck-residual-block) are used for deeper ResNets, such as ResNet-50 and ResNet-101, as these bottleneck blocks are less computationally intensive.",
"full_name": "Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/0adb5843766092fba584791af76383125fd0d01c/torch/nn/init.py#L389",
"description": "**Kaiming Initialization**, or **He Initialization**, is an initialization method for neural networks that takes into account the non-linearity of activation functions, such as [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nA proper initialization method should avoid reducing or magnifying the magnitudes of input signals exponentially. Using a derivation they work out that the condition to stop this happening is:\r\n\r\n$$\\frac{1}{2}n\\_{l}\\text{Var}\\left[w\\_{l}\\right] = 1 $$\r\n\r\nThis implies an initialization scheme of:\r\n\r\n$$ w\\_{l} \\sim \\mathcal{N}\\left(0, 2/n\\_{l}\\right)$$\r\n\r\nThat is, a zero-centered Gaussian with standard deviation of $\\sqrt{2/{n}\\_{l}}$ (variance shown in equation above). Biases are initialized at $0$.",
"full_name": "Kaiming Initialization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Initialization** methods are used to initialize the weights in a neural network. Below can you find a continuously updating list of initialization methods.",
"name": "Initialization",
"parent": null
},
"name": "Kaiming Initialization",
"source_title": "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification",
"source_url": "http://arxiv.org/abs/1502.01852v1"
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
        "description": "**Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. Residual blocks are stacked on top of each other to form the network: e.g. a ResNet-50 has fifty layers using these blocks.",
        "full_name": "Residual Network",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
            "description": "**Convolutional Neural Networks** are a type of neural network architecture that use convolutional layers to extract features from images (and other spatially structured data). Below you can find a continuously updating list of convolutional neural networks.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
        "name": "ResNet",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
}
] |
https://paperswithcode.com/paper/gan-q-learning
|
1805.04874
| null | null |
GAN Q-learning
|
Distributional reinforcement learning (distributional RL) has seen empirical
success in complex Markov Decision Processes (MDPs) in the setting of nonlinear
function approximation. However, there are many different ways in which one can
leverage the distributional approach to reinforcement learning. In this paper,
we propose GAN Q-learning, a novel distributional RL method based on generative
adversarial networks (GANs) and analyze its performance in simple tabular
environments, as well as OpenAI Gym. We empirically show that our algorithm
leverages the flexibility and blackbox approach of deep learning models while
providing a viable alternative to traditional methods.
|
Distributional reinforcement learning (distributional RL) has seen empirical success in complex Markov Decision Processes (MDPs) in the setting of nonlinear function approximation.
|
http://arxiv.org/abs/1805.04874v3
|
http://arxiv.org/pdf/1805.04874v3.pdf
| null |
[
"Thang Doan",
"Bogdan Mazoure",
"Clare Lyle"
] |
[
"Distributional Reinforcement Learning",
"OpenAI Gym",
"Q-Learning",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-05-13T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
        "description": "A **GAN**, or **Generative Adversarial Network**, is a generative model in which two networks are trained simultaneously: a generator $G$ that captures the data distribution, and a discriminator $D$ that estimates the probability that a sample came from the training data rather than from $G$. The training procedure for $G$ is to maximize the probability of $D$ making a mistake, framing learning as a minimax two-player game.",
        "full_name": "Generative Adversarial Network",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
        "name": "GAN",
"source_title": "Generative Adversarial Networks",
"source_url": "https://arxiv.org/abs/1406.2661v1"
}
] |
https://paperswithcode.com/paper/endnet-sparse-autoencoder-network-for
|
1708.01894
| null | null |
EndNet: Sparse AutoEncoder Network for Endmember Extraction and Hyperspectral Unmixing
|
Data acquired from multi-channel sensors is a highly valuable asset to
interpret the environment for a variety of remote sensing applications.
However, low spatial resolution is a critical limitation for previous sensors
and the constituent materials of a scene can be mixed in different fractions
due to their spatial interactions. Spectral unmixing is a technique that allows
us to obtain the material spectral signatures and their fractions from
hyperspectral data. In this paper, we propose a novel endmember extraction and
hyperspectral unmixing scheme, so called \textit{EndNet}, that is based on a
two-staged autoencoder network. This well-known structure is completely
enhanced and restructured by introducing additional layers and a projection
metric (i.e., spectral angle distance (SAD) instead of inner product) to
achieve an optimum solution. Moreover, we present a novel loss function that is
composed of a Kullback-Leibler divergence term with SAD similarity and
additional penalty terms to improve the sparsity of the estimates. These
modifications enable us to set the common properties of endmembers such as
non-linearity and sparsity for autoencoder networks. Lastly, due to the
stochastic-gradient based approach, the method is scalable for large-scale data
and it can be accelerated on Graphical Processing Units (GPUs). To demonstrate
the superiority of our proposed method, we conduct extensive experiments on
several well-known datasets. The results confirm that the proposed method
considerably improves the performance compared to the state-of-the-art
techniques in literature.
|
Data acquired from multi-channel sensors is a highly valuable asset to interpret the environment for a variety of remote sensing applications.
|
http://arxiv.org/abs/1708.01894v4
|
http://arxiv.org/pdf/1708.01894v4.pdf
| null |
[
"Savas Ozkan",
"Berk Kaya",
"Gozde Bozdagi Akar"
] |
[
"Hyperspectral Unmixing"
] | 2017-08-06T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
        "description": "An **Autoencoder** is a bottleneck architecture that turns a high-dimensional input into a latent low-dimensional code (the encoder), and then performs a reconstruction of the input from this latent code (the decoder). Encoder and decoder are trained jointly to minimize reconstruction error, forcing the network to learn a compressed representation of the data.",
        "full_name": "AutoEncoder",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
        "name": "AutoEncoder",
"source_title": "Reducing the Dimensionality of Data with Neural Networks",
"source_url": "https://science.sciencemag.org/content/313/5786/504"
}
] |
https://paperswithcode.com/paper/inference-suboptimality-in-variational
|
1801.03558
| null |
Bki4EfWCb
|
Inference Suboptimality in Variational Autoencoders
|
Amortized inference allows latent-variable models trained via variational
learning to scale to large datasets. The quality of approximate inference is
determined by two factors: a) the capacity of the variational distribution to
match the true posterior and b) the ability of the recognition network to
produce good variational parameters for each datapoint. We examine approximate
inference in variational autoencoders in terms of these factors. We find that
divergence from the true posterior is often due to imperfect recognition
networks, rather than the limited complexity of the approximating distribution.
We show that this is due partly to the generator learning to accommodate the
choice of approximation. Furthermore, we show that the parameters used to
increase the expressiveness of the approximation play a role in generalizing
inference rather than simply improving the complexity of the approximation.
|
Furthermore, we show that the parameters used to increase the expressiveness of the approximation play a role in generalizing inference rather than simply improving the complexity of the approximation.
|
http://arxiv.org/abs/1801.03558v3
|
http://arxiv.org/pdf/1801.03558v3.pdf
|
ICML 2018 7
|
[
"Chris Cremer",
"Xuechen Li",
"David Duvenaud"
] |
[] | 2018-01-10T00:00:00 |
https://icml.cc/Conferences/2018/Schedule?showEvent=2425
|
http://proceedings.mlr.press/v80/cremer18a/cremer18a.pdf
|
inference-suboptimality-in-variational-1
| null |
[] |
https://paperswithcode.com/paper/adversarial-deformation-regularization-for
|
1805.10665
| null | null |
Adversarial Deformation Regularization for Training Image Registration Neural Networks
|
We describe an adversarial learning approach to constrain convolutional
neural network training for image registration, replacing heuristic smoothness
measures of displacement fields often used in these tasks. Using
minimally-invasive prostate cancer intervention as an example application, we
demonstrate the feasibility of utilizing biomechanical simulations to
regularize a weakly-supervised anatomical-label-driven registration network for
aligning pre-procedural magnetic resonance (MR) and 3D intra-procedural
transrectal ultrasound (TRUS) images. A discriminator network is optimized to
distinguish the registration-predicted displacement fields from the motion data
simulated by finite element analysis. During training, the registration network
simultaneously aims to maximize similarity between anatomical labels that
drives image alignment and to minimize an adversarial generator loss that
measures divergence between the predicted- and simulated deformation. The
end-to-end trained network enables efficient and fully-automated registration
that only requires an MR and TRUS image pair as input, without anatomical
labels or simulated data during inference. 108 pairs of labelled MR and TRUS
images from 76 prostate cancer patients and 71,500 nonlinear finite-element
simulations from 143 different patients were used for this study. We show that,
with only gland segmentation as training labels, the proposed method can help
predict physically plausible deformation without any other smoothness penalty.
Based on cross-validation experiments using 834 pairs of independent validation
landmarks, the proposed adversarial-regularized registration achieved a target
registration error of 6.3 mm that is significantly lower than those from
several other regularization methods.
|
During training, the registration network simultaneously aims to maximize similarity between anatomical labels that drives image alignment and to minimize an adversarial generator loss that measures divergence between the predicted- and simulated deformation.
|
http://arxiv.org/abs/1805.10665v1
|
http://arxiv.org/pdf/1805.10665v1.pdf
| null |
[
"Yipeng Hu",
"Eli Gibson",
"Nooshin Ghavami",
"Ester Bonmati",
"Caroline M. Moore",
"Mark Emberton",
"Tom Vercauteren",
"J. Alison Noble",
"Dean C. Barratt"
] |
[
"Image Registration"
] | 2018-05-27T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/fingerprint-policy-optimisation-for-robust
|
1805.10662
| null | null |
Fingerprint Policy Optimisation for Robust Reinforcement Learning
|
Policy gradient methods ignore the potential value of adjusting environment variables: unobservable state features that are randomly determined by the environment in a physical setting, but are controllable in a simulator. This can lead to slow learning, or convergence to suboptimal policies, if the environment variable has a large impact on the transition dynamics. In this paper, we present fingerprint policy optimisation (FPO), which finds a policy that is optimal in expectation across the distribution of environment variables. The central idea is to use Bayesian optimisation (BO) to actively select the distribution of the environment variable that maximises the improvement generated by each iteration of the policy gradient method. To make this BO practical, we contribute two easy-to-compute low-dimensional fingerprints of the current policy. Our experiments show that FPO can efficiently learn policies that are robust to significant rare events, which are unlikely to be observable under random sampling, but are key to learning good policies.
| null |
https://arxiv.org/abs/1805.10662v3
|
https://arxiv.org/pdf/1805.10662v3.pdf
| null |
[
"Supratik Paul",
"Michael A. Osborne",
"Shimon Whiteson"
] |
[
"Bayesian Optimisation",
"Continuous Control",
"Policy Gradient Methods",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-05-27T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/graph-sketching-based-space-efficient-data
|
1703.02375
| null | null |
Graph sketching-based Space-efficient Data Clustering
|
In this paper, we address the problem of recovering arbitrary-shaped data
clusters from datasets while facing \emph{high space constraints}, as this is
for instance the case in many real-world applications when analysis algorithms
are directly deployed on resources-limited mobile devices collecting the data.
We present DBMSTClu a new space-efficient density-based \emph{non-parametric}
method working on a Minimum Spanning Tree (MST) recovered from a limited number
of linear measurements i.e. a \emph{sketched} version of the dissimilarity
graph $\mathcal{G}$ between the $N$ objects to cluster. Unlike $k$-means,
$k$-medians or $k$-medoids algorithms, it does not fail at distinguishing
clusters with particular forms thanks to the property of the MST for expressing
the underlying structure of a graph. No input parameter is needed contrarily to
DBSCAN or the Spectral Clustering method. An approximate MST is retrieved by
following the dynamic \emph{semi-streaming} model in handling the dissimilarity
graph $\mathcal{G}$ as a stream of edge weight updates which is sketched in one
pass over the data into a compact structure requiring $O(N
\operatorname{polylog}(N))$ space, far better than the theoretical memory cost
$O(N^2)$ of $\mathcal{G}$. The recovered approximate MST $\mathcal{T}$ as
input, DBMSTClu then successfully detects the right number of nonconvex
clusters by performing relevant cuts on $\mathcal{T}$ in a time linear in $N$.
We provide theoretical guarantees on the quality of the clustering partition
and also demonstrate its advantage over the existing state-of-the-art on
several datasets.
|
In this paper, we address the problem of recovering arbitrary-shaped data clusters from datasets while facing \emph{high space constraints}, as this is for instance the case in many real-world applications when analysis algorithms are directly deployed on resources-limited mobile devices collecting the data.
|
http://arxiv.org/abs/1703.02375v5
|
http://arxiv.org/pdf/1703.02375v5.pdf
| null |
[
"Anne Morvan",
"Krzysztof Choromanski",
"Cédric Gouy-Pailler",
"Jamal Atif"
] |
[
"Clustering"
] | 2017-03-07T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "Spectral clustering has attracted increasing attention due to\r\nthe promising ability in dealing with nonlinearly separable datasets [15], [16]. In spectral clustering, the spectrum of the graph Laplacian is used to reveal the cluster structure. The spectral clustering algorithm mainly consists of two steps: 1) constructs the low dimensional embedded representation of the data based on the eigenvectors of the graph Laplacian, 2) applies k-means on the constructed low dimensional data to obtain the clustering result. Thus,",
"full_name": "Spectral Clustering",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Clustering** methods cluster a dataset so that similar datapoints are located in the same group. Below you can find a continuously updating list of clustering methods.",
"name": "Clustering",
"parent": null
},
"name": "Spectral Clustering",
"source_title": "A Tutorial on Spectral Clustering",
"source_url": "http://arxiv.org/abs/0711.0189v1"
}
] |
https://paperswithcode.com/paper/generalization-challenges-for-neural
|
1803.08629
| null | null |
Generalization Challenges for Neural Architectures in Audio Source Separation
|
Recent work has shown that recurrent neural networks can be trained to
separate individual speakers in a sound mixture with high fidelity. Here we
explore convolutional neural network models as an alternative and show that
they achieve state-of-the-art results with an order of magnitude fewer
parameters. We also characterize and compare the robustness and ability of
these different approaches to generalize under three different test conditions:
longer time sequences, the addition of intermittent noise, and different
datasets not seen during training. For the last condition, we create a new
dataset, RealTalkLibri, to test source separation in real-world environments.
We show that the acoustics of the environment have significant impact on the
structure of the waveform and the overall performance of neural network models,
with the convolutional model showing superior ability to generalize to new
environments. The code for our study is available at
https://github.com/ShariqM/source_separation.
|
Recent work has shown that recurrent neural networks can be trained to separate individual speakers in a sound mixture with high fidelity.
|
http://arxiv.org/abs/1803.08629v2
|
http://arxiv.org/pdf/1803.08629v2.pdf
| null |
[
"Shariq Mobin",
"Brian Cheung",
"Bruno Olshausen"
] |
[
"Audio Source Separation"
] | 2018-03-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/defending-against-adversarial-attacks-by
|
1805.10652
| null | null |
Defending Against Adversarial Attacks by Leveraging an Entire GAN
|
Recent work has shown that state-of-the-art models are highly vulnerable to
adversarial perturbations of the input. We propose cowboy, an approach to
detecting and defending against adversarial attacks by using both the
discriminator and generator of a GAN trained on the same dataset. We show that
the discriminator consistently scores the adversarial samples lower than the
real samples across multiple attacks and datasets. We provide empirical
evidence that adversarial samples lie outside of the data manifold learned by
the GAN. Based on this, we propose a cleaning method which uses both the
discriminator and generator of the GAN to project the samples back onto the
data manifold. This cleaning procedure is independent of the classifier and
type of attack and thus can be deployed in existing systems.
|
Based on this, we propose a cleaning method which uses both the discriminator and generator of the GAN to project the samples back onto the data manifold.
|
http://arxiv.org/abs/1805.10652v1
|
http://arxiv.org/pdf/1805.10652v1.pdf
| null |
[
"Gokula Krishnan Santhanam",
"Paulina Grnarova"
] |
[] | 2018-05-27T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **Generative Adversarial Network**, or **GAN**, trains two networks in opposition: a generator $G$ that maps noise vectors to synthetic samples, and a discriminator $D$ that tries to distinguish generated samples from real ones. Training is a minimax game: $D$ is trained to assign the correct label to both real and generated samples, while $G$ is trained to fool $D$. At the optimum, the generator's distribution matches the data distribution.",
"full_name": "Generative Adversarial Network (GAN)",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "GAN",
"source_title": "Generative Adversarial Networks",
"source_url": "https://arxiv.org/abs/1406.2661v1"
}
] |
https://paperswithcode.com/paper/hierarchical-correlation-reconstruction-with
|
1804.06218
| null | null |
Hierarchical correlation reconstruction with missing data, for example for biology-inspired neuron
|
Machine learning often needs to model density from a multidimensional data
sample, including correlations between coordinates. Additionally, we often
face the missing data case: data points can miss values for some of their
coordinates. This article adapts a rapid parametric density estimation approach for this
purpose: modelling density as a linear combination of orthonormal functions,
for which $L^2$ optimization says that the (independently) estimated coefficient
for a given function is just the average of that function's values over the sample.
Hierarchical correlation reconstruction first models probability density for
each separate coordinate using all its appearances in data sample, then adds
corrections from independently modelled pairwise correlations using all samples
having both coordinates, and so on independently adding correlations for
growing numbers of variables using often decreasing evidence in data sample. A
basic application of such modelled multidimensional density can be imputation
of missing coordinates: by inserting known coordinates to the density, and
taking expected values for the missing coordinates, or even their entire joint
probability distribution. Presented method can be compared with cascade
correlations approach, offering several advantages in flexibility and accuracy.
It can be also used as artificial neuron: maximizing prediction capabilities
for only local behavior - modelling and predicting local connections.
| null |
http://arxiv.org/abs/1804.06218v4
|
http://arxiv.org/pdf/1804.06218v4.pdf
| null |
[
"Jarek Duda"
] |
[
"Density Estimation",
"Imputation"
] | 2018-04-17T00:00:00 | null | null | null | null |
[] |
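The abstract above states that, for a density modelled as a linear combination of orthonormal functions, the $L^2$-optimal coefficient of each function is simply the sample average of that function's values. A small NumPy illustration; the orthonormal basis on $[0,1]$ (rescaled Legendre polynomials) is an assumed choice for the sketch:

```python
import numpy as np

# Orthonormal basis on [0, 1]: rescaled Legendre polynomials
# (each integrates to 1 when squared over [0, 1]).
BASIS = [
    lambda x: np.ones_like(x),
    lambda x: np.sqrt(3.0) * (2 * x - 1),
    lambda x: np.sqrt(5.0) * (6 * x**2 - 6 * x + 1),
]

def fit_coefficients(sample):
    """L2-optimal coefficient of each orthonormal basis function is
    just the mean of that function's values over the sample."""
    return np.array([f(sample).mean() for f in BASIS])

def density(x, coeffs):
    """Density estimate as a linear combination of the basis functions."""
    return sum(a * f(x) for a, f in zip(coeffs, BASIS))
```

For a uniform sample on [0, 1] the fitted coefficients approach (1, 0, 0), recovering the flat density, which is a quick sanity check of the coefficient-as-average rule.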