Dataset columns (⌀ = nullable):

| column | dtype |
|---|---|
| paper_url | string, length 35–81 |
| arxiv_id | string, length 6–35, ⌀ |
| nips_id | float64 |
| openreview_id | string, length 9–93, ⌀ |
| title | string, length 1–1.02k, ⌀ |
| abstract | string, length 0–56.5k, ⌀ |
| short_abstract | string, length 0–1.95k, ⌀ |
| url_abs | string, length 16–996 |
| url_pdf | string, length 16–996, ⌀ |
| proceeding | string, length 7–1.03k, ⌀ |
| authors | list, length 0–3.31k |
| tasks | list, length 0–147 |
| date | timestamp[ns], 1951-09-01 to 2222-12-22, ⌀ |
| conference_url_abs | string, length 16–199, ⌀ |
| conference_url_pdf | string, length 21–200, ⌀ |
| conference | string, length 2–47, ⌀ |
| reproduces_paper | string, 22 classes |
| methods | list, length 0–7.5k |
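The nullability markers above can be applied programmatically. A minimal sketch, assuming plain Python dicts; `validate_row` and `REQUIRED` are hypothetical helper names, and the example row is abridged from the record below:

```python
# Illustrative check of a row against the schema above. REQUIRED holds the
# non-nullable columns (those without a ⌀ marker); nips_id is a float64 and
# may be NaN/null in practice, so it is deliberately not required here.
REQUIRED = {"paper_url", "url_abs", "authors", "tasks", "methods"}

def validate_row(row: dict) -> bool:
    # every non-nullable column must be present and non-null
    return all(row.get(col) is not None for col in REQUIRED)

# abridged version of the record in this dump
example = {
    "paper_url": "https://paperswithcode.com/paper/multi-function-convolutional-neural-networks",
    "arxiv_id": "1805.11788",
    "url_abs": "http://arxiv.org/abs/1805.11788v1",
    "authors": ["Luna M. Zhang"],
    "tasks": ["Image Classification"],
    "methods": [],
}
assert validate_row(example)
```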
https://paperswithcode.com/paper/multi-function-convolutional-neural-networks
|
1805.11788
| null | null |
Multi-function Convolutional Neural Networks for Improving Image Classification Performance
|
Traditional Convolutional Neural Networks (CNNs) typically use the same
activation function (usually ReLU) for all neurons with non-linear mapping
operations. For example, the deep convolutional architecture Inception-v4 uses
ReLU. To improve the classification performance of traditional CNNs, a new
"Multi-function Convolutional Neural Network" (MCNN) is created by using
different activation functions for different neurons. For $n$ neurons and $m$
different activation functions, there are a total of $m^n-m$ MCNNs and only $m$
traditional CNNs. Therefore, the best model is very likely to be chosen from
MCNNs because there are $m^n-2m$ more MCNNs than traditional CNNs. For
performance analysis, two different datasets for two applications (classifying
handwritten digits from the MNIST database and classifying brain MRI images
into one of the four stages of Alzheimer's disease (AD)) are used. For both
applications, an activation function is randomly selected for each layer of
an MCNN. For the AD diagnosis application, MCNNs using a newly created
multi-function Inception-v4 architecture are constructed. Overall, simulations
show that MCNNs can outperform traditional CNNs in terms of multi-class
classification accuracy for both applications. An important direction for
future research is to efficiently select the best MCNN from the $m^n-m$ candidate MCNNs.
Current CNN software only provides users with partial functionality of MCNNs
since different layers can use different activation functions but not
individual neurons in the same layer. Thus, modifying current CNN
implementations such as ResNets, DenseNets, and Dual Path Networks to use
multiple activation functions, and developing more effective and faster MCNN
software systems and tools, would be very useful for solving difficult
practical image classification problems.
| null |
http://arxiv.org/abs/1805.11788v1
|
http://arxiv.org/pdf/1805.11788v1.pdf
| null |
[
"Luna M. Zhang"
] |
[
"Classification",
"General Classification",
"image-classification",
"Image Classification",
"Multi-class Classification"
] | 2018-05-30T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/densenet.py#L113",
"description": "A **Concatenated Skip Connection** is a type of skip connection that seeks to reuse features by concatenating them to new layers, allowing more information to be retained from previous layers of the network. This contrasts with say, residual connections, where element-wise summation is used instead to incorporate information from previous layers. This type of skip connection is prominently used in DenseNets (and also Inception networks), which the Figure to the right illustrates.",
"full_name": "Concatenated Skip Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Concatenated Skip Connection",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Rectified Linear Units**, or **ReLUs**, are a type of activation function that is linear in the positive dimension and zero in the negative dimension:\r\n\r\n$$ f\\left(x\\right) = \\max\\left(0, x\\right) $$\r\n\r\nThe kink at zero is the source of the non-linearity. Linearity in the positive dimension helps prevent saturating gradients, although the gradient is zero for half of the real line.",
"full_name": "Rectified Linear Units",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Activation functions** are functions applied in neural network layers to introduce non-linearity into the model, allowing the network to learn non-linear mappings. Below you can find a continuously updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/prlz77/ResNeXt.pytorch/blob/39fb8d03847f26ec02fb9b880ecaaa88db7a7d16/models/model.py#L42",
"description": "A **Grouped Convolution** uses a group of convolutions - multiple kernels per layer - resulting in multiple channel outputs per layer. This leads to wider networks helping a network learn a varied set of low level and high level features. The original motivation of using Grouped Convolutions in [AlexNet](https://paperswithcode.com/method/alexnet) was to distribute the model over multiple GPUs as an engineering compromise. But later, with models such as [ResNeXt](https://paperswithcode.com/method/resnext), it was shown this module could be used to improve classification accuracy. Specifically by exposing a new dimension through grouped convolutions, *cardinality* (the size of set of transformations), we can increase accuracy by increasing it.",
"full_name": "Grouped Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Grouped Convolution",
"source_title": "ImageNet Classification with Deep Convolutional Neural Networks",
"source_url": "http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/densenet.py#L93",
"description": "A **Dense Block** is a module used in convolutional neural networks that connects *all layers* (with matching feature-map sizes) directly with each other. It was originally proposed as part of the [DenseNet](https://paperswithcode.com/method/densenet) architecture. To preserve the feed-forward nature, each layer obtains additional inputs from all preceding layers and passes on its own feature-maps to all subsequent layers. In contrast to [ResNets](https://paperswithcode.com/method/resnet), we never combine features through summation before they are passed into a layer; instead, we combine features by concatenating them. Hence, the $\\ell^{th}$ layer has $\\ell$ inputs, consisting of the feature-maps of all preceding convolutional blocks. Its own feature-maps are passed on to all $L-\\ell$ subsequent layers. This introduces $\\frac{L(L+1)}{2}$ connections in an $L$-layer network, instead of just $L$, as in traditional architectures: \"dense connectivity\".",
"full_name": "Dense Block",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Model Blocks** are building blocks used in image models such as convolutional neural networks. Below you can find a continuously updating list of image model blocks.",
"name": "Image Model Blocks",
"parent": null
},
"name": "Dense Block",
"source_title": "Densely Connected Convolutional Networks",
"source_url": "http://arxiv.org/abs/1608.06993v5"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116",
"description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.",
"full_name": "Batch Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Batch Normalization",
"source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"source_url": "http://arxiv.org/abs/1502.03167v3"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/rwightman/pytorch-dpn-pretrained/blob/2923586d8f4ab3fdc05370cc409a620a3dbd1083/dpn.py#L205",
"description": "A **Dual Path Network** block is an image model block used in convolutional neural network. The idea of this module is to enable sharing of common features while maintaining the flexibility to explore new features through dual path architectures. In this sense it combines the benefits of [ResNets](https://paperswithcode.com/method/resnet) and [DenseNets](https://paperswithcode.com/method/densenet). It was proposed as part of the [DPN](https://paperswithcode.com/method/dpn) CNN architecture.\r\n\r\nWe formulate such a dual path architecture as follows:\r\n\r\n$$x^{k} = \\sum\\limits\\_{t=1}^{k-1} f\\_t^{k}(h^t) \\text{,} $$\r\n\r\n$$\r\ny^{k} = \\sum\\limits\\_{t=1}^{k-1} v\\_t(h^t) = y^{k-1} + \\phi^{k-1}(y^{k-1}) \\text{,} \\\\\\\\\r\n$$\r\n\r\n$$\r\nr^{k} = x^{k} + y^{k} \\text{,} \\\\\\\\\r\n$$\r\n\r\n$$\r\nh^k = g^k \\left( r^{k} \\right) \\text{,}\r\n$$\r\n\r\nwhere $x^{k}$ and $y^{k}$ denote the extracted information at $k$-th step from individual path, $v_t(\\cdot)$ is a feature learning function as $f_t^k(\\cdot)$. The first equation refers to the densely connected path that enables exploring new features. The second equation refers to the residual path that enables common features re-usage. The third equation defines the dual path that integrates them and feeds them to the last transformation function in the last equation.",
"full_name": "DPN Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "DPN Block",
"source_title": "Dual Path Networks",
"source_url": "http://arxiv.org/abs/1707.01629v2"
},
{
"code_snippet_url": "https://github.com/osmr/imgclsmob/blob/c03fa67de3c9e454e9b6d35fe9cbb6b15c28fda7/pytorch/pytorchcv/models/dpn.py#L322",
"description": "A **Dual Path Network (DPN)** is a convolutional neural network which presents a new topology of connection paths internally. The intuition is that [ResNets](https://paperswithcode.com/method/resnet) enables feature re-usage while [DenseNet](https://paperswithcode.com/method/densenet) enables new feature exploration, and both are important for learning good representations. To enjoy the benefits from both path topologies, Dual Path Networks share common features while maintaining the flexibility to explore new features through dual path architectures. \r\n\r\nWe formulate such a dual path architecture as follows:\r\n\r\n$$x^{k} = \\sum\\limits\\_{t=1}^{k-1} f\\_t^{k}(h^t) \\text{,} $$\r\n\r\n$$\r\ny^{k} = \\sum\\limits\\_{t=1}^{k-1} v\\_t(h^t) = y^{k-1} + \\phi^{k-1}(y^{k-1}) \\text{,} \\\\\\\\\r\n$$\r\n\r\n$$\r\nr^{k} = x^{k} + y^{k} \\text{,} \\\\\\\\\r\n$$\r\n\r\n$$\r\nh^k = g^k \\left( r^{k} \\right) \\text{,}\r\n$$\r\n\r\nwhere $x^{k}$ and $y^{k}$ denote the extracted information at $k$-th step from individual path, $v_t(\\cdot)$ is a feature learning function as $f_t^k(\\cdot)$. The first equation refers to the densely connected path that enables exploring new features. The second equation refers to the residual path that enables common features re-usage. The third equation defines the dual path that integrates them and feeds them to the last transformation function in the last equation.",
"full_name": "Dual Path Network",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutional Neural Networks** are a class of neural networks used primarily for images and other grid-structured data, extracting features by sliding learnable convolution kernels over the input. Below you can find a continuously updating list of convolutional neural networks.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
"name": "DPN",
"source_title": "Dual Path Networks",
"source_url": "http://arxiv.org/abs/1707.01629v2"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/baa592b215804927e28638f6a7f3318cbc411d49/torchvision/models/resnet.py#L157",
"description": "**Global Average Pooling** is a pooling operation designed to replace fully connected layers in classical CNNs. The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers on top of the feature maps, we take the average of each feature map, and the resulting vector is fed directly into the [softmax](https://paperswithcode.com/method/softmax) layer. \r\n\r\nOne advantage of global [average pooling](https://paperswithcode.com/method/average-pooling) over the fully connected layers is that it is more native to the [convolution](https://paperswithcode.com/method/convolution) structure by enforcing correspondences between feature maps and categories. Thus the feature maps can be easily interpreted as categories confidence maps. Another advantage is that there is no parameter to optimize in the global average pooling thus overfitting is avoided at this layer. Furthermore, global average pooling sums out the spatial information, thus it is more robust to spatial translations of the input.",
"full_name": "Global Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Global Average Pooling",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L75",
"description": "A **Bottleneck Residual Block** is a variant of the [residual block](https://paperswithcode.com/method/residual-block) that utilises 1x1 convolutions to create a bottleneck. The use of a bottleneck reduces the number of parameters and matrix multiplications. The idea is to make residual blocks as thin as possible to increase depth and have less parameters. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture, and are used as part of deeper ResNets such as ResNet-50 and ResNet-101.",
"full_name": "Bottleneck Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Bottleneck Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L35",
"description": "**Residual Blocks** are skip-connection blocks that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture.\r\n \r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$. The $\\mathcal{F}({x})$ acts like a residual, hence the name 'residual block'.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers. Having skip connections allows the network to more easily learn identity-like mappings.\r\n\r\nNote that in practice, [Bottleneck Residual Blocks](https://paperswithcode.com/method/bottleneck-residual-block) are used for deeper ResNets, such as ResNet-50 and ResNet-101, as these bottleneck blocks are less computationally intensive.",
"full_name": "Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/0adb5843766092fba584791af76383125fd0d01c/torch/nn/init.py#L389",
"description": "**Kaiming Initialization**, or **He Initialization**, is an initialization method for neural networks that takes into account the non-linearity of activation functions, such as [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nA proper initialization method should avoid reducing or magnifying the magnitudes of input signals exponentially. Using a derivation they work out that the condition to stop this happening is:\r\n\r\n$$\\frac{1}{2}n\\_{l}\\text{Var}\\left[w\\_{l}\\right] = 1 $$\r\n\r\nThis implies an initialization scheme of:\r\n\r\n$$ w\\_{l} \\sim \\mathcal{N}\\left(0, 2/n\\_{l}\\right)$$\r\n\r\nThat is, a zero-centered Gaussian with standard deviation of $\\sqrt{2/{n}\\_{l}}$ (variance shown in equation above). Biases are initialized at $0$.",
"full_name": "Kaiming Initialization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Initialization** methods are used to initialize the weights in a neural network. Below can you find a continuously updating list of initialization methods.",
"name": "Initialization",
"parent": null
},
"name": "Kaiming Initialization",
"source_title": "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification",
"source_url": "http://arxiv.org/abs/1502.01852v1"
},
{
"code_snippet_url": "",
        "description": "**Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residual blocks](https://paperswithcode.com/method/residual-block) on top of each other to form the network: e.g. a ResNet-50 has fifty layers using these blocks.",
        "full_name": "Residual Network",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
            "description": "**Convolutional Neural Networks** are used to extract features from images (and videos), employing convolutions as their primary operator. Below you can find a continuously updating list of convolutional neural networks.",
"name": "Convolutional Neural Networks",
"parent": "Image Models"
},
        "name": "ResNet",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "",
"description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)",
"full_name": "Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Average Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/differentiable-particle-filters-end-to-end
|
1805.11122
| null | null |
Differentiable Particle Filters: End-to-End Learning with Algorithmic Priors
|
We present differentiable particle filters (DPFs): a differentiable
implementation of the particle filter algorithm with learnable motion and
measurement models. Since DPFs are end-to-end differentiable, we can
efficiently train their models by optimizing end-to-end state estimation
performance, rather than proxy objectives such as model accuracy. DPFs encode
the structure of recursive state estimation with prediction and measurement
update that operate on a probability distribution over states. This structure
represents an algorithmic prior that improves learning performance in state
estimation problems while enabling explainability of the learned model. Our
experiments on simulated and real data show substantial benefits from end-to-
end learning with algorithmic priors, e.g. reducing error rates by ~80%. Our
experiments also show that, unlike long short-term memory networks, DPFs learn
localization in a policy-agnostic way and thus greatly improve generalization.
Source code is available at
https://github.com/tu-rbo/differentiable-particle-filters .
|
We present differentiable particle filters (DPFs): a differentiable implementation of the particle filter algorithm with learnable motion and measurement models.
|
http://arxiv.org/abs/1805.11122v2
|
http://arxiv.org/pdf/1805.11122v2.pdf
| null |
[
"Rico Jonschkowski",
"Divyam Rastogi",
"Oliver Brock"
] |
[
"State Estimation"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/hyperspectral-imaging-technology-and-transfer
|
1805.11784
| null | null |
Hyperspectral Imaging Technology and Transfer Learning Utilized in Identification Haploid Maize Seeds
|
It is extremely important to correctly identify the cultivars of maize seeds
in the breeding process of maize. In this paper, transfer learning, a deep
learning method, is adopted to establish a model in combination with
hyperspectral imaging technology. With this model, haploid seeds can be
recognized with great accuracy among a large number of diploid maize seeds.
First, information on the maize seeds in each wave band is collected using
hyperspectral imaging, and then the recognition model is built on the VGG-19
network, which is pre-trained on a large-scale computer vision database
(ImageNet). The correct identification rate of the model utilizing seed
spectral images containing 256 wave bands (862.5-1704.2 nm) reaches 96.32%,
and the correct identification rate of the model utilizing seed spectral
images containing a single band reaches 95.75%. The experimental results show
that a CNN model pre-trained on a visible-light image database can be applied
to the near-infrared hyperspectral imaging-based identification of maize
seeds, and a highly accurate identification rate can be achieved. Meanwhile,
even with a small number of data samples, high recognition accuracy can still
be realized using transfer learning. The model not only meets the
requirements of breeding recognition, but also greatly reduces the cost
incurred in sample collection.
| null |
http://arxiv.org/abs/1805.11784v1
|
http://arxiv.org/pdf/1805.11784v1.pdf
| null |
[
"Wen-Xuan Liao",
"Xuan-Yu Wang",
"Dong An",
"Yao-Guang Wei"
] |
[
"Transfer Learning"
] | 2018-05-30T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/rice-classification-using-spatio-spectral
|
1805.11491
| null | null |
Rice Classification Using Spatio-Spectral Deep Convolutional Neural Network
|
Rice has been one of the staple foods that contribute significantly to human food supplies. Numerous rice varieties have been cultivated, imported, and exported worldwide. Different rice varieties could be mixed during rice production and trading. Rice impurities could damage the trust between rice importers and exporters, calling for the need to develop a rice variety inspection system. In this work, we develop a non-destructive rice variety classification system that benefits from the synergy between hyperspectral imaging and deep convolutional neural network (CNN). The proposed method uses a hyperspectral imaging system to simultaneously acquire complementary spatial and spectral information of rice seeds. The rice varieties are then determined from the acquired spatio-spectral data using a deep CNN. As opposed to several existing rice variety classification methods that require hand-engineered features, the proposed method automatically extracts spatio-spectral features from the raw sensor data. As demonstrated using two types of rice datasets, the proposed method achieved up to 11.9% absolute improvement in the mean classification accuracy, compared to the commonly used classification methods based on support vector machines.
|
In this work, we develop a non-destructive rice variety classification system that benefits from the synergy between hyperspectral imaging and deep convolutional neural network (CNN).
|
https://arxiv.org/abs/1805.11491v3
|
https://arxiv.org/pdf/1805.11491v3.pdf
| null |
[
"Itthi Chatnuntawech",
"Kittipong Tantisantisom",
"Paisan Khanchaitit",
"Thitikorn Boonkoom",
"Berkin Bilgic",
"Ekapol Chuangsuwanich"
] |
[
"Classification",
"General Classification"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/to-trust-or-not-to-trust-a-classifier
|
1805.11783
| null | null |
To Trust Or Not To Trust A Classifier
|
Knowing when a classifier's prediction can be trusted is useful in many
applications and critical for safely using AI. While the bulk of the effort in
machine learning research has been towards improving classifier performance,
understanding when a classifier's predictions should and should not be trusted
has received far less attention. The standard approach is to use the
classifier's discriminant or confidence score; however, we show there exists an
alternative that is more effective in many situations. We propose a new score,
called the trust score, which measures the agreement between the classifier and
a modified nearest-neighbor classifier on the testing example. We show
empirically that high (low) trust scores produce surprisingly high precision at
identifying correctly (incorrectly) classified examples, consistently
outperforming the classifier's confidence score as well as many other
baselines. Further, under some mild distributional assumptions, we show that if
the trust score for an example is high (low), the classifier will likely agree
(disagree) with the Bayes-optimal classifier. Our guarantees consist of
non-asymptotic rates of statistical consistency under various nonparametric
settings and build on recent developments in topological data analysis.
|
Knowing when a classifier's prediction can be trusted is useful in many applications and critical for safely using AI.
|
http://arxiv.org/abs/1805.11783v2
|
http://arxiv.org/pdf/1805.11783v2.pdf
|
NeurIPS 2018 12
|
[
"Heinrich Jiang",
"Been Kim",
"Melody Y. Guan",
"Maya Gupta"
] |
[
"Topological Data Analysis"
] | 2018-05-30T00:00:00 |
http://papers.nips.cc/paper/7798-to-trust-or-not-to-trust-a-classifier
|
http://papers.nips.cc/paper/7798-to-trust-or-not-to-trust-a-classifier.pdf
|
to-trust-or-not-to-trust-a-classifier-1
| null |
[] |
https://paperswithcode.com/paper/a-neural-network-trained-to-predict-future
|
1805.10734
| null | null |
A neural network trained to predict future video frames mimics critical properties of biological neuronal responses and perception
|
While deep neural networks take loose inspiration from neuroscience, it is an
open question how seriously to take the analogies between artificial deep
networks and biological neuronal systems. Interestingly, recent work has shown
that deep convolutional neural networks (CNNs) trained on large-scale image
recognition tasks can serve as strikingly good models for predicting the
responses of neurons in visual cortex to visual stimuli, suggesting that
analogies between artificial and biological neural networks may be more than
superficial. However, while CNNs capture key properties of the average
responses of cortical neurons, they fail to explain other properties of these
neurons. For one, CNNs typically require large quantities of labeled input data
for training. Our own brains, in contrast, rarely have access to this kind of
supervision, so to the extent that representations are similar between CNNs and
brains, this similarity must arise via different training paths. In addition,
neurons in visual cortex produce complex time-varying responses even to static
inputs, and they dynamically tune themselves to temporal regularities in the
visual environment. We argue that these differences are clues to fundamental
differences between the computations performed in the brain and in deep
networks. To begin to close the gap, here we study the emergent properties of a
previously-described recurrent generative network that is trained to predict
future video frames in a self-supervised manner. Remarkably, the model is able
to capture a wide variety of seemingly disparate phenomena observed in visual
cortex, ranging from single unit response dynamics to complex perceptual motion
illusions. These results suggest potentially deep connections between recurrent
predictive neural network models and the brain, providing new leads that can
enrich both fields.
| null |
http://arxiv.org/abs/1805.10734v2
|
http://arxiv.org/pdf/1805.10734v2.pdf
| null |
[
"William Lotter",
"Gabriel Kreiman",
"David Cox"
] |
[
"Open-Ended Question Answering",
"Predict Future Video Frames"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/non-rigid-reconstruction-with-a-single-moving
|
1805.11219
| null | null |
Non-rigid Reconstruction with a Single Moving RGB-D Camera
|
We present a novel non-rigid reconstruction method using a moving RGB-D
camera. Current approaches use only non-rigid part of the scene and completely
ignore the rigid background. Non-rigid parts often lack sufficient geometric
and photometric information for tracking large frame-to-frame motion. Our
approach uses camera pose estimated from the rigid background for foreground
tracking. This enables robust foreground tracking in situations where large
frame-to-frame motion occurs. Moreover, we are proposing a multi-scale
deformation graph which improves non-rigid tracking without compromising the
quality of the reconstruction. We are also contributing a synthetic dataset
which is made publicly available for evaluating non-rigid reconstruction
methods. The dataset provides frame-by-frame ground truth geometry of the
scene, the camera trajectory, and masks for background and foreground. Experimental
results show that our approach is more robust in handling larger frame-to-frame
motions and provides better reconstruction compared to state-of-the-art
approaches.
| null |
http://arxiv.org/abs/1805.11219v2
|
http://arxiv.org/pdf/1805.11219v2.pdf
| null |
[
"Shafeeq Elanattil",
"Peyman Moghadam",
"Sridha Sridharan",
"Clinton Fookes",
"Mark Cox"
] |
[] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/planning-inference-and-pragmatics-in
|
1805.11774
| null | null |
Planning, Inference and Pragmatics in Sequential Language Games
|
We study sequential language games in which two players, each with private
information, communicate to achieve a common goal. In such games, a successful
player must (i) infer the partner's private information from the partner's
messages, (ii) generate messages that are most likely to help with the goal,
and (iii) reason pragmatically about the partner's strategy. We propose a model
that captures all three characteristics and demonstrate their importance in
capturing human behavior on a new goal-oriented dataset we collected using
crowdsourcing.
|
We study sequential language games in which two players, each with private information, communicate to achieve a common goal.
|
http://arxiv.org/abs/1805.11774v1
|
http://arxiv.org/pdf/1805.11774v1.pdf
|
TACL 2018 1
|
[
"Fereshte Khani",
"Noah D. Goodman",
"Percy Liang"
] |
[] | 2018-05-30T00:00:00 |
https://aclanthology.org/Q18-1037
|
https://aclanthology.org/Q18-1037.pdf
|
planning-inference-and-pragmatics-in-1
| null |
[] |
https://paperswithcode.com/paper/autonomous-vehicles-that-interact-with
|
1805.11773
| null | null |
Autonomous Vehicles that Interact with Pedestrians: A Survey of Theory and Practice
|
One of the major challenges that autonomous cars are facing today is driving
in urban environments. To make it a reality, autonomous vehicles require the
ability to communicate with other road users and understand their intentions.
Such interactions are essential between the vehicles and pedestrians as the
most vulnerable road users. Understanding pedestrian behavior, however, is not
intuitive and depends on various factors such as demographics of the
pedestrians, traffic dynamics, environmental conditions, etc. In this paper, we
identify these factors by surveying pedestrian behavior studies, both the
classical works on pedestrian-driver interaction and the modern ones that
involve autonomous vehicles. To this end, we will discuss various methods of
studying pedestrian behavior, and analyze how the factors identified in the
literature are interrelated. We will also review the practical applications
aimed at solving the interaction problem including design approaches for
autonomous vehicles that communicate with pedestrians and visual perception and
reasoning algorithms tailored to understanding pedestrian intention. Based on
our findings, we will discuss the open problems and propose future research
directions.
| null |
http://arxiv.org/abs/1805.11773v1
|
http://arxiv.org/pdf/1805.11773v1.pdf
| null |
[
"Amir Rasouli",
"John K. Tsotsos"
] |
[
"Autonomous Vehicles"
] | 2018-05-30T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/deep-defense-training-dnns-with-improved
|
1803.00404
| null | null |
Deep Defense: Training DNNs with Improved Adversarial Robustness
|
Despite the efficacy on a variety of computer vision tasks, deep neural
networks (DNNs) are vulnerable to adversarial attacks, limiting their
applications in security-critical systems. Recent works have shown the
possibility of generating imperceptibly perturbed image inputs (a.k.a.,
adversarial examples) to fool well-trained DNN classifiers into making
arbitrary predictions. To address this problem, we propose a training recipe
named "deep defense". Our core idea is to integrate an adversarial
perturbation-based regularizer into the classification objective, such that the
obtained models learn to resist potential attacks, directly and precisely. The
whole optimization problem is solved just like training a recursive network.
Experimental results demonstrate that our method outperforms training with
adversarial/Parseval regularizations by large margins on various datasets
(including MNIST, CIFAR-10 and ImageNet) and different DNN architectures. Code
and models for reproducing our results are available at
https://github.com/ZiangYan/deepdefense.pytorch
|
Despite the efficacy on a variety of computer vision tasks, deep neural networks (DNNs) are vulnerable to adversarial attacks, limiting their applications in security-critical systems.
|
http://arxiv.org/abs/1803.00404v3
|
http://arxiv.org/pdf/1803.00404v3.pdf
|
NeurIPS 2018 12
|
[
"Ziang Yan",
"Yiwen Guo",
"Chang-Shui Zhang"
] |
[
"Adversarial Robustness"
] | 2018-02-23T00:00:00 |
http://papers.nips.cc/paper/7324-deep-defense-training-dnns-with-improved-adversarial-robustness
|
http://papers.nips.cc/paper/7324-deep-defense-training-dnns-with-improved-adversarial-robustness.pdf
|
deep-defense-training-dnns-with-improved-1
| null |
[] |
https://paperswithcode.com/paper/autozoom-autoencoder-based-zeroth-order
|
1805.11770
| null | null |
AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks
|
Recent studies have shown that adversarial examples in state-of-the-art image classifiers trained by deep neural networks (DNN) can be easily generated when the target model is transparent to an attacker, known as the white-box setting. However, when attacking a deployed machine learning service, one can only acquire the input-output correspondences of the target model; this is the so-called black-box attack setting. The major drawback of existing black-box attacks is the need for excessive model queries, which may give a false sense of model robustness due to inefficient query designs. To bridge this gap, we propose a generic framework for query-efficient black-box attacks. Our framework, AutoZOOM, which is short for Autoencoder-based Zeroth Order Optimization Method, has two novel building blocks towards efficient black-box attacks: (i) an adaptive random gradient estimation strategy to balance query counts and distortion, and (ii) an autoencoder that is either trained offline with unlabeled data or a bilinear resizing operation for attack acceleration. Experimental results suggest that, by applying AutoZOOM to a state-of-the-art black-box attack (ZOO), a significant reduction in model queries can be achieved without sacrificing the attack success rate and the visual quality of the resulting adversarial examples. In particular, when compared to the standard ZOO method, AutoZOOM can consistently reduce the mean query counts in finding successful adversarial examples (or reaching the same distortion level) by at least 93% on MNIST, CIFAR-10 and ImageNet datasets, leading to novel insights on adversarial robustness.
|
Recent studies have shown that adversarial examples in state-of-the-art image classifiers trained by deep neural networks (DNN) can be easily generated when the target model is transparent to an attacker, known as the white-box setting.
|
https://arxiv.org/abs/1805.11770v5
|
https://arxiv.org/pdf/1805.11770v5.pdf
| null |
[
"Chun-Chen Tu",
"Pai-Shun Ting",
"Pin-Yu Chen",
"Sijia Liu",
"huan zhang",
"Jin-Feng Yi",
"Cho-Jui Hsieh",
"Shin-Ming Cheng"
] |
[
"Adversarial Robustness"
] | 2018-05-30T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
    "description": "",
"full_name": "Solana Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Solana Customer Service Number +1-833-534-1729",
"source_title": "Reducing the Dimensionality of Data with Neural Networks",
"source_url": "https://science.sciencemag.org/content/313/5786/504"
}
] |
https://paperswithcode.com/paper/fast-incremental-von-neumann-graph-entropy
|
1805.11769
| null | null |
Fast Incremental von Neumann Graph Entropy Computation: Theory, Algorithm, and Applications
|
The von Neumann graph entropy (VNGE) facilitates measurement of information divergence and distance between graphs in a graph sequence. It has been successfully applied to various learning tasks driven by network-based data. While effective, VNGE is computationally demanding as it requires the full eigenspectrum of the graph Laplacian matrix. In this paper, we propose a new computational framework, Fast Incremental von Neumann Graph EntRopy (FINGER), which approaches VNGE with a performance guarantee. FINGER reduces the cubic complexity of VNGE to linear complexity in the number of nodes and edges, and thus enables online computation based on incremental graph changes. We also show asymptotic equivalence of FINGER to the exact VNGE, and derive its approximation error bounds. Based on FINGER, we propose efficient algorithms for computing Jensen-Shannon distance between graphs. Our experimental results on different random graph models demonstrate the computational efficiency and the asymptotic equivalence of FINGER. In addition, we apply FINGER to two real-world applications and one synthesized anomaly detection dataset, and corroborate its superior performance over seven baseline graph similarity methods.
|
The von Neumann graph entropy (VNGE) facilitates measurement of information divergence and distance between graphs in a graph sequence.
|
https://arxiv.org/abs/1805.11769v2
|
https://arxiv.org/pdf/1805.11769v2.pdf
| null |
[
"Pin-Yu Chen",
"Lingfei Wu",
"Sijia Liu",
"Indika Rajapakse"
] |
[
"Anomaly Detection",
"Computational Efficiency",
"Graph Similarity"
] | 2018-05-30T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/hone-higher-order-network-embeddings
|
1801.09303
| null | null |
HONE: Higher-Order Network Embeddings
|
This paper describes a general framework for learning Higher-Order Network
Embeddings (HONE) from graph data based on network motifs. The HONE framework
is highly expressive and flexible with many interchangeable components. The
experimental results demonstrate the effectiveness of learning higher-order
network representations. In all cases, HONE outperforms recent embedding
methods that are unable to capture higher-order structures with a mean relative
gain in AUC of $19\%$ (and up to $75\%$ gain) across a wide variety of networks
and embedding methods.
| null |
http://arxiv.org/abs/1801.09303v2
|
http://arxiv.org/pdf/1801.09303v2.pdf
| null |
[
"Ryan A. Rossi",
"Nesreen K. Ahmed",
"Eunyee Koh",
"Sungchul Kim",
"Anup Rao",
"Yasin Abbasi Yadkori"
] |
[] | 2018-01-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/press-space-to-fire-automatic-video-game
|
1805.11768
| null | null |
"Press Space to Fire": Automatic Video Game Tutorial Generation
|
We propose the problem of tutorial generation for games, i.e. to generate
tutorials which can teach players to play games, as an AI problem. This problem
can be approached in several ways, including generating natural language
descriptions of game rules, generating instructive game levels, and generating
demonstrations of how to play a game using agents that play in a human-like
manner. We further argue that the General Video Game AI framework provides a
useful testbed for addressing this problem.
| null |
http://arxiv.org/abs/1805.11768v1
|
http://arxiv.org/pdf/1805.11768v1.pdf
| null |
[
"Michael Cerny Green",
"Ahmed Khalifa",
"Gabriella A. B. Barros",
"Julian Togelius"
] |
[] | 2018-05-30T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-walk-with-sgd
|
1802.08770
| null | null |
A Walk with SGD
|
We present novel empirical observations regarding how stochastic gradient
descent (SGD) navigates the loss landscape of over-parametrized deep neural
networks (DNNs). These observations expose the qualitatively different roles of
learning rate and batch-size in DNN optimization and generalization.
Specifically we study the DNN loss surface along the trajectory of SGD by
interpolating the loss surface between parameters from consecutive
\textit{iterations} and tracking various metrics during training. We find that
the loss interpolation between parameters before and after each training
iteration's update is roughly convex with a minimum (\textit{valley floor}) in
between for most of the training. Based on this and other metrics, we deduce
that for most of the training update steps, SGD moves in valley like regions of
the loss surface by jumping from one valley wall to another at a height above
the valley floor. This 'bouncing between walls at a height' mechanism helps SGD
traverse larger distance for small batch sizes and large learning rates which
we find play qualitatively different roles in the dynamics. While a large
learning rate maintains a large height from the valley floor, a small batch
size injects noise facilitating exploration. We find this mechanism is crucial
for generalization because the valley floor has barriers and this exploration
above the valley floor allows SGD to quickly travel far away from the
initialization point (without being affected by barriers) and find flatter
regions, corresponding to better generalization.
| null |
http://arxiv.org/abs/1802.08770v4
|
http://arxiv.org/pdf/1802.08770v4.pdf
| null |
[
"Chen Xing",
"Devansh Arpit",
"Christos Tsirigotis",
"Yoshua Bengio"
] |
[] | 2018-02-24T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/4e0ac120e9a8b096069c2f892488d630a5c8f358/torch/optim/sgd.py#L97-L112",
"description": "**Stochastic Gradient Descent** is an iterative optimization technique that uses minibatches of data to form an expectation of the gradient, rather than the full gradient using all available data. That is for weights $w$ and a loss function $L$ we have:\r\n\r\n$$ w\\_{t+1} = w\\_{t} - \\eta\\hat{\\nabla}\\_{w}{L(w\\_{t})} $$\r\n\r\nWhere $\\eta$ is a learning rate. SGD reduces redundancy compared to batch gradient descent - which recomputes gradients for similar examples before each parameter update - so it is usually much faster.\r\n\r\n(Image Source: [here](http://rasbt.github.io/mlxtend/user_guide/general_concepts/gradient-optimization/))",
"full_name": "Stochastic Gradient Descent",
"introduced_year": 1951,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "SGD",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/adversarial-learning-of-task-oriented-neural
|
1805.11762
| null | null |
Adversarial Learning of Task-Oriented Neural Dialog Models
|
In this work, we propose an adversarial learning method for reward estimation
in reinforcement learning (RL) based task-oriented dialog models. Most of the
current RL based task-oriented dialog systems require the access to a reward
signal from either user feedback or user ratings. Such user ratings, however,
may not always be consistent or available in practice. Furthermore, online
dialog policy learning with RL typically requires a large number of queries to
users, suffering from sample efficiency problem. To address these challenges,
we propose an adversarial learning method to learn dialog rewards directly from
dialog samples. Such rewards are further used to optimize the dialog policy
with policy gradient based RL. In the evaluation in a restaurant search domain,
we show that the proposed adversarial dialog learning method achieves advanced
dialog success rate comparing to strong baseline methods. We further discuss
the covariate shift problem in online adversarial dialog learning and show how
we can address that with partial access to user feedback.
| null |
http://arxiv.org/abs/1805.11762v1
|
http://arxiv.org/pdf/1805.11762v1.pdf
|
WS 2018 7
|
[
"Bing Liu",
"Ian Lane"
] |
[
"Dialog Learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-05-30T00:00:00 |
https://aclanthology.org/W18-5041
|
https://aclanthology.org/W18-5041.pdf
|
adversarial-learning-of-task-oriented-neural-1
| null |
[] |
https://paperswithcode.com/paper/collaborative-learning-for-deep-neural
|
1805.11761
| null | null |
Collaborative Learning for Deep Neural Networks
|
We introduce collaborative learning in which multiple classifier heads of the
same network are simultaneously trained on the same training data to improve
generalization and robustness to label noise with no extra inference cost. It
acquires the strengths from auxiliary training, multi-task learning and
knowledge distillation. There are two important mechanisms involved in
collaborative learning. First, the consensus of multiple views from different
classifier heads on the same example provides supplementary information as well
as regularization to each classifier, thereby improving generalization. Second,
intermediate-level representation (ILR) sharing with backpropagation rescaling
aggregates the gradient flows from all heads, which not only reduces training
computational complexity, but also facilitates supervision to the shared
layers. The empirical results on CIFAR and ImageNet datasets demonstrate that
deep neural networks learned as a group in a collaborative way significantly
reduce the generalization error and increase the robustness to label noise.
| null |
http://arxiv.org/abs/1805.11761v2
|
http://arxiv.org/pdf/1805.11761v2.pdf
|
NeurIPS 2018 12
|
[
"Guocong Song",
"Wei Chai"
] |
[
"Knowledge Distillation",
"Multi-Task Learning"
] | 2018-05-30T00:00:00 |
http://papers.nips.cc/paper/7454-collaborative-learning-for-deep-neural-networks
|
http://papers.nips.cc/paper/7454-collaborative-learning-for-deep-neural-networks.pdf
|
collaborative-learning-for-deep-neural-1
| null |
[] |
https://paperswithcode.com/paper/data-driven-design-a-case-for-maximalist-game
|
1805.12475
| null | null |
Data-driven Design: A Case for Maximalist Game Design
|
Maximalism in art refers to drawing on and combining multiple different
sources for art creation, embracing the resulting collisions and heterogeneity.
This paper discusses the use of maximalism in game design and particularly in
data games, which are games that are generated partly based on open data. Using
Data Adventures, a series of generators that create adventure games from data
sources such as Wikipedia and OpenStreetMap, as a lens we explore several
tradeoffs and issues in maximalist game design. This includes the tension
between transformation and fidelity, between decorative and functional content,
and legal and ethical issues resulting from this type of generativity. This
paper sketches out the design space of maximalist data-driven games, a design
space that is mostly unexplored.
|
Maximalism in art refers to drawing on and combining multiple different sources for art creation, embracing the resulting collisions and heterogeneity.
|
http://arxiv.org/abs/1805.12475v1
|
http://arxiv.org/pdf/1805.12475v1.pdf
| null |
[
"Gabriella A. B. Barros",
"Michael Cerny Green",
"Antonios Liapis",
"Julian Togelius"
] |
[
"Game Design"
] | 2018-05-30T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/optimal-testing-in-the-experiment-rich-regime
|
1805.11754
| null | null |
Optimal Testing in the Experiment-rich Regime
|
Motivated by the widespread adoption of large-scale A/B testing in industry,
we propose a new experimentation framework for the setting where potential
experiments are abundant (i.e., many hypotheses are available to test), and
observations are costly; we refer to this as the experiment-rich regime. Such
scenarios require the experimenter to internalize the opportunity cost of
assigning a sample to a particular experiment. We fully characterize the
optimal policy and give an algorithm to compute it. Furthermore, we develop a
simple heuristic that also provides intuition for the optimal policy. We use
simulations based on real data to compare both the optimal algorithm and the
heuristic to other natural alternative experimental design frameworks. In
particular, we discuss the paradox of power: high-powered classical tests can
lead to highly inefficient sampling in the experiment-rich regime.
|
Motivated by the widespread adoption of large-scale A/B testing in industry, we propose a new experimentation framework for the setting where potential experiments are abundant (i.e., many hypotheses are available to test), and observations are costly; we refer to this as the experiment-rich regime.
|
http://arxiv.org/abs/1805.11754v1
|
http://arxiv.org/pdf/1805.11754v1.pdf
| null |
[
"Sven Schmit",
"Virag Shah",
"Ramesh Johari"
] |
[
"Experimental Design"
] | 2018-05-30T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/nengodl-combining-deep-learning-and
|
1805.11144
| null | null |
NengoDL: Combining deep learning and neuromorphic modelling methods
|
NengoDL is a software framework designed to combine the strengths of
neuromorphic modelling and deep learning. NengoDL allows users to construct
biologically detailed neural models, intermix those models with deep learning
elements (such as convolutional networks), and then efficiently simulate those
models in an easy-to-use, unified framework. In addition, NengoDL allows users
to apply deep learning training methods to optimize the parameters of
biological neural models. In this paper we present basic usage examples,
benchmarking, and details on the key implementation elements of NengoDL. More
details can be found at https://www.nengo.ai/nengo-dl .
|
NengoDL is a software framework designed to combine the strengths of neuromorphic modelling and deep learning.
|
http://arxiv.org/abs/1805.11144v3
|
http://arxiv.org/pdf/1805.11144v3.pdf
| null |
[
"Daniel Rasmussen"
] |
[
"Benchmarking",
"Deep Learning"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/real-valued-parametric-conditioning-of-an-rnn
|
1805.10808
| null | null |
Real-valued parametric conditioning of an RNN for interactive sound synthesis
|
A Recurrent Neural Network (RNN) for audio synthesis is trained by augmenting
the audio input with information about signal characteristics such as pitch,
amplitude, and instrument. The result after training is an audio synthesizer
that is played like a musical instrument with the desired musical
characteristics provided as continuous parametric control. The focus of this
paper is on conditioning data-driven synthesis models with real-valued
parameters, and in particular, on the ability of the system a) to generalize
and b) to be responsive to parameter values and sequences not seen during
training.
| null |
http://arxiv.org/abs/1805.10808v2
|
http://arxiv.org/pdf/1805.10808v2.pdf
| null |
[
"Lonce Wyse"
] |
[
"Audio Synthesis"
] | 2018-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/semantic-road-layout-understanding-by
|
1805.11746
| null | null |
Semantic Road Layout Understanding by Generative Adversarial Inpainting
|
Autonomous driving is becoming a reality, yet vehicles still need to rely on
complex sensor fusion to understand the scene they act in. The ability to
discern static environment and dynamic entities provides a comprehension of the
road layout that poses constraints to the reasoning process about moving
objects. We pursue this through a GAN-based semantic segmentation inpainting
model to remove all dynamic objects from the scene and focus on understanding
its static components such as streets, sidewalks and buildings. We evaluate
this task on the Cityscapes dataset and on a novel synthetically generated
dataset obtained with the CARLA simulator and specifically designed to
quantitatively evaluate semantic segmentation inpaintings. We compare our
methods with a variety of baselines working both in the RGB and segmentation
domains.
| null |
http://arxiv.org/abs/1805.11746v2
|
http://arxiv.org/pdf/1805.11746v2.pdf
| null |
[
"Lorenzo Berlincioni",
"Federico Becattini",
"Leonardo Galteri",
"Lorenzo Seidenari",
"Alberto del Bimbo"
] |
[
"Autonomous Driving",
"Segmentation",
"Semantic Segmentation",
"Sensor Fusion"
] | 2018-05-29T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/ikostrikov/pytorch-a3c/blob/48d95844755e2c3e2c7e48bbd1a7141f7212b63f/train.py#L100",
"description": "**Entropy Regularization** is a type of regularization used in [reinforcement learning](https://paperswithcode.com/methods/area/reinforcement-learning). For on-policy policy gradient based methods like [A3C](https://paperswithcode.com/method/a3c), the same mutual reinforcement behaviour leads to a highly-peaked $\\pi\\left(a\\mid{s}\\right)$ towards a few actions or action sequences, since it is easier for the actor and critic to overoptimise to a small portion of the environment. To reduce this problem, entropy regularization adds an entropy term to the loss to promote action diversity:\r\n\r\n$$H(X) = -\\sum\\pi\\left(x\\right)\\log\\left(\\pi\\left(x\\right)\\right) $$\r\n\r\nImage Credit: Wikipedia",
"full_name": "Entropy Regularization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Entropy Regularization",
"source_title": "Asynchronous Methods for Deep Reinforcement Learning",
"source_url": "http://arxiv.org/abs/1602.01783v2"
},
{
"code_snippet_url": null,
"description": "**Proximal Policy Optimization**, or **PPO**, is a policy gradient method for reinforcement learning. The motivation was to have an algorithm with the data efficiency and reliable performance of [TRPO](https://paperswithcode.com/method/trpo), while using only first-order optimization. \r\n\r\nLet $r\\_{t}\\left(\\theta\\right)$ denote the probability ratio $r\\_{t}\\left(\\theta\\right) = \\frac{\\pi\\_{\\theta}\\left(a\\_{t}\\mid{s\\_{t}}\\right)}{\\pi\\_{\\theta\\_{old}}\\left(a\\_{t}\\mid{s\\_{t}}\\right)}$, so $r\\left(\\theta\\_{old}\\right) = 1$. TRPO maximizes a “surrogate” objective:\r\n\r\n$$ L^{\\text{CPI}}\\left({\\theta}\\right) = \\hat{\\mathbb{E}}\\_{t}\\left[\\frac{\\pi\\_{\\theta}\\left(a\\_{t}\\mid{s\\_{t}}\\right)}{\\pi\\_{\\theta\\_{old}}\\left(a\\_{t}\\mid{s\\_{t}}\\right)})\\hat{A}\\_{t}\\right] = \\hat{\\mathbb{E}}\\_{t}\\left[r\\_{t}\\left(\\theta\\right)\\hat{A}\\_{t}\\right] $$\r\n\r\nWhere $CPI$ refers to a conservative policy iteration. Without a constraint, maximization of $L^{CPI}$ would lead to an excessively large policy update; hence, we PPO modifies the objective, to penalize changes to the policy that move $r\\_{t}\\left(\\theta\\right)$ away from 1:\r\n\r\n$$ J^{\\text{CLIP}}\\left({\\theta}\\right) = \\hat{\\mathbb{E}}\\_{t}\\left[\\min\\left(r\\_{t}\\left(\\theta\\right)\\hat{A}\\_{t}, \\text{clip}\\left(r\\_{t}\\left(\\theta\\right), 1-\\epsilon, 1+\\epsilon\\right)\\hat{A}\\_{t}\\right)\\right] $$\r\n\r\nwhere $\\epsilon$ is a hyperparameter, say, $\\epsilon = 0.2$. The motivation for this objective is as follows. The first term inside the min is $L^{CPI}$. The second term, $\\text{clip}\\left(r\\_{t}\\left(\\theta\\right), 1-\\epsilon, 1+\\epsilon\\right)\\hat{A}\\_{t}$ modifies the surrogate\r\nobjective by clipping the probability ratio, which removes the incentive for moving $r\\_{t}$ outside of the interval $\\left[1 − \\epsilon, 1 + \\epsilon\\right]$. 
Finally, we take the minimum of the clipped and unclipped objective, so the final objective is a lower bound (i.e., a pessimistic bound) on the unclipped objective. With this scheme, we only ignore the change in probability ratio when it would make the objective improve, and we include it when it makes the objective worse. \r\n\r\nOne detail to note is that when we apply PPO for a network where we have shared parameters for actor and critic functions, we typically add to the objective function an error term on value estimation and an entropy term to encourage exploration.",
"full_name": "Proximal Policy Optimization",
"introduced_year": 2000,
"main_collection": {
"area": "Reinforcement Learning",
"description": "**Policy Gradient Methods** try to optimize the policy function directly in reinforcement learning. This contrasts with, for example, Q-Learning, where the policy manifests itself as maximizing a value function. Below you can find a continuously updating catalog of policy gradient methods.",
"name": "Policy Gradient Methods",
"parent": null
},
"name": "PPO",
"source_title": "Proximal Policy Optimization Algorithms",
"source_url": "http://arxiv.org/abs/1707.06347v2"
},
{
"code_snippet_url": "",
"description": "CARLA is an open-source simulator for autonomous driving research. CARLA has been developed from the ground up to support development, training, and validation of autonomous urban driving systems. In addition to open-source code and protocols, CARLA provides open digital assets (urban layouts, buildings, vehicles) that were created for this purpose and can be used freely. \r\n\r\nSource: [Dosovitskiy et al.](https://arxiv.org/pdf/1711.03938v1.pdf)\r\n\r\nImage source: [Dosovitskiy et al.](https://arxiv.org/pdf/1711.03938v1.pdf)",
"full_name": "CARLA: An Open Urban Driving Simulator",
"introduced_year": 2000,
"main_collection": {
"area": "Reinforcement Learning",
"description": "",
"name": "Video Game Models",
"parent": null
},
"name": "CARLA",
"source_title": "CARLA: An Open Urban Driving Simulator",
"source_url": "http://arxiv.org/abs/1711.03938v1"
}
] |
https://paperswithcode.com/paper/superpixel-enhanced-pairwise-conditional
|
1805.11737
| null | null |
Superpixel-enhanced Pairwise Conditional Random Field for Semantic Segmentation
|
Superpixel-based Higher-order Conditional Random Fields (CRFs) are effective
in enforcing long-range consistency in pixel-wise labeling problems, such as
semantic segmentation. However, their major shortcoming is considerably longer
time to learn higher-order potentials and extra hyperparameters and/or weights
compared with pairwise models. This paper proposes a superpixel-enhanced
pairwise CRF framework that consists of the conventional pairwise as well as
our proposed superpixel-enhanced pairwise (SP-Pairwise) potentials. SP-Pairwise
potentials incorporate the superpixel-based higher-order cues by conditioning
on a segment filtered image and share the same set of parameters as the
conventional pairwise potentials. Therefore, the proposed superpixel-enhanced
pairwise CRF has a lower time complexity in parameter learning and at the same
time it outperforms higher-order CRF in terms of inference accuracy. Moreover,
the new scheme takes advantage of the pre-trained pairwise models by reusing
their parameters and/or weights, which provides a significant accuracy boost on
the basis of CRF-RNN even without training. Experiments on MSRC-21 and PASCAL
VOC 2012 dataset confirm the effectiveness of our method.
|
This paper proposes a superpixel-enhanced pairwise CRF framework that consists of the conventional pairwise as well as our proposed superpixel-enhanced pairwise (SP-Pairwise) potentials.
|
http://arxiv.org/abs/1805.11737v1
|
http://arxiv.org/pdf/1805.11737v1.pdf
| null |
[
"Li Sulimowicz",
"Ishfaq Ahmad",
"Alexander Aved"
] |
[
"Semantic Segmentation"
] | 2018-05-29T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**CRF-RNN** is a formulation of a [CRF](https://paperswithcode.com/method/crf) as a Recurrent Neural Network. Specifically it formulates mean-field approximate inference for the Conditional Random Fields with Gaussian pairwise potentials as Recurrent Neural Networks.",
"full_name": "CRF-RNN",
"introduced_year": 2000,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "CRF-RNN",
"source_title": "Conditional Random Fields as Recurrent Neural Networks",
"source_url": "http://arxiv.org/abs/1502.03240v3"
},
{
"code_snippet_url": null,
"description": "**Conditional Random Fields** or **CRFs** are a type of probabilistic graph model that take neighboring sample context into account for tasks like classification. Prediction is modeled as a graphical model, which implements dependencies between the predictions. Graph choice depends on the application, for example linear chain CRFs are popular in natural language processing, whereas in image-based tasks, the graph would connect to neighboring locations in an image to enforce that they have similar predictions.\r\n\r\nImage Credit: [Charles Sutton and Andrew McCallum, An Introduction to Conditional Random Fields](https://homepages.inf.ed.ac.uk/csutton/publications/crftut-fnt.pdf)",
"full_name": "Conditional Random Field",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Structured Prediction** methods deal with structured outputs with multiple interdependent outputs. Below you can find a continuously updating list of structured prediction methods.",
"name": "Structured Prediction",
"parent": null
},
"name": "CRF",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/structural-isomprphism-in-mathematical
|
1805.12495
| null | null |
Invariant Representation of Mathematical Expressions
|
While there exist many methods in machine learning for comparison of letter string data, most are better equipped to handle strings that represent natural language, and their performance will not hold up when presented with strings that correspond to mathematical expressions. Based on the graphical representation of the expression tree, here we propose a simple method for encoding such expressions that is only sensitive to their structural properties, and invariant to the specifics which can vary between two seemingly different, but semantically similar mathematical expressions.
| null |
https://arxiv.org/abs/1805.12495v2
|
https://arxiv.org/pdf/1805.12495v2.pdf
| null |
[
"Reza Shahbazi"
] |
[
"BIG-bench Machine Learning"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/learn-to-combine-modalities-in-multimodal
|
1805.11730
| null | null |
Learn to Combine Modalities in Multimodal Deep Learning
|
Combining complementary information from multiple modalities is intuitively
appealing for improving the performance of learning-based approaches. However,
it is challenging to fully leverage different modalities due to practical
challenges such as varying levels of noise and conflicts between modalities.
Existing methods do not adopt a joint approach to capturing synergies between
the modalities while simultaneously filtering noise and resolving conflicts on
a per sample basis. In this work we propose a novel deep neural network based
technique that multiplicatively combines information from different source
modalities. Thus the model training process automatically focuses on
information from more reliable modalities while reducing emphasis on the less
reliable modalities. Furthermore, we propose an extension that multiplicatively
combines not only the single-source modalities, but a set of mixtured source
modalities to better capture cross-modal signal correlations. We demonstrate
the effectiveness of our proposed technique by presenting empirical results on
three multimodal classification tasks from different domains. The results show
consistent accuracy improvements on all three tasks.
|
Combining complementary information from multiple modalities is intuitively appealing for improving the performance of learning-based approaches.
|
http://arxiv.org/abs/1805.11730v1
|
http://arxiv.org/pdf/1805.11730v1.pdf
| null |
[
"Kuan Liu",
"Yanen Li",
"Ning Xu",
"Prem Natarajan"
] |
[
"Deep Learning",
"Multimodal Deep Learning"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/headon-real-time-reenactment-of-human
|
1805.11729
| null | null |
HeadOn: Real-time Reenactment of Human Portrait Videos
|
We propose HeadOn, the first real-time source-to-target reenactment approach
for complete human portrait videos that enables transfer of torso and head
motion, face expression, and eye gaze. Given a short RGB-D video of the target
actor, we automatically construct a personalized geometry proxy that embeds a
parametric head, eye, and kinematic torso model. A novel real-time reenactment
algorithm employs this proxy to photo-realistically map the captured motion
from the source actor to the target actor. On top of the coarse geometric
proxy, we propose a video-based rendering technique that composites the
modified target portrait video via view- and pose-dependent texturing, and
creates photo-realistic imagery of the target actor under novel torso and head
poses, facial expressions, and gaze directions. To this end, we propose a
robust tracking of the face and torso of the source actor. We extensively
evaluate our approach and show significant improvements in enabling much
greater flexibility in creating realistic reenacted output videos.
| null |
http://arxiv.org/abs/1805.11729v1
|
http://arxiv.org/pdf/1805.11729v1.pdf
| null |
[
"Justus Thies",
"Michael Zollhöfer",
"Christian Theobalt",
"Marc Stamminger",
"Matthias Nießner"
] |
[] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/11k-hands-gender-recognition-and-biometric
|
1711.04322
| null | null |
11K Hands: Gender recognition and biometric identification using a large dataset of hand images
|
The human hand possesses distinctive features which can reveal gender
information. In addition, the hand is considered one of the primary biometric
traits used to identify a person. In this work, we propose a large dataset of
human hand images (dorsal and palmar sides) with detailed ground-truth
information for gender recognition and biometric identification. Using this
dataset, a convolutional neural network (CNN) can be trained effectively for
the gender recognition task. Based on this, we design a two-stream CNN to
tackle the gender recognition problem. This trained model is then used as a
feature extractor to feed a set of support vector machine classifiers for the
biometric identification task. We show that the dorsal side of hand images,
captured by a regular digital camera, conveys effective distinctive features
similar to, if not better than, those available in the palmar hand images. To
facilitate access to the proposed dataset and replication of our experiments,
the dataset, trained CNN models, and Matlab source code are available at
(https://goo.gl/rQJndd).
|
In this work, we propose a large dataset of human hand images (dorsal and palmar sides) with detailed ground-truth information for gender recognition and biometric identification.
|
http://arxiv.org/abs/1711.04322v9
|
http://arxiv.org/pdf/1711.04322v9.pdf
| null |
[
"Mahmoud Afifi"
] |
[
"Animal Pose Estimation",
"Object Detection"
] | 2017-11-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/on-consistency-of-compressive-spectral
|
1702.03522
| null | null |
On Consistency of Compressive Spectral Clustering
|
Spectral clustering is one of the most popular methods for community
detection in graphs. A key step in spectral clustering algorithms is the eigen
decomposition of the $n{\times}n$ graph Laplacian matrix to extract its $k$
leading eigenvectors, where $k$ is the desired number of clusters among $n$
objects. This is prohibitively complex to implement for very large datasets.
However, it has recently been shown that it is possible to bypass the eigen
decomposition by computing an approximate spectral embedding through graph
filtering of random signals. In this paper, we analyze the working of spectral
clustering performed via graph filtering on the stochastic block model.
Specifically, we characterize the effects of sparsity, dimensionality and
filter approximation error on the consistency of the algorithm in recovering
planted clusters.
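The eigenvector-extraction step described above can be illustrated on a toy graph. The sketch below (a minimal illustration, not the paper's graph-filtering method; `fiedler_partition` is a hypothetical helper) approximates the Fiedler vector of the graph Laplacian by power iteration on a shifted Laplacian, deflating the constant eigenvector, and reads the two planted clusters off its sign pattern.

```python
def fiedler_partition(adj, iters=200):
    """Approximate the Fiedler vector (eigenvector of the second-smallest
    Laplacian eigenvalue) by power iteration on cI - L, projecting out
    the constant eigenvector. The sign pattern gives a 2-way cut."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    c = 2 * max(deg)  # shift so the target eigenvector becomes dominant

    def matvec(v):
        # (cI - L) v  where  L = D - A
        return [c * v[i] - deg[i] * v[i]
                + sum(adj[i][j] * v[j] for j in range(n))
                for i in range(n)]

    v = [(-1.0) ** i + 0.01 * i for i in range(n)]  # arbitrary start
    for _ in range(iters):
        mean = sum(v) / n
        v = [x - mean for x in v]          # deflate the constant vector
        v = matvec(v)
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    return [0 if x > 0 else 1 for x in v]  # cluster labels from signs

# Path graph 0-1-2-3: the natural 2-way cut is {0,1} vs {2,3}.
A = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
labels = fiedler_partition(A)
```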
| null |
http://arxiv.org/abs/1702.03522v3
|
http://arxiv.org/pdf/1702.03522v3.pdf
| null |
[
"Muni Sreenivas Pydi",
"Ambedkar Dukkipati"
] |
[
"Clustering",
"Community Detection",
"Stochastic Block Model"
] | 2017-02-12T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "Spectral clustering has attracted increasing attention due to\r\nthe promising ability in dealing with nonlinearly separable datasets [15], [16]. In spectral clustering, the spectrum of the graph Laplacian is used to reveal the cluster structure. The spectral clustering algorithm mainly consists of two steps: 1) constructs the low dimensional embedded representation of the data based on the eigenvectors of the graph Laplacian, 2) applies k-means on the constructed low dimensional data to obtain the clustering result. Thus,",
"full_name": "Spectral Clustering",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Clustering** methods cluster a dataset so that similar datapoints are located in the same group. Below you can find a continuously updating list of clustering methods.",
"name": "Clustering",
"parent": null
},
"name": "Spectral Clustering",
"source_title": "A Tutorial on Spectral Clustering",
"source_url": "http://arxiv.org/abs/0711.0189v1"
}
] |
https://paperswithcode.com/paper/random-mesh-projectors-for-inverse-problems
|
1805.11718
| null |
HyGcghRct7
|
Random mesh projectors for inverse problems
|
We propose a new learning-based approach to solve ill-posed inverse problems
in imaging. We address the case where ground truth training samples are rare
and the problem is severely ill-posed - both because of the underlying physics
and because we can only get few measurements. This setting is common in
geophysical imaging and remote sensing. We show that in this case the common
approach to directly learn the mapping from the measured data to the
reconstruction becomes unstable. Instead, we propose to first learn an ensemble
of simpler mappings from the data to projections of the unknown image into
random piecewise-constant subspaces. We then combine the projections to form a
final reconstruction by solving a deconvolution-like problem. We show
experimentally that the proposed method is more robust to measurement noise and
corruptions not seen during training than a directly learned inverse.
|
We show that in this case the common approach to directly learn the mapping from the measured data to the reconstruction becomes unstable.
|
http://arxiv.org/abs/1805.11718v3
|
http://arxiv.org/pdf/1805.11718v3.pdf
|
ICLR 2019 5
|
[
"Sidharth Gupta",
"Konik Kothari",
"Maarten V. de Hoop",
"Ivan Dokmanić"
] |
[] | 2018-05-29T00:00:00 |
https://openreview.net/forum?id=HyGcghRct7
|
https://openreview.net/pdf?id=HyGcghRct7
|
random-mesh-projectors-for-inverse-problems-1
| null |
[] |
https://paperswithcode.com/paper/deep-video-portraits
|
1805.11714
| null | null |
Deep Video Portraits
|
We present a novel approach that enables photo-realistic re-animation of
portrait videos using only an input video. In contrast to existing approaches
that are restricted to manipulations of facial expressions only, we are the
first to transfer the full 3D head position, head rotation, face expression,
eye gaze, and eye blinking from a source actor to a portrait video of a target
actor. The core of our approach is a generative neural network with a novel
space-time architecture. The network takes as input synthetic renderings of a
parametric face model, based on which it predicts photo-realistic video frames
for a given target actor. The realism in this rendering-to-video transfer is
achieved by careful adversarial training, and as a result, we can create
modified target videos that mimic the behavior of the synthetically-created
input. In order to enable source-to-target video re-animation, we render a
synthetic target video with the reconstructed head animation parameters from a
source video, and feed it into the trained network -- thus taking full control
of the target. With the ability to freely recombine source and target
parameters, we are able to demonstrate a large variety of video rewrite
applications without explicitly modeling hair, body or background. For
instance, we can reenact the full head using interactive user-controlled
editing, and realize high-fidelity visual dubbing. To demonstrate the high
quality of our output, we conduct an extensive series of experiments and
evaluations, where for instance a user study shows that our video edits are
hard to detect.
| null |
http://arxiv.org/abs/1805.11714v1
|
http://arxiv.org/pdf/1805.11714v1.pdf
| null |
[
"Hyeongwoo Kim",
"Pablo Garrido",
"Ayush Tewari",
"Weipeng Xu",
"Justus Thies",
"Matthias Nießner",
"Patrick Pérez",
"Christian Richardt",
"Michael Zollhöfer",
"Christian Theobalt"
] |
[
"Face Model"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-novel-multi-clustering-method-for
|
1805.11712
| null | null |
A Novel Multi-clustering Method for Hierarchical Clusterings, Based on Boosting
|
Bagging and boosting are proved to be the best methods of building multiple
classifiers in classification combination problems. In the area of "flat
clustering" problems, it is also recognized that multi-clustering methods based
on boosting provide clusterings of improved quality. In this paper, we
introduce a novel multi-clustering method for "hierarchical clusterings" based
on boosting theory, which creates a more stable hierarchical clustering of a
dataset. The proposed algorithm includes a boosting iteration in which a
bootstrap of samples is created by weighted random sampling of elements from
the original dataset. A hierarchical clustering algorithm is then applied to
the selected subsample to build a dendrogram which describes the hierarchy.
Finally, dissimilarity description matrices of multiple dendrogram results are
combined to a consensus one, using a hierarchical-clustering-combination
approach. Experiments on real popular datasets show that boosted method
provides superior quality solutions compared to standard hierarchical
clustering methods.
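A drastically simplified sketch of the boosting loop above (illustrative only: the sample weights are held uniform rather than updated, and a 1-D gap-threshold clusterer stands in for a real hierarchical algorithm; `threshold_clusters` and `boosted_coassociation` are hypothetical names): resample with weights, cluster each bootstrap, and accumulate a co-association matrix as the consensus dissimilarity description.

```python
import random

def threshold_clusters(points, idx, t):
    """Toy stand-in for cutting a dendrogram: on sorted 1-D samples,
    start a new cluster whenever the gap to the previous point exceeds t
    (equivalent to cutting a single-linkage dendrogram at height t)."""
    order = sorted(idx, key=lambda i: points[i])
    labels, lab = {}, 0
    for a, b in zip(order, order[1:]):
        labels[a] = lab
        if points[b] - points[a] > t:
            lab += 1
    labels[order[-1]] = lab
    return labels

def boosted_coassociation(points, rounds=50, t=1.0, seed=0):
    """Weighted bootstrap rounds; each round clusters a resample and
    votes into a co-association (similarity) matrix."""
    rng = random.Random(seed)
    n = len(points)
    weights = [1.0] * n            # real boosting would update these
    co = [[0.0] * n for _ in range(n)]
    for _ in range(rounds):
        sample = rng.choices(range(n), weights=weights, k=n)
        idx = sorted(set(sample))
        labels = threshold_clusters(points, idx, t)
        for i in idx:
            for j in idx:
                if labels[i] == labels[j]:
                    co[i][j] += 1.0 / rounds
    return co

data = [0.0, 0.2, 0.4, 5.0, 5.3, 5.1]   # two well-separated groups
co = boosted_coassociation(data)
```

Points in the same group accumulate high co-association across rounds, while cross-group pairs never do, so a final hierarchical clustering of the consensus matrix recovers the stable structure.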
| null |
http://arxiv.org/abs/1805.11712v1
|
http://arxiv.org/pdf/1805.11712v1.pdf
| null |
[
"Elaheh Rashedi",
"Abdolreza Mirzaei"
] |
[
"Clustering"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/depth-and-nonlinearity-induce-implicit
|
1805.11711
| null | null |
Depth and nonlinearity induce implicit exploration for RL
|
The question of how to explore, i.e., take actions with uncertain outcomes to
learn about possible future rewards, is a key question in reinforcement
learning (RL). Here, we show a surprising result: Q-learning with a
nonlinear Q-function and no explicit exploration (i.e., a purely greedy policy)
can learn several standard benchmark tasks, including mountain car, equally
well as, or better than, the most commonly-used $\epsilon$-greedy exploration.
We carefully examine this result and show that both the depth of the Q-network
and the type of nonlinearity are important to induce such deterministic
exploration.
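For reference, the Q-learning update the paper builds on can be shown in tabular form on a toy chain MDP (an illustrative sketch, not the paper's deep Q-network; `q_learning_chain` is a hypothetical name). Note how a purely greedy tie-break already drives the agent rightward here, echoing the paper's point about implicit exploration.

```python
import random

def q_learning_chain(n_states=5, episodes=300, alpha=0.5, gamma=0.9,
                     eps=0.1, seed=0):
    """Tabular Q-learning on a chain MDP: start at state 0, reward 1 for
    reaching the rightmost state. Update rule:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0=left, 1=right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            if rng.random() < eps:
                a = rng.randrange(2)               # epsilon-greedy explore
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1  # greedy (ties go right)
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning_chain()
policy = [0 if q[0] > q[1] else 1 for q in Q]  # greedy policy per state
```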
| null |
http://arxiv.org/abs/1805.11711v1
|
http://arxiv.org/pdf/1805.11711v1.pdf
| null |
[
"Justas Dauparas",
"Ryota Tomioka",
"Katja Hofmann"
] |
[
"Q-Learning",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-05-29T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Q-Learning** is an off-policy temporal difference control algorithm:\r\n\r\n$$Q\\left(S\\_{t}, A\\_{t}\\right) \\leftarrow Q\\left(S\\_{t}, A\\_{t}\\right) + \\alpha\\left[R_{t+1} + \\gamma\\max\\_{a}Q\\left(S\\_{t+1}, a\\right) - Q\\left(S\\_{t}, A\\_{t}\\right)\\right] $$\r\n\r\nThe learned action-value function $Q$ directly approximates $q\\_{*}$, the optimal action-value function, independent of the policy being followed.\r\n\r\nSource: Sutton and Barto, Reinforcement Learning, 2nd Edition",
"full_name": "Q-Learning",
"introduced_year": 1984,
"main_collection": {
"area": "Reinforcement Learning",
"description": "",
"name": "Off-Policy TD Control",
"parent": null
},
"name": "Q-Learning",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/active-and-adaptive-sequential-learning
|
1805.11710
| null | null |
Active and Adaptive Sequential learning
|
A framework is introduced for actively and adaptively solving a sequence of
machine learning problems, which are changing in bounded manner from one time
step to the next. An algorithm is developed that actively queries the labels of
the most informative samples from an unlabeled data pool, and that adapts to
the change by utilizing the information acquired in the previous steps. Our
analysis shows that the proposed active learning algorithm based on stochastic
gradient descent achieves a near-optimal excess risk performance for maximum
likelihood estimation. Furthermore, an estimator of the change in the learning
problems using the active learning samples is constructed, which provides an
adaptive sample size selection rule that guarantees the excess risk is bounded
for a sufficiently large number of time steps. Experiments with synthetic and
real data are presented to validate our algorithm and theoretical results.
| null |
http://arxiv.org/abs/1805.11710v1
|
http://arxiv.org/pdf/1805.11710v1.pdf
| null |
[
"Yuheng Bu",
"Jiaxun Lu",
"Venugopal V. Veeravalli"
] |
[
"Active Learning"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/supervised-policy-update-for-deep
|
1805.11706
| null | null |
Supervised Policy Update for Deep Reinforcement Learning
|
We propose a new sample-efficient methodology, called Supervised Policy
Update (SPU), for deep reinforcement learning. Starting with data generated by
the current policy, SPU formulates and solves a constrained optimization
problem in the non-parameterized proximal policy space. Using supervised
regression, it then converts the optimal non-parameterized policy to a
parameterized policy, from which it draws new samples. The methodology is
general in that it applies to both discrete and continuous action spaces, and
can handle a wide variety of proximity constraints for the non-parameterized
optimization problem. We show how the Natural Policy Gradient and Trust Region
Policy Optimization (NPG/TRPO) problems, and the Proximal Policy Optimization
(PPO) problem can be addressed by this methodology. The SPU implementation is
much simpler than TRPO. In terms of sample efficiency, our extensive
experiments show SPU outperforms TRPO in Mujoco simulated robotic tasks and
outperforms PPO in Atari video game tasks.
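The PPO objective referenced in this abstract uses a clipped surrogate; a minimal per-sample sketch of that clipping (illustrative only, not the SPU method itself; `clipped_surrogate` is a hypothetical name):

```python
def clipped_surrogate(ratio, advantage, eps=0.2):
    """PPO's clipped objective for one sample:
    min(r * A, clip(r, 1-eps, 1+eps) * A).
    Gains from pushing the ratio outside [1-eps, 1+eps] are cut off,
    giving a pessimistic bound on the unclipped surrogate."""
    clipped = max(1.0 - eps, min(1.0 + eps, ratio))
    return min(ratio * advantage, clipped * advantage)

# Inside the trust region the objective follows the ratio...
inside = clipped_surrogate(1.1, advantage=2.0)    # 1.1 * 2.0 = 2.2
# ...beyond it, the positive-advantage gain is capped at (1+eps)*A,
capped = clipped_surrogate(1.5, advantage=2.0)    # 1.2 * 2.0 = 2.4, not 3.0
# ...and with negative advantage, the worse of the two terms is kept.
pessim = clipped_surrogate(0.5, advantage=-1.0)   # 0.8 * -1.0 = -0.8
```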
|
We show how the Natural Policy Gradient and Trust Region Policy Optimization (NPG/TRPO) problems, and the Proximal Policy Optimization (PPO) problem can be addressed by this methodology.
|
http://arxiv.org/abs/1805.11706v4
|
http://arxiv.org/pdf/1805.11706v4.pdf
|
ICLR 2019
|
[
"Quan Vuong",
"Yiming Zhang",
"Keith W. Ross"
] |
[
"Deep Reinforcement Learning",
"MuJoCo",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-05-29T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/ikostrikov/pytorch-a3c/blob/48d95844755e2c3e2c7e48bbd1a7141f7212b63f/train.py#L100",
"description": "**Entropy Regularization** is a type of regularization used in [reinforcement learning](https://paperswithcode.com/methods/area/reinforcement-learning). For on-policy policy gradient based methods like [A3C](https://paperswithcode.com/method/a3c), the same mutual reinforcement behaviour leads to a highly-peaked $\\pi\\left(a\\mid{s}\\right)$ towards a few actions or action sequences, since it is easier for the actor and critic to overoptimise to a small portion of the environment. To reduce this problem, entropy regularization adds an entropy term to the loss to promote action diversity:\r\n\r\n$$H(X) = -\\sum\\pi\\left(x\\right)\\log\\left(\\pi\\left(x\\right)\\right) $$\r\n\r\nImage Credit: Wikipedia",
"full_name": "Entropy Regularization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Entropy Regularization",
"source_title": "Asynchronous Methods for Deep Reinforcement Learning",
"source_url": "http://arxiv.org/abs/1602.01783v2"
},
{
"code_snippet_url": null,
"description": "**Proximal Policy Optimization**, or **PPO**, is a policy gradient method for reinforcement learning. The motivation was to have an algorithm with the data efficiency and reliable performance of [TRPO](https://paperswithcode.com/method/trpo), while using only first-order optimization. \r\n\r\nLet $r\\_{t}\\left(\\theta\\right)$ denote the probability ratio $r\\_{t}\\left(\\theta\\right) = \\frac{\\pi\\_{\\theta}\\left(a\\_{t}\\mid{s\\_{t}}\\right)}{\\pi\\_{\\theta\\_{old}}\\left(a\\_{t}\\mid{s\\_{t}}\\right)}$, so $r\\left(\\theta\\_{old}\\right) = 1$. TRPO maximizes a “surrogate” objective:\r\n\r\n$$ L^{\\text{CPI}}\\left({\\theta}\\right) = \\hat{\\mathbb{E}}\\_{t}\\left[\\frac{\\pi\\_{\\theta}\\left(a\\_{t}\\mid{s\\_{t}}\\right)}{\\pi\\_{\\theta\\_{old}}\\left(a\\_{t}\\mid{s\\_{t}}\\right)})\\hat{A}\\_{t}\\right] = \\hat{\\mathbb{E}}\\_{t}\\left[r\\_{t}\\left(\\theta\\right)\\hat{A}\\_{t}\\right] $$\r\n\r\nWhere $CPI$ refers to a conservative policy iteration. Without a constraint, maximization of $L^{CPI}$ would lead to an excessively large policy update; hence, we PPO modifies the objective, to penalize changes to the policy that move $r\\_{t}\\left(\\theta\\right)$ away from 1:\r\n\r\n$$ J^{\\text{CLIP}}\\left({\\theta}\\right) = \\hat{\\mathbb{E}}\\_{t}\\left[\\min\\left(r\\_{t}\\left(\\theta\\right)\\hat{A}\\_{t}, \\text{clip}\\left(r\\_{t}\\left(\\theta\\right), 1-\\epsilon, 1+\\epsilon\\right)\\hat{A}\\_{t}\\right)\\right] $$\r\n\r\nwhere $\\epsilon$ is a hyperparameter, say, $\\epsilon = 0.2$. The motivation for this objective is as follows. The first term inside the min is $L^{CPI}$. The second term, $\\text{clip}\\left(r\\_{t}\\left(\\theta\\right), 1-\\epsilon, 1+\\epsilon\\right)\\hat{A}\\_{t}$ modifies the surrogate\r\nobjective by clipping the probability ratio, which removes the incentive for moving $r\\_{t}$ outside of the interval $\\left[1 − \\epsilon, 1 + \\epsilon\\right]$. 
Finally, we take the minimum of the clipped and unclipped objective, so the final objective is a lower bound (i.e., a pessimistic bound) on the unclipped objective. With this scheme, we only ignore the change in probability ratio when it would make the objective improve, and we include it when it makes the objective worse. \r\n\r\nOne detail to note is that when we apply PPO for a network where we have shared parameters for actor and critic functions, we typically add to the objective function an error term on value estimation and an entropy term to encourage exploration.",
"full_name": "Proximal Policy Optimization",
"introduced_year": 2000,
"main_collection": {
"area": "Reinforcement Learning",
"description": "**Policy Gradient Methods** try to optimize the policy function directly in reinforcement learning. This contrasts with, for example, Q-Learning, where the policy manifests itself as maximizing a value function. Below you can find a continuously updating catalog of policy gradient methods.",
"name": "Policy Gradient Methods",
"parent": null
},
"name": "PPO",
"source_title": "Proximal Policy Optimization Algorithms",
"source_url": "http://arxiv.org/abs/1707.06347v2"
},
{
"code_snippet_url": null,
"description": "**Trust Region Policy Optimization**, or **TRPO**, is a policy gradient method in reinforcement learning that avoids parameter updates that change the policy too much with a KL divergence constraint on the size of the policy update at each iteration.\r\n\r\nTake the case of off-policy reinforcement learning, where the policy $\\beta$ for collecting trajectories on rollout workers is different from the policy $\\pi$ to optimize for. The objective function in an off-policy model measures the total advantage over the state visitation distribution and actions, while the mismatch between the training data distribution and the true policy state distribution is compensated with an importance sampling estimator:\r\n\r\n$$ J\\left(\\theta\\right) = \\sum\\_{s\\in{S}}p^{\\pi\\_{\\theta\\_{old}}}\\sum\\_{a\\in\\mathcal{A}}\\left(\\pi\\_{\\theta}\\left(a\\mid{s}\\right)\\hat{A}\\_{\\theta\\_{old}}\\left(s, a\\right)\\right) $$\r\n\r\n$$ J\\left(\\theta\\right) = \\sum\\_{s\\in{S}}p^{\\pi\\_{\\theta\\_{old}}}\\sum\\_{a\\in\\mathcal{A}}\\left(\\beta\\left(a\\mid{s}\\right)\\frac{\\pi\\_{\\theta}\\left(a\\mid{s}\\right)}{\\beta\\left(a\\mid{s}\\right)}\\hat{A}\\_{\\theta\\_{old}}\\left(s, a\\right)\\right) $$\r\n\r\n$$ J\\left(\\theta\\right) = \\mathbb{E}\\_{s\\sim{p}^{\\pi\\_{\\theta\\_{old}}}, a\\sim{\\beta}} \\left(\\frac{\\pi\\_{\\theta}\\left(a\\mid{s}\\right)}{\\beta\\left(a\\mid{s}\\right)}\\hat{A}\\_{\\theta\\_{old}}\\left(s, a\\right)\\right)$$\r\n\r\nWhen training on policy, theoretically the policy for collecting data is same as the policy that we want to optimize. However, when rollout workers and optimizers are running in parallel asynchronously, the behavior policy can get stale. 
TRPO considers this subtle difference: It labels the behavior policy as $\\pi\\_{\\theta\\_{old}}\\left(a\\mid{s}\\right)$ and thus the objective function becomes:\r\n\r\n$$ J\\left(\\theta\\right) = \\mathbb{E}\\_{s\\sim{p}^{\\pi\\_{\\theta\\_{old}}}, a\\sim{\\pi\\_{\\theta\\_{old}}}} \\left(\\frac{\\pi\\_{\\theta}\\left(a\\mid{s}\\right)}{\\pi\\_{\\theta\\_{old}}\\left(a\\mid{s}\\right)}\\hat{A}\\_{\\theta\\_{old}}\\left(s, a\\right)\\right)$$\r\n\r\nTRPO aims to maximize the objective function $J\\left(\\theta\\right)$ subject to a trust region constraint which enforces the distance between old and new policies measured by KL-divergence to be small enough, within a parameter $\\delta$:\r\n\r\n$$ \\mathbb{E}\\_{s\\sim{p}^{\\pi\\_{\\theta\\_{old}}}} \\left[D\\_{KL}\\left(\\pi\\_{\\theta\\_{old}}\\left(.\\mid{s}\\right)\\mid\\mid\\pi\\_{\\theta}\\left(.\\mid{s}\\right)\\right)\\right] \\leq \\delta$$",
"full_name": "Trust Region Policy Optimization",
"introduced_year": 2000,
"main_collection": {
"area": "Reinforcement Learning",
"description": "**Policy Gradient Methods** try to optimize the policy function directly in reinforcement learning. This contrasts with, for example, Q-Learning, where the policy manifests itself as maximizing a value function. Below you can find a continuously updating catalog of policy gradient methods.",
"name": "Policy Gradient Methods",
"parent": null
},
"name": "TRPO",
"source_title": "Trust Region Policy Optimization",
"source_url": "http://arxiv.org/abs/1502.05477v5"
}
] |
https://paperswithcode.com/paper/deep-semantic-architecture-with
|
1805.11704
| null | null |
Deep Semantic Architecture with discriminative feature visualization for neuroimage analysis
|
Neuroimaging data analysis often involves \emph{a-priori} selection of data
features to study the underlying neural activity. Since this could lead to
sub-optimal feature selection and thereby prevent the detection of subtle
patterns in neural activity, data-driven methods have recently gained
popularity for optimizing neuroimaging data analysis pipelines and thereby,
improving our understanding of neural mechanisms. In this context, we developed
a deep convolutional architecture that can identify discriminating patterns in
neuroimaging data and applied it to electroencephalography (EEG) recordings
collected from 25 subjects performing a hand motor task before and after a rest
period or a bout of exercise. The deep network was trained to classify subjects
into exercise and control groups based on differences in their EEG signals.
Subsequently, we developed a novel method termed the cue-combination for Class
Activation Map (ccCAM), which enabled us to identify discriminating
spatio-temporal features within definite frequency bands (23--33 Hz) and assess
the effects of exercise on the brain. Additionally, the proposed architecture
allowed the visualization of the differences in the propagation of underlying
neural activity across the cortex between the two groups, for the first time to
our knowledge. Our results demonstrate the feasibility of using deep network
architectures for neuroimaging analysis in different contexts such as, for the
identification of robust brain biomarkers to better characterize and
potentially treat neurological disorders.
| null |
http://arxiv.org/abs/1805.11704v2
|
http://arxiv.org/pdf/1805.11704v2.pdf
| null |
[
"Arna Ghosh",
"Fabien dal Maso",
"Marc Roig",
"Georgios D Mitsis",
"Marie-Hélène Boudrias"
] |
[
"EEG",
"Electroencephalogram (EEG)",
"feature selection"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/channel-gating-neural-networks
|
1805.12549
| null | null |
Channel Gating Neural Networks
|
This paper introduces channel gating, a dynamic, fine-grained, and hardware-efficient pruning scheme to reduce the computation cost for convolutional neural networks (CNNs). Channel gating identifies regions in the features that contribute less to the classification result, and skips the computation on a subset of the input channels for these ineffective regions. Unlike static network pruning, channel gating optimizes CNN inference at run-time by exploiting input-specific characteristics, which allows substantially reducing the compute cost with almost no accuracy loss. We experimentally show that applying channel gating in state-of-the-art networks achieves 2.7-8.0$\times$ reduction in floating-point operations (FLOPs) and 2.0-4.4$\times$ reduction in off-chip memory accesses with a minimal accuracy loss on CIFAR-10. Combining our method with knowledge distillation reduces the compute cost of ResNet-18 by 2.6$\times$ without accuracy drop on ImageNet. We further demonstrate that channel gating can be realized in hardware efficiently. Our approach exhibits sparsity patterns that are well-suited to dense systolic arrays with minimal additional hardware. We have designed an accelerator for channel gating networks, which can be implemented using either FPGAs or ASICs. Running a quantized ResNet-18 model for ImageNet, our accelerator achieves an encouraging speedup of 2.4$\times$ on average, with a theoretical FLOP reduction of 2.8$\times$.
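The run-time skipping idea can be caricatured in plain Python (a schematic analogue, not the paper's CNN accelerator; `gated_channel_sum` is a hypothetical name): compute a partial sum over a base subset of input channels, and spend compute on the remaining channels only when the partial result clears a gate threshold.

```python
def gated_channel_sum(x, w, base_frac=0.5, threshold=0.0):
    """Toy channel-gating for one output unit: compute a partial sum over
    a base subset of input channels; only if that partial result looks
    'effective' (exceeds the threshold) compute the remaining channels.
    Returns (output, channels_actually_used)."""
    n = len(x)
    p = max(1, int(n * base_frac))
    partial = sum(x[i] * w[i] for i in range(p))
    if partial <= threshold:          # gate closed: skip the rest
        return partial, p
    rest = sum(x[i] * w[i] for i in range(p, n))
    return partial + rest, n

x_strong = [1.0, 1.0, 1.0, 1.0]      # region that matters
x_weak   = [-1.0, -1.0, 1.0, 1.0]    # region deemed ineffective
w        = [0.5, 0.5, 0.5, 0.5]
y1, used1 = gated_channel_sum(x_strong, w)   # gate opens: full compute
y2, used2 = gated_channel_sum(x_weak, w)     # gate closes: half compute
```

Applied per spatial location in a convolution, this input-dependent skipping is what lets the compute cost drop with little accuracy loss, at the price of irregular (but, per the paper, hardware-friendly) sparsity.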
|
Combining our method with knowledge distillation reduces the compute cost of ResNet-18 by 2.6$\times$ without accuracy drop on ImageNet.
|
https://arxiv.org/abs/1805.12549v2
|
https://arxiv.org/pdf/1805.12549v2.pdf
|
NeurIPS 2019 12
|
[
"Weizhe Hua",
"Yuan Zhou",
"Christopher De Sa",
"Zhiru Zhang",
"G. Edward Suh"
] |
[
"Knowledge Distillation",
"Network Pruning"
] | 2018-05-29T00:00:00 |
http://papers.nips.cc/paper/8464-channel-gating-neural-networks
|
http://papers.nips.cc/paper/8464-channel-gating-neural-networks.pdf
|
channel-gating-neural-networks-1
| null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Pruning",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Model Compression",
"parent": null
},
"name": "Pruning",
"source_title": "Pruning Filters for Efficient ConvNets",
"source_url": "http://arxiv.org/abs/1608.08710v3"
},
{
"code_snippet_url": "https://research.google/blog/auto-generated-summaries-in-google-docs/",
"description": "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.\r\nSource: [Distilling the Knowledge in a Neural Network](https://arxiv.org/abs/1503.02531)",
"full_name": "Knowledge Distillation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Knowledge Distillation",
"parent": null
},
"name": "Knowledge Distillation",
"source_title": "Distilling the Knowledge in a Neural Network",
"source_url": "http://arxiv.org/abs/1503.02531v1"
}
] |
https://paperswithcode.com/paper/can-dnns-learn-to-lipread-full-sentences
|
1805.11685
| null | null |
Can DNNs Learn to Lipread Full Sentences?
|
Finding visual features and suitable models for lipreading tasks that are
more complex than a well-constrained vocabulary has proven challenging. This
paper explores state-of-the-art Deep Neural Network architectures for
lipreading based on a Sequence to Sequence Recurrent Neural Network. We report
results for both hand-crafted and 2D/3D Convolutional Neural Network visual
front-ends, online monotonic attention, and a joint Connectionist Temporal
Classification-Sequence-to-Sequence loss. The system is evaluated on the
publicly available TCD-TIMIT dataset, with 59 speakers and a vocabulary of over
6000 words. Results show a major improvement over a Hidden Markov Model
framework. A fuller analysis of performance across visemes demonstrates that
the network is not only learning the language model, but actually learning to
lipread.
| null |
http://arxiv.org/abs/1805.11685v1
|
http://arxiv.org/pdf/1805.11685v1.pdf
| null |
[
"George Sterpu",
"Christian Saam",
"Naomi Harte"
] |
[
"General Classification",
"Language Modeling",
"Language Modelling",
"Lipreading"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/efficient-graph-based-word-sense-induction-by
|
1804.03257
| null | null |
Efficient Graph-based Word Sense Induction by Distributional Inclusion Vector Embeddings
|
Word sense induction (WSI), which addresses polysemy by unsupervised
discovery of multiple word senses, resolves ambiguities for downstream NLP
tasks and also makes word representations more interpretable. This paper
proposes an accurate and efficient graph-based method for WSI that builds a
global non-negative vector embedding basis (whose vectors are interpretable like
topics) and clusters the basis indexes in the ego network of each polysemous
word. By adopting distributional inclusion vector embeddings as our basis
formation model, we avoid the expensive step of nearest neighbor search that
plagues other graph-based methods without sacrificing the quality of sense
clusters. Experiments on three datasets show that our proposed method produces
similar or better sense clusters and embeddings compared with previous
state-of-the-art methods while being significantly more efficient.
| null |
http://arxiv.org/abs/1804.03257v2
|
http://arxiv.org/pdf/1804.03257v2.pdf
|
WS 2018 6
|
[
"Haw-Shiuan Chang",
"Amol Agrawal",
"Ananya Ganesh",
"Anirudha Desai",
"Vinayak Mathur",
"Alfred Hough",
"Andrew McCallum"
] |
[
"Word Sense Induction"
] | 2018-04-09T00:00:00 |
https://aclanthology.org/W18-1706
|
https://aclanthology.org/W18-1706.pdf
|
efficient-graph-based-word-sense-induction-by-1
| null |
[] |
https://paperswithcode.com/paper/probabilistic-trajectory-segmentation-by
|
1806.06063
| null | null |
Probabilistic Trajectory Segmentation by Means of Hierarchical Dirichlet Process Switching Linear Dynamical Systems
|
Using movement primitive libraries is an effective means to enable robots to solve more complex tasks. In order to build these movement libraries, current algorithms require a prior segmentation of the demonstration trajectories. A promising approach is to model the trajectory as being generated by a set of Switching Linear Dynamical Systems and inferring a meaningful segmentation by inspecting the transition points characterized by the switching dynamics. With respect to the learning, a nonparametric Bayesian approach is employed utilizing a Gibbs sampler.
|
Using movement primitive libraries is an effective means to enable robots to solve more complex tasks.
|
https://arxiv.org/abs/1806.06063v3
|
https://arxiv.org/pdf/1806.06063v3.pdf
| null |
[
"Maximilian Sieb",
"Matthias Schultheis",
"Sebastian Szelag",
"Rudolf Lioutikov",
"Jan Peters"
] |
[
"Segmentation"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/distributional-inclusion-vector-embedding-for
|
1710.00880
| null | null |
Distributional Inclusion Vector Embedding for Unsupervised Hypernymy Detection
|
Modeling hypernymy, such as poodle is-a dog, is an important generalization
aid to many NLP tasks, such as entailment, coreference, relation extraction,
and question answering. Supervised learning from labeled hypernym sources, such
as WordNet, limits the coverage of these models, which can be addressed by
learning hypernyms from unlabeled text. Existing unsupervised methods either do
not scale to large vocabularies or yield unacceptably poor accuracy. This paper
introduces distributional inclusion vector embedding (DIVE), a
simple-to-implement unsupervised method of hypernym discovery via per-word
non-negative vector embeddings which preserve the inclusion property of word
contexts in a low-dimensional and interpretable space. In experimental
evaluations more comprehensive than any previous literature of which we are
aware (evaluating on 11 datasets using multiple existing as well as newly
proposed scoring functions), we find that our method provides up to double the
precision of previous unsupervised embeddings, and the highest average
performance, using a much more compact word representation, and yielding many
new state-of-the-art results.
| null |
http://arxiv.org/abs/1710.00880v3
|
http://arxiv.org/pdf/1710.00880v3.pdf
|
NAACL 2018 6
|
[
"Haw-Shiuan Chang",
"ZiYun Wang",
"Luke Vilnis",
"Andrew McCallum"
] |
[
"Hypernym Discovery",
"Question Answering",
"Relation Extraction"
] | 2017-10-02T00:00:00 |
https://aclanthology.org/N18-1045
|
https://aclanthology.org/N18-1045.pdf
|
distributional-inclusion-vector-embedding-for-1
| null |
[] |
https://paperswithcode.com/paper/a-unified-particle-optimization-framework-for
|
1805.11659
| null | null |
A Unified Particle-Optimization Framework for Scalable Bayesian Sampling
|
There has been recent interest in developing scalable Bayesian sampling
methods such as stochastic gradient MCMC (SG-MCMC) and Stein variational
gradient descent (SVGD) for big-data analysis. A standard SG-MCMC algorithm
simulates samples from a discrete-time Markov chain to approximate a target
distribution, thus samples could be highly correlated, an undesired property
for SG-MCMC. By contrast, SVGD directly optimizes a set of particles to
approximate a target distribution, and is thus able to obtain good
approximations with far fewer samples. In this paper, we propose a
principled particle-optimization framework based on Wasserstein gradient flows
to unify SG-MCMC and SVGD, and to allow new algorithms to be developed. Our
framework interprets SG-MCMC as particle optimization on the space of
probability measures, revealing a strong connection between SG-MCMC and SVGD.
The key component of our framework is several particle-approximate techniques
to efficiently solve the original partial differential equations on the space
of probability measures. Extensive experiments on both synthetic data and deep
neural networks demonstrate the effectiveness and efficiency of our framework
for scalable Bayesian sampling.
| null |
http://arxiv.org/abs/1805.11659v2
|
http://arxiv.org/pdf/1805.11659v2.pdf
| null |
[
"Changyou Chen",
"Ruiyi Zhang",
"Wenlin Wang",
"Bai Li",
"Liqun Chen"
] |
[] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/lstms-exploit-linguistic-attributes-of-data
|
1805.11653
| null | null |
LSTMs Exploit Linguistic Attributes of Data
|
While recurrent neural networks have found success in a variety of natural
language processing applications, they are general models of sequential data.
We investigate how the properties of natural language data affect an LSTM's
ability to learn a nonlinguistic task: recalling elements from its input. We
find that models trained on natural language data are able to recall tokens
from much longer sequences than models trained on non-language sequential data.
Furthermore, we show that the LSTM learns to solve the memorization task by
explicitly using a subset of its neurons to count timesteps in the input. We
hypothesize that the patterns and structure in natural language data enable
LSTMs to learn by providing approximate ways of reducing loss, but
understanding the effect of different training data on the learnability of
LSTMs remains an open question.
| null |
http://arxiv.org/abs/1805.11653v2
|
http://arxiv.org/pdf/1805.11653v2.pdf
|
WS 2018 7
|
[
"Nelson F. Liu",
"Omer Levy",
"Roy Schwartz",
"Chenhao Tan",
"Noah A. Smith"
] |
[
"Memorization",
"Open-Ended Question Answering"
] | 2018-05-29T00:00:00 |
https://aclanthology.org/W18-3024
|
https://aclanthology.org/W18-3024.pdf
|
lstms-exploit-linguistic-attributes-of-data-1
| null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
        "description": "**Activation functions** are functions applied in neural networks and other machine learning models to introduce non-linearity into a model's outputs. Below you can find a continuously updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
        "description": "**Activation functions** are functions applied in neural networks and other machine learning models to introduce non-linearity into a model's outputs. Below you can find a continuously updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/teaching-meaningful-explanations
|
1805.11648
| null | null |
Teaching Meaningful Explanations
|
The adoption of machine learning in high-stakes applications such as
healthcare and law has lagged in part because predictions are not accompanied
by explanations comprehensible to the domain user, who often holds the ultimate
responsibility for decisions and outcomes. In this paper, we propose an
approach to generate such explanations in which training data is augmented to
include, in addition to features and labels, explanations elicited from domain
users. A joint model is then learned to produce both labels and explanations
from the input features. This simple idea ensures that explanations are
tailored to the complexity expectations and domain knowledge of the consumer.
Evaluation spans multiple modeling techniques on a game dataset, a (visual)
aesthetics dataset, a chemical odor dataset and a Melanoma dataset showing that
our approach is generalizable across domains and algorithms. Results
demonstrate that meaningful explanations can be reliably taught to machine
learning algorithms, and in some cases, also improve modeling accuracy.
| null |
http://arxiv.org/abs/1805.11648v2
|
http://arxiv.org/pdf/1805.11648v2.pdf
| null |
[
"Noel C. F. Codella",
"Michael Hind",
"Karthikeyan Natesan Ramamurthy",
"Murray Campbell",
"Amit Dhurandhar",
"Kush R. Varshney",
"Dennis Wei",
"Aleksandra Mojsilovic"
] |
[
"BIG-bench Machine Learning"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/high-dimensional-robust-sparse-regression
|
1805.11643
| null | null |
High Dimensional Robust Sparse Regression
|
We provide a novel -- and to the best of our knowledge, the first -- algorithm for high dimensional sparse regression with constant fraction of corruptions in explanatory and/or response variables. Our algorithm recovers the true sparse parameters with sub-linear sample complexity, in the presence of a constant fraction of arbitrary corruptions. Our main contribution is a robust variant of Iterative Hard Thresholding. Using this, we provide accurate estimators: when the covariance matrix in sparse regression is identity, our error guarantee is near information-theoretically optimal. We then deal with robust sparse regression with unknown structured covariance matrix. We propose a filtering algorithm which consists of a novel randomized outlier removal technique for robust sparse mean estimation that may be of interest in its own right: the filtering algorithm is flexible enough to deal with unknown covariance. Also, it is orderwise more efficient computationally than the ellipsoid algorithm. Using sub-linear sample complexity, our algorithm achieves the best known (and first) error guarantee. We demonstrate the effectiveness on large-scale sparse regression problems with arbitrary corruptions.
| null |
https://arxiv.org/abs/1805.11643v3
|
https://arxiv.org/pdf/1805.11643v3.pdf
| null |
[
"Liu Liu",
"Yanyao Shen",
"Tianyang Li",
"Constantine Caramanis"
] |
[
"regression",
"Vocal Bursts Intensity Prediction"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/why-are-big-data-matrices-approximately-low
|
1705.07474
| null | null |
Why are Big Data Matrices Approximately Low Rank?
|
Matrices of (approximate) low rank are pervasive in data science, appearing
in recommender systems, movie preferences, topic models, medical records, and
genomics. While there is a vast literature on how to exploit low rank structure
in these datasets, there is less attention on explaining why the low rank
structure appears in the first place. Here, we explain the effectiveness of low
rank models in data science by considering a simple generative model for these
matrices: we suppose that each row or column is associated to a (possibly high
dimensional) bounded latent variable, and entries of the matrix are generated
by applying a piecewise analytic function to these latent variables. These
matrices are in general full rank. However, we show that we can approximate
every entry of an $m \times n$ matrix drawn from this model to within a fixed
absolute error by a low rank matrix whose rank grows as $\mathcal O(\log(m +
n))$. Hence any sufficiently large matrix from such a latent variable model can
be approximated, up to a small entrywise error, by a low rank matrix.
| null |
http://arxiv.org/abs/1705.07474v2
|
http://arxiv.org/pdf/1705.07474v2.pdf
| null |
[
"Madeleine Udell",
"Alex Townsend"
] |
[
"Recommendation Systems",
"Topic Models"
] | 2017-05-21T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/deep-learning-under-privileged-information
|
1805.11614
| null | null |
Deep Learning under Privileged Information Using Heteroscedastic Dropout
|
Unlike machines, humans learn through rapid, abstract model-building. The
role of a teacher is not simply to hammer home right or wrong answers, but
rather to provide intuitive comments, comparisons, and explanations to a pupil.
This is what the Learning Under Privileged Information (LUPI) paradigm
endeavors to model by utilizing extra knowledge only available during training.
We propose a new LUPI algorithm specifically designed for Convolutional Neural
Networks (CNNs) and Recurrent Neural Networks (RNNs). We propose to use a
heteroscedastic dropout (i.e. dropout with a varying variance) and make the
variance of the dropout a function of privileged information. Intuitively, this
corresponds to using the privileged information to control the uncertainty of
the model output. We perform experiments using CNNs and RNNs for the tasks of
image classification and machine translation. Our method significantly
increases the sample efficiency during learning, resulting in higher accuracy
with a large margin when the number of training examples is limited. We also
theoretically justify the gains in sample efficiency by providing a
generalization error bound decreasing with $O(\frac{1}{n})$, where $n$ is the
number of training examples, in an oracle case.
|
This is what the Learning Under Privileged Information (LUPI) paradigm endeavors to model by utilizing extra knowledge only available during training.
|
http://arxiv.org/abs/1805.11614v1
|
http://arxiv.org/pdf/1805.11614v1.pdf
|
CVPR 2018 6
|
[
"John Lambert",
"Ozan Sener",
"Silvio Savarese"
] |
[
"Deep Learning",
"image-classification",
"Image Classification",
"Machine Translation",
"Translation"
] | 2018-05-29T00:00:00 |
http://openaccess.thecvf.com/content_cvpr_2018/html/Lambert_Deep_Learning_Under_CVPR_2018_paper.html
|
http://openaccess.thecvf.com/content_cvpr_2018/papers/Lambert_Deep_Learning_Under_CVPR_2018_paper.pdf
|
deep-learning-under-privileged-information-1
| null |
[
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
}
] |
https://paperswithcode.com/paper/recurrent-residual-convolutional-neural
|
1802.06955
| null | null |
Recurrent Residual Convolutional Neural Network based on U-Net (R2U-Net) for Medical Image Segmentation
|
Deep learning (DL) based semantic segmentation methods have been providing
state-of-the-art performance in the last few years. More specifically, these
techniques have been successfully applied to medical image classification,
segmentation, and detection tasks. One deep learning technique, U-Net, has
become one of the most popular for these applications. In this paper, we
propose a Recurrent Convolutional Neural Network (RCNN) based on U-Net as well
as a Recurrent Residual Convolutional Neural Network (RRCNN) based on U-Net
models, which are named RU-Net and R2U-Net respectively. The proposed models
utilize the power of U-Net, Residual Network, as well as RCNN. There are
several advantages of these proposed architectures for segmentation tasks.
First, a residual unit helps when training deep architecture. Second, feature
accumulation with recurrent residual convolutional layers ensures better
feature representation for segmentation tasks. Third, it allows us to design a
better U-Net architecture with the same number of network parameters but better
performance for medical image segmentation. The proposed models are tested on
three benchmark datasets: blood vessel segmentation in retina images,
skin cancer segmentation, and lung lesion segmentation. The experimental
results show superior performance on segmentation tasks compared to equivalent
models including U-Net and residual U-Net (ResU-Net).
|
In this paper, we propose a Recurrent Convolutional Neural Network (RCNN) based on U-Net as well as a Recurrent Residual Convolutional Neural Network (RRCNN) based on U-Net models, which are named RU-Net and R2U-Net respectively.
|
http://arxiv.org/abs/1802.06955v5
|
http://arxiv.org/pdf/1802.06955v5.pdf
| null |
[
"Md Zahangir Alom",
"Mahmudul Hasan",
"Chris Yakopcic",
"Tarek M. Taha",
"Vijayan K. Asari"
] |
[
"image-classification",
"Image Classification",
"Image Segmentation",
"Lesion Segmentation",
"Lung Nodule Segmentation",
"Medical Image Classification",
"Medical Image Segmentation",
"Retinal Vessel Segmentation",
"Segmentation",
"Semantic Segmentation",
"Skin Cancer Segmentation"
] | 2018-02-20T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/densenet.py#L113",
"description": "A **Concatenated Skip Connection** is a type of skip connection that seeks to reuse features by concatenating them to new layers, allowing more information to be retained from previous layers of the network. This contrasts with say, residual connections, where element-wise summation is used instead to incorporate information from previous layers. This type of skip connection is prominently used in DenseNets (and also Inception networks), which the Figure to the right illustrates.",
"full_name": "Concatenated Skip Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Concatenated Skip Connection",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
      "description": "**Rectified Linear Units**, or **ReLUs**, are a type of activation function that is linear in the positive dimension and zero in the negative dimension:\r\n\r\n$$f\\left(x\\right) = \\max\\left(0, x\\right)$$\r\n\r\nThe kink in the function is the source of the non-linearity. Linearity in the positive dimension helps prevent the gradient saturation seen with [sigmoid activations](https://paperswithcode.com/method/sigmoid-activation), although the gradient is zero for half of the real line.",
      "full_name": "Rectified Linear Units",
"introduced_year": 2000,
"main_collection": {
"area": "General",
        "description": "**Activation functions** are functions applied in neural networks and other machine learning models to introduce non-linearity into a model's outputs. Below you can find a continuously updating list of activation functions.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/milesial/Pytorch-UNet/blob/67bf11b4db4c5f2891bd7e8e7f58bcde8ee2d2db/unet/unet_model.py#L8",
"description": "**U-Net** is an architecture for semantic segmentation. It consists of a contracting path and an expansive path. The contracting path follows the typical architecture of a convolutional network. It consists of the repeated application of two 3x3 convolutions (unpadded convolutions), each followed by a rectified linear unit ([ReLU](https://paperswithcode.com/method/relu)) and a 2x2 [max pooling](https://paperswithcode.com/method/max-pooling) operation with stride 2 for downsampling. At each downsampling step we double the number of feature channels. Every step in the expansive path consists of an upsampling of the feature map followed by a 2x2 [convolution](https://paperswithcode.com/method/convolution) (“up-convolution”) that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU. The cropping is necessary due to the loss of border pixels in every convolution. At the final layer a [1x1 convolution](https://paperswithcode.com/method/1x1-convolution) is used to map each 64-component feature vector to the desired number of classes. In total the network has 23 convolutional layers.\r\n\r\n[Original MATLAB Code](https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/u-net-release-2015-10-02.tar.gz)",
"full_name": "U-Net",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Semantic Segmentation Models** are a class of methods that address the task of semantically segmenting an image into different object classes. Below you can find a continuously updating list of semantic segmentation models. ",
"name": "Semantic Segmentation Models",
"parent": null
},
"name": "U-Net",
"source_title": "U-Net: Convolutional Networks for Biomedical Image Segmentation",
"source_url": "http://arxiv.org/abs/1505.04597v1"
}
] |
https://paperswithcode.com/paper/semantically-informed-distance-and-similarity
|
1805.11611
| null | null |
Semantically-informed distance and similarity measures for paraphrase plagiarism identification
|
Paraphrase plagiarism identification represents a very complex task given
that plagiarized texts are intentionally modified through several rewording
techniques. Accordingly, this paper introduces two new measures for evaluating
the relatedness of two given texts: a semantically-informed similarity measure
and a semantically-informed edit distance. Both measures are able to extract
semantic information from either an external resource or a distributed
representation of words, resulting in informative features for training a
supervised classifier for detecting paraphrase plagiarism. Obtained results
indicate that the proposed metrics are consistently good in detecting different
types of paraphrase plagiarism. In addition, results are very competitive
against state-of-the-art methods, while having the advantage of being a much
simpler but equally effective solution.
| null |
http://arxiv.org/abs/1805.11611v1
|
http://arxiv.org/pdf/1805.11611v1.pdf
| null |
[
"Miguel A. Álvarez-Carmona",
"Marc Franco-Salvador",
"Esaú Villatoro-Tello",
"Manuel Montes-y-Gómez",
"Paolo Rosso",
"Luis Villaseñor-Pineda"
] |
[] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/automatic-identification-of-arabic
|
1805.11603
| null | null |
Automatic Identification of Arabic expressions related to future events in Lebanon's economy
|
In this paper, we propose a method to automatically identify future events in
Lebanon's economy from Arabic texts. Challenges are threefold: first, we need
to build a corpus of Arabic texts that covers Lebanon's economy; second, we
need to study how future events are expressed linguistically in these texts;
and third, we need to automatically identify the relevant textual segments
accordingly. We validate this method on a corpus constructed from the web
and show that it achieves very promising results. To do so, we use SLCSAS,
a system for semantic analysis based on the Contextual Explorer method, and the
"AlKhalil Morpho Sys" system for morpho-syntactic analysis.
| null |
http://arxiv.org/abs/1805.11603v1
|
http://arxiv.org/pdf/1805.11603v1.pdf
| null |
[
"Moustafa Al-Hajj",
"Amani Sabra"
] |
[] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/stable-recurrent-models
|
1805.10369
| null |
Hygxb2CqKm
|
Stable Recurrent Models
|
Stability is a fundamental property of dynamical systems, yet to this date it
has had little bearing on the practice of recurrent neural networks. In this
work, we conduct a thorough investigation of stable recurrent models.
Theoretically, we prove stable recurrent neural networks are well approximated
by feed-forward networks for the purpose of both inference and training by
gradient descent. Empirically, we demonstrate stable recurrent models often
perform as well as their unstable counterparts on benchmark sequence tasks.
Taken together, these findings shed light on the effective power of recurrent
networks and suggest much of sequence learning happens, or can be made to
happen, in the stable regime. Moreover, our results help to explain why in many
cases practitioners succeed in replacing recurrent models by feed-forward
models.
| null |
http://arxiv.org/abs/1805.10369v4
|
http://arxiv.org/pdf/1805.10369v4.pdf
|
ICLR 2019 5
|
[
"John Miller",
"Moritz Hardt"
] |
[] | 2018-05-25T00:00:00 |
https://openreview.net/forum?id=Hygxb2CqKm
|
https://openreview.net/pdf?id=Hygxb2CqKm
|
stable-recurrent-models-1
| null |
[] |
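One common way to keep a recurrent model in the stable regime studied here is to constrain the spectral norm of the recurrent weight matrix below 1, which guarantees stability for a linear RNN. The projection below is a minimal sketch under that assumption; the `radius` value is an illustrative choice, not taken from the paper.

```python
import numpy as np

def project_spectral_norm(W, radius=0.95):
    """Project a recurrent weight matrix onto {W : ||W||_2 <= radius}
    by clipping its singular values. A linear RNN h_t = W h_{t-1} + U x_t
    is stable when the spectral norm of W is below 1."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s = np.minimum(s, radius)  # shrink only the too-large singular values
    return U @ np.diag(s) @ Vt

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))          # almost surely unstable at this scale
W_stable = project_spectral_norm(W)
```

Applying this projection after each gradient step yields a projected-gradient training scheme; matrices already inside the ball are left unchanged.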
https://paperswithcode.com/paper/adapternet-learning-input-transformation-for
|
1805.11601
| null | null |
AdapterNet - learning input transformation for domain adaptation
|
Deep neural networks have demonstrated impressive performance in various
machine learning tasks. However, they are notoriously sensitive to changes in
data distribution. Often, even a slight change in the distribution can lead to
drastic performance reduction. Artificially augmenting the data may help to
some extent, but in most cases, fails to achieve model invariance to the data
distribution. Some examples where this sub-class of domain adaptation can be
valuable are various imaging modalities such as thermal imaging, X-ray,
ultrasound, and MRI, where changes in acquisition parameters or acquisition
device manufacturer will result in a different representation of the same
input. Our work shows that standard fine-tuning fails to adapt the model in
certain important cases. We propose a novel method of adapting to a new data
source, and demonstrate near-perfect adaptation on a customized ImageNet
benchmark. Moreover, our method does not require any samples from the original
data set; it is completely explainable and can be tailored to the task.
| null |
http://arxiv.org/abs/1805.11601v2
|
http://arxiv.org/pdf/1805.11601v2.pdf
| null |
[
"Alon Hazan",
"Yoel Shoshan",
"Daniel Khapun",
"Roy Aladjem",
"Vadim Ratner"
] |
[
"Domain Adaptation"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/polyglot-semantic-role-labeling
|
1805.11598
| null | null |
Polyglot Semantic Role Labeling
|
Previous approaches to multilingual semantic dependency parsing treat
languages independently, without exploiting the similarities between semantic
structures across languages. We experiment with a new approach where we combine
resources from a pair of languages in the CoNLL 2009 shared task to build a
polyglot semantic role labeler. Notwithstanding the absence of parallel data,
and the dissimilarity in annotations between languages, our approach results in
an improvement in SRL performance on multiple languages over a monolingual
baseline. Analysis of the polyglot model shows it to be advantageous in
lower-resource settings.
| null |
http://arxiv.org/abs/1805.11598v1
|
http://arxiv.org/pdf/1805.11598v1.pdf
|
ACL 2018 7
|
[
"Phoebe Mulcaire",
"Swabha Swayamdipta",
"Noah Smith"
] |
[
"Dependency Parsing",
"Semantic Dependency Parsing",
"Semantic Role Labeling"
] | 2018-05-29T00:00:00 |
https://aclanthology.org/P18-2106
|
https://aclanthology.org/P18-2106.pdf
|
polyglot-semantic-role-labeling-1
| null |
[] |
https://paperswithcode.com/paper/deep-neural-networks-for-swept-volume
|
1805.11597
| null | null |
Deep Neural Networks for Swept Volume Prediction Between Configurations
|
Swept Volume (SV), the volume displaced by an object when it is moving along
a trajectory, is considered a useful metric for motion planning. First, SV has
been used to identify collisions along a trajectory, because it directly
measures the amount of space required for an object to move. Second, in
sampling-based motion planning, SV is an ideal distance metric, because it
correlates to the likelihood of success of the expensive local planning step
between two sampled configurations. However, in both of these applications,
traditional SV algorithms are too computationally expensive for efficient
motion planning. In this work, we train Deep Neural Networks (DNNs) to learn
the size of SV for specific robot geometries. Results for two robots, a 6
degree of freedom (DOF) rigid body and a 7 DOF fixed-based manipulator,
indicate that the network estimates are very close to the true size of the SV and
are more than 1500 times faster than a state-of-the-art SV estimation algorithm.
| null |
http://arxiv.org/abs/1805.11597v1
|
http://arxiv.org/pdf/1805.11597v1.pdf
| null |
[
"Hao-Tien Lewis Chiang",
"Aleksandra Faust",
"Lydia Tapia"
] |
[
"Motion Planning"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/adversarial-noise-attacks-of-deep-learning
|
1805.11596
| null | null |
Adversarial Noise Attacks of Deep Learning Architectures -- Stability Analysis via Sparse Modeled Signals
|
Despite their impressive performance, deep convolutional neural networks (CNNs) have been shown to be sensitive to small adversarial perturbations. These nuisances, which one can barely notice, are powerful enough to fool sophisticated and well-performing classifiers, leading to ridiculous misclassification results. In this paper we analyze the stability of state-of-the-art deep-learning classification machines to adversarial perturbations, where we assume that the signals belong to the (possibly multi-layer) sparse representation model. We start with convolutional sparsity and then proceed to its multi-layered version, which is tightly connected to CNNs. Our analysis links the stability of the classification under noise to the underlying structure of the signal, quantified by the sparsity of its representation under a fixed dictionary. In addition, we offer similar stability theorems for two practical pursuit algorithms, which are posed as two different deep-learning architectures - the layered Thresholding and the layered Basis Pursuit. Our analysis establishes the better robustness of the latter to adversarial attacks. We corroborate these theoretical results by numerical experiments on three datasets: MNIST, CIFAR-10 and CIFAR-100.
| null |
https://arxiv.org/abs/1805.11596v3
|
https://arxiv.org/pdf/1805.11596v3.pdf
| null |
[
"Yaniv Romano",
"Aviad Aberdam",
"Jeremias Sulam",
"Michael Elad"
] |
[
"General Classification"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/observe-and-look-further-achieving-consistent
|
1805.11593
| null | null |
Observe and Look Further: Achieving Consistent Performance on Atari
|
Despite significant advances in the field of deep Reinforcement Learning
(RL), today's algorithms still fail to learn human-level policies consistently
over a set of diverse tasks such as Atari 2600 games. We identify three key
challenges that any algorithm needs to master in order to perform well on all
games: processing diverse reward distributions, reasoning over long time
horizons, and exploring efficiently. In this paper, we propose an algorithm
that addresses each of these challenges and is able to learn human-level
policies on nearly all Atari games. A new transformed Bellman operator allows
our algorithm to process rewards of varying densities and scales; an auxiliary
temporal consistency loss allows us to train stably using a discount factor of
$\gamma = 0.999$ (instead of $\gamma = 0.99$) extending the effective planning
horizon by an order of magnitude; and we ease the exploration problem by using
human demonstrations that guide the agent towards rewarding states. When tested
on a set of 42 Atari games, our algorithm exceeds the performance of an average
human on 40 games using a common set of hyperparameters. Furthermore, it is
the first deep RL algorithm to solve the first level of Montezuma's Revenge.
|
Despite significant advances in the field of deep Reinforcement Learning (RL), today's algorithms still fail to learn human-level policies consistently over a set of diverse tasks such as Atari 2600 games.
|
http://arxiv.org/abs/1805.11593v1
|
http://arxiv.org/pdf/1805.11593v1.pdf
| null |
[
"Tobias Pohlen",
"Bilal Piot",
"Todd Hester",
"Mohammad Gheshlaghi Azar",
"Dan Horgan",
"David Budden",
"Gabriel Barth-Maron",
"Hado van Hasselt",
"John Quan",
"Mel Večerík",
"Matteo Hessel",
"Rémi Munos",
"Olivier Pietquin"
] |
[
"Atari Games",
"Deep Reinforcement Learning",
"Montezuma's Revenge",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
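The transformed Bellman operator relies on an invertible squashing function applied to Bellman targets so that rewards of very different densities and scales can be processed. A minimal sketch of the usual choice h(x) = sign(x)(sqrt(|x|+1) - 1) + eps*x and its closed-form inverse follows; the value eps = 1e-2 is an assumption here.

```python
import math

EPS = 1e-2  # small additive term keeps h invertible and roughly linear near 0

def h(x):
    """Squashing function used with the transformed Bellman operator:
    h(x) = sign(x) * (sqrt(|x| + 1) - 1) + eps * x."""
    return math.copysign(1.0, x) * (math.sqrt(abs(x) + 1.0) - 1.0) + EPS * x

def h_inv(x):
    """Closed-form inverse of h, obtained by solving the quadratic in
    sqrt(|original| + 1)."""
    return math.copysign(1.0, x) * (
        ((math.sqrt(1.0 + 4.0 * EPS * (abs(x) + 1.0 + EPS)) - 1.0)
         / (2.0 * EPS)) ** 2
        - 1.0
    )
```

Targets are squashed with `h` before the TD update and unsquashed with `h_inv` when bootstrapping, so large rewards no longer need per-game clipping.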
https://paperswithcode.com/paper/playing-hard-exploration-games-by-watching
|
1805.11592
| null | null |
Playing hard exploration games by watching YouTube
|
Deep reinforcement learning methods traditionally struggle with tasks where
environment rewards are particularly sparse. One successful method of guiding
exploration in these domains is to imitate trajectories provided by a human
demonstrator. However, these demonstrations are typically collected under
artificial conditions, i.e. with access to the agent's exact environment setup
and the demonstrator's action and reward trajectories. Here we propose a
two-stage method that overcomes these limitations by relying on noisy,
unaligned footage without access to such data. First, we learn to map unaligned
videos from multiple sources to a common representation using self-supervised
objectives constructed over both time and modality (i.e. vision and sound).
Second, we embed a single YouTube video in this representation to construct a
reward function that encourages an agent to imitate human gameplay. This method
of one-shot imitation allows our agent to convincingly exceed human-level
performance on the infamously hard exploration games Montezuma's Revenge,
Pitfall! and Private Eye for the first time, even if the agent is not presented
with any environment rewards.
|
One successful method of guiding exploration in these domains is to imitate trajectories provided by a human demonstrator.
|
http://arxiv.org/abs/1805.11592v2
|
http://arxiv.org/pdf/1805.11592v2.pdf
|
NeurIPS 2018 12
|
[
"Yusuf Aytar",
"Tobias Pfaff",
"David Budden",
"Tom Le Paine",
"Ziyu Wang",
"Nando de Freitas"
] |
[
"Deep Reinforcement Learning",
"Montezuma's Revenge",
"Reinforcement Learning"
] | 2018-05-29T00:00:00 |
http://papers.nips.cc/paper/7557-playing-hard-exploration-games-by-watching-youtube
|
http://papers.nips.cc/paper/7557-playing-hard-exploration-games-by-watching-youtube.pdf
|
playing-hard-exploration-games-by-watching-1
| null |
[] |
https://paperswithcode.com/paper/mirror-mirror-on-the-wall-whos-got-the
|
1805.11589
| null | null |
Mirror, Mirror, on the Wall, Who's Got the Clearest Image of Them All? - A Tailored Approach to Single Image Reflection Removal
|
Removing reflection artefacts from a single image is a problem of both
theoretical and practical interest, which still presents challenges because of
the massively ill-posed nature of the problem. In this work, we propose a
technique based on a novel optimisation problem. Firstly, we introduce a simple
user interaction scheme, which helps minimise information loss in
reflection-free regions. Secondly, we introduce an $H^2$ fidelity term, which
preserves fine detail while enforcing global colour similarity. We show that
this combination allows us to mitigate some major drawbacks of the existing
methods for reflection removal. We demonstrate, through numerical and visual
experiments, that our method is able to outperform the state-of-the-art methods
and compete with recent deep-learning approaches.
| null |
http://arxiv.org/abs/1805.11589v2
|
http://arxiv.org/pdf/1805.11589v2.pdf
| null |
[
"Daniel Heydecker",
"Georg Maierhofer",
"Angelica I. Aviles-Rivero",
"Qingnan Fan",
"Dong-Dong Chen",
"Carola-Bibiane Schönlieb",
"Sabine Süsstrunk"
] |
[
"All",
"Reflection Removal"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/focal-onset-seizure-prediction-using
|
1805.11576
| null | null |
Focal onset seizure prediction using convolutional networks
|
Objective: This work investigates the hypothesis that focal seizures can be
predicted using scalp electroencephalogram (EEG) data. Our first aim is to
learn features that distinguish between the interictal and preictal regions.
The second aim is to define a prediction horizon in which the prediction is as
accurate and as early as possible, clearly two competing objectives. Methods:
Convolutional filters on the wavelet transformation of the EEG signal are used
to define and learn quantitative signatures for each period: interictal,
preictal, and ictal. The optimal seizure prediction horizon is also learned
from the data as opposed to making an a priori assumption. Results:
Computational solutions to the optimization problem indicate a ten-minute
seizure prediction horizon. This result is verified by measuring
Kullback-Leibler divergence on the distributions of the automatically extracted
features. Conclusion: The results on the EEG database of 204 recordings
demonstrate that (i) the preictal phase transition occurs approximately ten
minutes before seizure onset, and (ii) the prediction results on the test set
are promising, with a sensitivity of 87.8% and a low false prediction rate of
0.142 FP/h. Our results significantly outperform a random predictor and other
seizure prediction algorithms. Significance: We demonstrate that a robust set
of features can be learned from scalp EEG that characterize the preictal state
of focal seizures.
| null |
http://arxiv.org/abs/1805.11576v1
|
http://arxiv.org/pdf/1805.11576v1.pdf
| null |
[
"Haidar Khan",
"Lara Marcuse",
"Madeline Fields",
"Kalina Swann",
"Bülent Yener"
] |
[
"EEG",
"Electroencephalogram (EEG)",
"Prediction",
"Seizure prediction"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/adversarial-regularizers-in-inverse-problems
|
1805.11572
| null | null |
Adversarial Regularizers in Inverse Problems
|
Inverse Problems in medical imaging and computer vision are traditionally
solved using purely model-based methods. Among these, variational regularization
models are one of the most popular approaches. We propose a new framework for
applying data-driven approaches to inverse problems, using a neural network as
a regularization functional. The network learns to discriminate between the
distribution of ground truth images and the distribution of unregularized
reconstructions. Once trained, the network is applied to the inverse problem by
solving the corresponding variational problem. Unlike other data-based
approaches for inverse problems, the algorithm can be applied even if only
unsupervised training data is available. Experiments demonstrate the potential
of the framework for denoising on the BSDS dataset and for computed tomography
reconstruction on the LIDC dataset.
|
Inverse Problems in medical imaging and computer vision are traditionally solved using purely model-based methods.
|
http://arxiv.org/abs/1805.11572v2
|
http://arxiv.org/pdf/1805.11572v2.pdf
|
NeurIPS 2018 12
|
[
"Sebastian Lunz",
"Ozan Öktem",
"Carola-Bibiane Schönlieb"
] |
[
"Denoising"
] | 2018-05-29T00:00:00 |
http://papers.nips.cc/paper/8070-adversarial-regularizers-in-inverse-problems
|
http://papers.nips.cc/paper/8070-adversarial-regularizers-in-inverse-problems.pdf
|
adversarial-regularizers-in-inverse-problems-1
| null |
[] |
https://paperswithcode.com/paper/multi-view-semantic-labeling-of-3d-point
|
1805.03994
| null | null |
Multi-View Semantic Labeling of 3D Point Clouds for Automated Plant Phenotyping
|
Semantic labeling of 3D point clouds is important for the derivation of 3D
models from real-world scenarios in several economic fields such as the building
industry, facility management, town planning, or heritage conservation. In
contrast to these most common applications, we describe in this study the
semantic labeling of 3D point clouds derived from plant organs by
high-precision scanning. Our approach is optimized for the task of plant
phenotyping with its very specific challenges and is employing a deep learning
framework. Thereby, we report important experiences concerning detailed
parameter initialization and optimization techniques. By evaluating our
approach with challenging datasets, we achieve state-of-the-art results without
the difficult and time-consuming feature engineering that is necessary in
traditional approaches to semantic labeling.
| null |
http://arxiv.org/abs/1805.03994v2
|
http://arxiv.org/pdf/1805.03994v2.pdf
| null |
[
"Bernhard Japes",
"Jennifer Mack",
"Florian Rist",
"Katja Herzog",
"Reinhard Töpfer",
"Volker Steinhage"
] |
[
"Feature Engineering",
"Management",
"Plant Phenotyping"
] | 2018-05-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/human-in-the-loop-interpretability-prior
|
1805.11571
| null | null |
Human-in-the-Loop Interpretability Prior
|
We often desire our models to be interpretable as well as accurate. Prior
work on optimizing models for interpretability has relied on easy-to-quantify
proxies for interpretability, such as sparsity or the number of operations
required. In this work, we optimize for interpretability by directly including
humans in the optimization loop. We develop an algorithm that minimizes the
number of user studies needed to find models that are both predictive and
interpretable, and we demonstrate our approach on several datasets. Our
human-subjects results show trends towards different proxy notions of
interpretability on different datasets, which suggests that different proxies
are preferred on different tasks.
| null |
http://arxiv.org/abs/1805.11571v2
|
http://arxiv.org/pdf/1805.11571v2.pdf
|
NeurIPS 2018 12
|
[
"Isaac Lage",
"Andrew Slavin Ross",
"Been Kim",
"Samuel J. Gershman",
"Finale Doshi-Velez"
] |
[] | 2018-05-29T00:00:00 |
http://papers.nips.cc/paper/8219-human-in-the-loop-interpretability-prior
|
http://papers.nips.cc/paper/8219-human-in-the-loop-interpretability-prior.pdf
|
human-in-the-loop-interpretability-prior-1
| null |
[
{
"code_snippet_url": null,
"description": "Please enter a description about the method here",
"full_name": "Interpretability",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Models** are methods that build representations of images for downstream tasks such as classification and object detection. The most popular subcategory are convolutional neural networks. Below you can find a continuously updated list of image models.",
"name": "Image Models",
"parent": null
},
"name": "Interpretability",
"source_title": "CAM: Causal additive models, high-dimensional order search and penalized regression",
"source_url": "http://arxiv.org/abs/1310.1533v2"
}
] |
https://paperswithcode.com/paper/on-gradient-regularizers-for-mmd-gans
|
1805.11565
| null | null |
On gradient regularizers for MMD GANs
|
We propose a principled method for gradient-based regularization of the critic of GAN-like models trained by adversarially optimizing the kernel of a Maximum Mean Discrepancy (MMD). We show that controlling the gradient of the critic is vital to having a sensible loss function, and devise a method to enforce exact, analytical gradient constraints at no additional cost compared to existing approximate techniques based on additive regularizers. The new loss function is provably continuous, and experiments show that it stabilizes and accelerates training, giving image generation models that outperform state-of-the-art methods on $160 \times 160$ CelebA and $64 \times 64$ unconditional ImageNet.
|
We propose a principled method for gradient-based regularization of the critic of GAN-like models trained by adversarially optimizing the kernel of a Maximum Mean Discrepancy (MMD).
|
https://arxiv.org/abs/1805.11565v5
|
https://arxiv.org/pdf/1805.11565v5.pdf
|
NeurIPS 2018 12
|
[
"Michael Arbel",
"Danica J. Sutherland",
"Mikołaj Bińkowski",
"Arthur Gretton"
] |
[
"Image Generation"
] | 2018-05-29T00:00:00 |
http://papers.nips.cc/paper/7904-on-gradient-regularizers-for-mmd-gans
|
http://papers.nips.cc/paper/7904-on-gradient-regularizers-for-mmd-gans.pdf
|
on-gradient-regularizers-for-mmd-gans-1
| null |
[] |
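MMD GANs compare real and generated samples through the Maximum Mean Discrepancy. As background, the standard unbiased estimator of MMD^2 with a Gaussian kernel can be sketched as follows; this is the generic estimator, not the paper's gradient-constrained critic, and the bandwidth `sigma` is an arbitrary choice.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """k(x, y) = exp(-||x - y||^2 / (2 sigma^2)) for all pairs of rows."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2_unbiased(X, Y, sigma=1.0):
    """Unbiased estimator of MMD^2 between samples X (m x d) and Y (n x d)."""
    m, n = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, sigma)
    Kyy = gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    # Drop the diagonal terms so the within-sample averages are unbiased.
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * Kxy.mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))
Y_same = rng.normal(0.0, 1.0, size=(200, 2))   # same distribution as X
Y_far = rng.normal(5.0, 1.0, size=(200, 2))    # shifted distribution

m_same = mmd2_unbiased(X, Y_same)
m_far = mmd2_unbiased(X, Y_far)
```

The estimate is near zero for samples from the same distribution and clearly positive for the shifted one; in an MMD GAN the kernel itself is parameterized by the critic and optimized adversarially.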
https://paperswithcode.com/paper/entrainment-profiles-comparison-by-gender
|
1805.11564
| null | null |
Entrainment profiles: Comparison by gender, role, and feature set
|
We examine prosodic entrainment in cooperative game dialogs for new feature
sets describing register, pitch accent shape, and rhythmic aspects of
utterances. For these as well as for established features we present
entrainment profiles to detect within- and across-dialog entrainment by the
speakers' gender and role in the game. It turned out that feature sets undergo
entrainment in different quantitative and qualitative ways, which can partly be
attributed to their different functions. Furthermore, interactions between
speaker gender and role (describer vs. follower) suggest gender-dependent
strategies in cooperative solution-oriented interactions: female describers
entrain most, male describers least. Our data suggests a slight advantage of
the latter strategy on task success.
| null |
http://arxiv.org/abs/1805.11564v1
|
http://arxiv.org/pdf/1805.11564v1.pdf
| null |
[
"Uwe D. Reichel",
"Štefan Beňuš",
"Katalin Mády"
] |
[] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/structured-disentangled-representations
|
1804.02086
| null | null |
Structured Disentangled Representations
|
Deep latent-variable models learn representations of high-dimensional data in
an unsupervised manner. A number of recent efforts have focused on learning
representations that disentangle statistically independent axes of variation by
introducing modifications to the standard objective function. These approaches
generally assume a simple diagonal Gaussian prior and as a result are not able
to reliably disentangle discrete factors of variation. We propose a two-level
hierarchical objective to control relative degree of statistical independence
between blocks of variables and individual variables within blocks. We derive
this objective as a generalization of the evidence lower bound, which allows us
to explicitly represent the trade-offs between mutual information between data
and representation, KL divergence between representation and prior, and
coverage of the support of the empirical data distribution. Experiments on a
variety of datasets demonstrate that our objective can not only disentangle
discrete variables, but that doing so also improves disentanglement of other
variables and, importantly, generalization even to unseen combinations of
factors.
| null |
http://arxiv.org/abs/1804.02086v4
|
http://arxiv.org/pdf/1804.02086v4.pdf
| null |
[
"Babak Esmaeili",
"Hao Wu",
"Sarthak Jain",
"Alican Bozkurt",
"N. Siddharth",
"Brooks Paige",
"Dana H. Brooks",
"Jennifer Dy",
"Jan-Willem van de Meent"
] |
[
"Disentanglement"
] | 2018-04-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/optimisation-and-illumination-of-a-real-world
|
1805.11555
| null | null |
Optimisation and Illumination of a Real-world Workforce Scheduling and Routing Application via Map-Elites
|
Workforce Scheduling and Routing Problems (WSRP) are very common in many
practical domains and usually have a number of objectives. Illumination
algorithms such as Map-Elites (ME) have recently gained traction in application
to {\em design} problems, in providing multiple diverse solutions as well as
illuminating the solution space in terms of user-defined characteristics, but
typically require significant computational effort to produce the solution
archive. We investigate whether ME can provide an effective approach to solving
WSRP, a {\em repetitive} problem in which solutions have to be produced quickly
and often. The goals of the paper are two-fold. The first is to evaluate
whether ME can provide solutions of competitive quality to an Evolutionary
Algorithm (EA) in terms of a single objective function, and the second to
examine its ability to provide a repertoire of solutions that maximise user
choice. We find that very small computational budgets favour the EA in terms of
quality, but ME outperforms the EA at larger budgets, provides a more diverse
array of solutions, and lends insight to the end-user.
| null |
http://arxiv.org/abs/1805.11555v1
|
http://arxiv.org/pdf/1805.11555v1.pdf
| null |
[
"Neil Urquhart",
"Emma Hart"
] |
[
"Scheduling"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/visually-grounded-situated-learning-in-neural
|
1805.11546
| null | null |
Like a Baby: Visually Situated Neural Language Acquisition
|
We examine the benefits of visual context in training neural language models to perform next-word prediction. A multi-modal neural architecture is introduced that outperforms its equivalent trained on language alone, with a 2\% decrease in perplexity, even when no visual context is available at test time. Fine-tuning the embeddings of a pre-trained state-of-the-art bidirectional language model (BERT) in the language modeling framework yields a 3.5\% improvement. The advantage of training with visual context when testing without it is robust across different languages (English, German and Spanish) and different models (GRU, LSTM, $\Delta$-RNN, as well as those that use BERT embeddings). Thus, language models perform better when they learn like a baby, i.e., in a multi-modal environment. This finding is compatible with the theory of situated cognition: language is inseparable from its physical context.
| null |
https://arxiv.org/abs/1805.11546v2
|
https://arxiv.org/pdf/1805.11546v2.pdf
|
ACL 2019 7
|
[
"Alexander G. Ororbia",
"Ankur Mali",
"Matthew A. Kelly",
"David Reitter"
] |
[
"Language Acquisition",
"Language Modeling",
"Language Modelling"
] | 2018-05-29T00:00:00 |
https://aclanthology.org/P19-1506
|
https://aclanthology.org/P19-1506.pdf
|
like-a-baby-visually-situated-neural-language
| null |
[
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L277",
"description": "**Sigmoid Activations** are a type of activation function for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{1}{\\left(1+\\exp\\left(-x\\right)\\right)}$$\r\n\r\nSome drawbacks of this activation that have been noted in the literature are: sharp damp gradients during backpropagation from deeper hidden layers to inputs, gradient saturation, and slow convergence.",
"full_name": "Sigmoid Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Sigmoid Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/96aaa311c0251d24decb9dc5da4957b7c590af6f/torch/nn/modules/activation.py#L329",
"description": "**Tanh Activation** is an activation function used for neural networks:\r\n\r\n$$f\\left(x\\right) = \\frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$$\r\n\r\nHistorically, the tanh function became preferred over the [sigmoid function](https://paperswithcode.com/method/sigmoid-activation) as it gave better performance for multi-layer neural networks. But it did not solve the vanishing gradient problem that sigmoids suffered, which was tackled more effectively with the introduction of [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nImage Source: [Junxi Feng](https://www.researchgate.net/profile/Junxi_Feng)",
"full_name": "Tanh Activation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Tanh Activation",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)",
"full_name": "Long Short-Term Memory",
"introduced_year": 1997,
"main_collection": {
"area": "Sequential",
"description": "",
"name": "Recurrent Neural Networks",
"parent": null
},
"name": "LSTM",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/lightly-supervised-representation-learning
|
1805.11545
| null | null |
Lightly-supervised Representation Learning with Global Interpretability
|
We propose a lightly-supervised approach for information extraction, in
particular named entity classification, which combines the benefits of
traditional bootstrapping, i.e., use of limited annotations and
interpretability of extraction patterns, with the robust learning approaches
proposed in representation learning. Our algorithm iteratively learns custom
embeddings for both the multi-word entities to be extracted and the patterns
that match them from a few example entities per category. We demonstrate that
this representation-based approach outperforms three other state-of-the-art
bootstrapping approaches on two datasets: CoNLL-2003 and OntoNotes.
Additionally, using these embeddings, our approach outputs a
globally-interpretable model consisting of a decision list, by ranking patterns
based on their proximity to the average entity embedding in a given class. We
show that this interpretable model performs close to our complete bootstrapping
model, proving that representation learning can be used to produce
interpretable models with small loss in performance.
| null |
http://arxiv.org/abs/1805.11545v1
|
http://arxiv.org/pdf/1805.11545v1.pdf
|
WS 2019 6
|
[
"Marco A. Valenzuela-Escárcega",
"Ajay Nagesh",
"Mihai Surdeanu"
] |
[
"Representation Learning"
] | 2018-05-29T00:00:00 |
https://aclanthology.org/W19-1504
|
https://aclanthology.org/W19-1504.pdf
|
lightly-supervised-representation-learning-1
| null |
[] |
https://paperswithcode.com/paper/faithful-inversion-of-generative-models-for
|
1712.00287
| null | null |
Faithful Inversion of Generative Models for Effective Amortized Inference
|
Inference amortization methods share information across multiple
posterior-inference problems, allowing each to be carried out more efficiently.
Generally, they require the inversion of the dependency structure in the
generative model, as the modeller must learn a mapping from observations to
distributions approximating the posterior. Previous approaches have involved
inverting the dependency structure in a heuristic way that fails to capture
these dependencies correctly, thereby limiting the achievable accuracy of the
resulting approximations. We introduce an algorithm for faithfully, and
minimally, inverting the graphical model structure of any generative model.
Such inverses have two crucial properties: (a) they do not encode any
independence assertions that are absent from the model and; (b) they are local
maxima for the number of true independencies encoded. We prove the correctness
of our approach and empirically show that the resulting minimally faithful
inverses lead to better inference amortization than existing heuristic
approaches.
|
Inference amortization methods share information across multiple posterior-inference problems, allowing each to be carried out more efficiently.
|
http://arxiv.org/abs/1712.00287v5
|
http://arxiv.org/pdf/1712.00287v5.pdf
|
NeurIPS 2018 12
|
[
"Stefan Webb",
"Adam Golinski",
"Robert Zinkov",
"N. Siddharth",
"Tom Rainforth",
"Yee Whye Teh",
"Frank Wood"
] |
[] | 2017-12-01T00:00:00 |
http://papers.nips.cc/paper/7570-faithful-inversion-of-generative-models-for-effective-amortized-inference
|
http://papers.nips.cc/paper/7570-faithful-inversion-of-generative-models-for-effective-amortized-inference.pdf
|
faithful-inversion-of-generative-models-for-1
| null |
[] |
https://paperswithcode.com/paper/forward-amortized-inference-for-likelihood
|
1805.11542
| null | null |
Forward Amortized Inference for Likelihood-Free Variational Marginalization
|
In this paper, we introduce a new form of amortized variational inference by
using the forward KL divergence in a joint-contrastive variational loss. The
resulting forward amortized variational inference is a likelihood-free method
as its gradient can be sampled without bias and without requiring any
evaluation of either the model joint distribution or its derivatives. We prove
that our new variational loss is optimized by the exact posterior marginals in
the fully factorized mean-field approximation, a property that is not shared
with the more conventional reverse KL inference. Furthermore, we show that
forward amortized inference can be easily marginalized over large families of
latent variables in order to obtain a marginalized variational posterior. We
consider two examples of variational marginalization. In our first example we
train a Bayesian forecaster for predicting a simplified chaotic model of
atmospheric convection. In the second example we train an amortized variational
approximation of a Bayesian optimal classifier by marginalizing over the model
space. The result is a powerful meta-classification network that can solve
arbitrary classification problems without further training.
| null |
http://arxiv.org/abs/1805.11542v1
|
http://arxiv.org/pdf/1805.11542v1.pdf
| null |
[
"Luca Ambrogioni",
"Umut Güçlü",
"Julia Berezutskaya",
"Eva W. P. van den Borne",
"Yağmur Güçlütürk",
"Max Hinne",
"Eric Maris",
"Marcel A. J. van Gerven"
] |
[
"General Classification",
"Variational Inference"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/variance-reduced-stochastic-learning-by
|
1708.01384
| null | null |
Variance-Reduced Stochastic Learning by Networked Agents under Random Reshuffling
|
A new amortized variance-reduced gradient (AVRG) algorithm was developed in
\cite{ying2017convergence}, which has constant storage requirement in
comparison to SAGA and balanced gradient computations in comparison to SVRG.
One key advantage of the AVRG strategy is its amenability to decentralized
implementations. In this work, we show how AVRG can be extended to the network
case where multiple learning agents are assumed to be connected by a graph
topology. In this scenario, each agent observes data that is spatially
distributed and all agents are only allowed to communicate with direct
neighbors. Moreover, the amount of data observed by the individual agents may
differ drastically. For such situations, the balanced gradient computation
property of AVRG becomes a real advantage in reducing idle time caused by
unbalanced local data storage requirements, which is characteristic of other
reduced-variance gradient algorithms. The resulting diffusion-AVRG algorithm is
shown to have linear convergence to the exact solution, and is much more memory
efficient than other alternative algorithms. In addition, we propose a
mini-batch strategy to balance the communication and computation efficiency for
diffusion-AVRG. When a proper batch size is employed, it is observed in
simulations that diffusion-AVRG is more computationally efficient than exact
diffusion or EXTRA while maintaining almost the same communication efficiency.
| null |
http://arxiv.org/abs/1708.01384v3
|
http://arxiv.org/pdf/1708.01384v3.pdf
| null |
[
"Kun Yuan",
"Bicheng Ying",
"Jiageng Liu",
"Ali H. Sayed"
] |
[] | 2017-08-04T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "SAGA is a method in the spirit of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient algorithms with fast linear convergence rates. SAGA improves on the theory behind SAG and SVRG, with better theoretical convergence rates, and has support for composite objectives where a proximal operator is used on the regulariser. Unlike SDCA, SAGA supports non-strongly convex problems directly, and is adaptive to any inherent strong convexity of the problem.",
"full_name": "SAGA",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Optimization",
"parent": null
},
"name": "SAGA",
"source_title": "SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives",
"source_url": "http://arxiv.org/abs/1407.0202v3"
}
] |
https://paperswithcode.com/paper/combinets-creativity-via-recombination-of
|
1802.03605
| null | null |
Combinets: Creativity via Recombination of Neural Networks
|
One of the defining characteristics of human creativity is the ability to
make conceptual leaps, creating something surprising from typical knowledge. In
comparison, deep neural networks often struggle to handle cases outside of
their training data, which is especially problematic for problems with limited
training data. Approaches exist to transfer knowledge from problems with
sufficient data to those with insufficient data, but they tend to require
additional training or a domain-specific method of transfer. We present a new
approach, conceptual expansion, that serves as a general representation for
reusing existing trained models to derive new models without backpropagation.
We evaluate our approach on few-shot variations of two tasks: image
classification and image generation, and outperform standard transfer learning
approaches.
| null |
http://arxiv.org/abs/1802.03605v4
|
http://arxiv.org/pdf/1802.03605v4.pdf
| null |
[
"Matthew Guzdial",
"Mark O. Riedl"
] |
[
"General Classification",
"image-classification",
"Image Classification",
"Image Generation",
"Transfer Learning"
] | 2018-02-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/couplenet-paying-attention-to-couples-with
|
1805.11535
| null | null |
CoupleNet: Paying Attention to Couples with Coupled Attention for Relationship Recommendation
|
Dating and romantic relationships not only play a huge role in our personal
lives but also collectively influence and shape society. Today, many romantic
partnerships originate from the Internet, signifying the importance of
technology and the web in modern dating. In this paper, we present a text-based
computational approach for estimating the relationship compatibility of two
users on social media. Unlike many previous works that propose reciprocal
recommender systems for online dating websites, we devise a distant supervision
heuristic to obtain real world couples from social platforms such as Twitter.
Our approach, the CoupleNet is an end-to-end deep learning based estimator that
analyzes the social profiles of two users and subsequently performs a
similarity match between the users. Intuitively, our approach performs both
user profiling and match-making within a unified end-to-end framework.
CoupleNet utilizes hierarchical recurrent neural models for learning
representations of user profiles and subsequently coupled attention mechanisms
to fuse information aggregated from two users. To the best of our knowledge,
our approach is the first data-driven deep learning approach for our novel
relationship recommendation problem. We benchmark our CoupleNet against several
machine learning and deep learning baselines. Experimental results show that
our approach outperforms all approaches significantly in terms of precision.
Qualitative analysis shows that our model is capable of also producing
explainable results to users.
| null |
http://arxiv.org/abs/1805.11535v1
|
http://arxiv.org/pdf/1805.11535v1.pdf
| null |
[
"Yi Tay",
"Anh Tuan Luu",
"Siu Cheung Hui"
] |
[
"Deep Learning",
"Recommendation Systems"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/airpred-a-flexible-r-package-implementing
|
1805.11534
| null | null |
airpred: A Flexible R Package Implementing Methods for Predicting Air Pollution
|
Fine particulate matter (PM$_{2.5}$) is one of the criteria air pollutants
regulated by the Environmental Protection Agency in the United States. There is
strong evidence that ambient exposure to (PM$_{2.5}$) increases risk of
mortality and hospitalization. Large scale epidemiological studies on the
health effects of PM$_{2.5}$ provide the necessary evidence base for lowering
the safety standards and inform regulatory policy. However, ambient monitors of
PM$_{2.5}$ (as well as monitors for other pollutants) are sparsely located
across the U.S., and therefore studies based only on the levels of PM$_{2.5}$
measured from the monitors would inevitably exclude large amounts of the
population. One approach to resolving this issue has been developing models to
predict local PM$_{2.5}$, NO$_2$, and ozone based on satellite, meteorological,
and land use data. This process typically relies on developing a prediction
model that requires large amounts of input data and is highly computationally
intensive to predict levels of air pollution in unmonitored areas. We have
developed a flexible R package that allows for environmental health researchers
to design and train spatio-temporal models capable of predicting multiple
pollutants, including PM$_{2.5}$. We utilize H2O, an open source big data
platform, to achieve both performance and scalability when used in conjunction
with cloud or cluster computing systems.
| null |
http://arxiv.org/abs/1805.11534v2
|
http://arxiv.org/pdf/1805.11534v2.pdf
| null |
[
"M. Benjamin Sabath",
"Qian Di",
"Danielle Braun",
"Joel Schwarz",
"Francesca Dominici",
"Christine Choirat"
] |
[] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/joint-spatial-angular-sparse-coding-for-dmri
|
1612.05846
| null | null |
Joint Spatial-Angular Sparse Coding for dMRI with Separable Dictionaries
|
Diffusion MRI (dMRI) provides the ability to reconstruct neuronal fibers in
the brain, $\textit{in vivo}$, by measuring water diffusion along angular
gradient directions in q-space. High angular resolution diffusion imaging
(HARDI) can produce better estimates of fiber orientation than the popularly
used diffusion tensor imaging, but the high number of samples needed to
estimate diffusivity requires longer patient scan times. To accelerate dMRI,
compressed sensing (CS) has been utilized by exploiting a sparse dictionary
representation of the data, discovered through sparse coding. The sparser the
representation, the fewer samples are needed to reconstruct a high resolution
signal with limited information loss, and so an important area of research has
focused on finding the sparsest possible representation of dMRI. Current
reconstruction methods however, rely on an angular representation $\textit{per
voxel}$ with added spatial regularization, and so, for non-zero signals, one is
required to have at least one non-zero coefficient per voxel. This means that
the global level of sparsity must be greater than the number of voxels. In
contrast, we propose a joint spatial-angular representation of dMRI that will
allow us to achieve levels of global sparsity that are below the number of
voxels. A major challenge, however, is the computational complexity of solving
a global sparse coding problem over large-scale dMRI. In this work, we present
novel adaptations of popular sparse coding algorithms that become better suited
for solving large-scale problems by exploiting spatial-angular separability.
Our experiments show that our method achieves significantly sparser
representations of HARDI than is possible by the state of the art.
| null |
http://arxiv.org/abs/1612.05846v3
|
http://arxiv.org/pdf/1612.05846v3.pdf
| null |
[
"Evan Schwab",
"René Vidal",
"Nicolas Charon"
] |
[
"compressed sensing",
"Diffusion MRI"
] | 2016-12-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/low-resolution-face-recognition-in-the-wild
|
1805.11529
| null | null |
On Low-Resolution Face Recognition in the Wild: Comparisons and New Techniques
|
Although face recognition systems have achieved impressive performance in
recent years, the low-resolution face recognition (LRFR) task remains
challenging, especially when the LR faces are captured under non-ideal
conditions, as is common in surveillance-based applications. Faces captured in
such conditions are often contaminated by blur, nonuniform lighting, and
nonfrontal face pose. In this paper, we analyze face recognition techniques
using data captured under low-quality conditions in the wild. We provide a
comprehensive analysis of experimental results for two of the most important
applications in real surveillance applications, and demonstrate practical
approaches to handle both cases that show promising performance. The following
three contributions are made: {\em (i)} we conduct experiments to evaluate
super-resolution methods for low-resolution face recognition; {\em (ii)} we
study face re-identification on various public face datasets including real
surveillance and low-resolution subsets of large-scale datasets, present a
baseline result for several deep learning based approaches, and improve them by
introducing a GAN pre-training approach and fully convolutional architecture;
and {\em (iii)} we explore low-resolution face identification by employing a
state-of-the-art supervised discriminative learning approach. Evaluations are
conducted on challenging portions of the SCFace and UCCSface datasets.
| null |
http://arxiv.org/abs/1805.11529v2
|
http://arxiv.org/pdf/1805.11529v2.pdf
| null |
[
"Pei Li",
"Loreto Prieto",
"Domingo Mery",
"Patrick Flynn"
] |
[
"Face Identification",
"Face Recognition",
"Super-Resolution"
] | 2018-05-29T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **GAN**, or **Generative Adversarial Network**, is a generative model that simultaneously trains two networks: a generative model $G$ that captures the data distribution, and a discriminative model $D$ that estimates the probability that a sample came from the training data rather than from $G$. The training procedure for $G$ is to maximize the probability of $D$ making a mistake, which corresponds to a minimax two-player game.",
            "full_name": "Generative Adversarial Network",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
            "name": "GAN",
"source_title": "Generative Adversarial Networks",
"source_url": "https://arxiv.org/abs/1406.2661v1"
}
] |
https://paperswithcode.com/paper/learning-to-transcribe-by-ear
|
1805.11526
| null | null |
Learning to Transcribe by Ear
|
Rethinking how to model polyphonic transcription formally, we frame it as a
reinforcement learning task. Such a task formulation encompasses the notion of
a musical agent and an environment containing an instrument as well as the
sound source to be transcribed. Within this conceptual framework, the
transcription process can be described as the agent interacting with the
instrument in the environment, and obtaining reward by playing along with what
it hears. Choosing from a discrete set of actions - the notes to play on its
instrument - the amount of reward the agent experiences depends on which notes
it plays and when. This process resembles how a human musician might approach
the task of transcription, and the satisfaction she achieves by closely
mimicking the sound source to transcribe on her instrument. Following a
discussion of the theoretical framework and the benefits of modelling the
problem in this way, we focus our attention on several practical considerations
and address the difficulties in training an agent to acceptable performance on
a set of tasks with increasing difficulty. We demonstrate promising results in
partially constrained environments.
| null |
http://arxiv.org/abs/1805.11526v1
|
http://arxiv.org/pdf/1805.11526v1.pdf
| null |
[
"Rainer Kelz",
"Gerhard Widmer"
] |
[
"Reinforcement Learning"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/face-recognition-in-low-quality-images-a
|
1805.11519
| null | null |
Face Recognition in Low Quality Images: A Survey
|
Low-resolution face recognition (LRFR) has received increasing attention over
the past few years. Its applications lie widely in the real-world environment
when high-resolution or high-quality images are hard to capture. One of the
biggest demands for LRFR technologies is video surveillance. As the number of
surveillance cameras in the city increases, the videos that are captured will
need to be processed automatically. However, those videos or images are usually
captured with large standoffs, arbitrary illumination condition, and diverse
angles of view. Faces in these images are generally small in size. Several
studies that addressed this problem employed techniques like super-resolution,
deblurring, or learning a relationship between different resolution domains. In
this paper, we provide a comprehensive review of approaches to low-resolution
face recognition in the past five years. First, a general problem definition is
given. Then, a systematic analysis of the works on this topic is presented
by category. In addition to describing the methods, we also focus on datasets
and experiment settings. We further address the related works on unconstrained
low-resolution face recognition and compare them with the result that use
synthetic low-resolution data. Finally, we summarize the general limitations
and speculate on priorities for future effort.
| null |
http://arxiv.org/abs/1805.11519v3
|
http://arxiv.org/pdf/1805.11519v3.pdf
| null |
[
"Pei Li",
"Loreto Prieto",
"Domingo Mery",
"Patrick Flynn"
] |
[
"Deblurring",
"Face Recognition",
"Super-Resolution",
"Survey"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/multi-reward-reinforced-summarization-with
|
1804.06451
| null | null |
Multi-Reward Reinforced Summarization with Saliency and Entailment
|
Abstractive text summarization is the task of compressing and rewriting a
long document into a short summary while maintaining saliency, directed logical
entailment, and non-redundancy. In this work, we address these three important
aspects of a good summary via a reinforcement learning approach with two novel
reward functions: ROUGESal and Entail, on top of a coverage-based baseline. The
ROUGESal reward modifies the ROUGE metric by up-weighting the salient
phrases/words detected via a keyphrase classifier. The Entail reward gives high
(length-normalized) scores to logically-entailed summaries using an entailment
classifier. Further, we show superior performance improvement when these
rewards are combined with traditional metric (ROUGE) based rewards, via our
novel and effective multi-reward approach of optimizing multiple rewards
simultaneously in alternate mini-batches. Our method achieves the new
state-of-the-art results (including human evaluation) on the CNN/Daily Mail
dataset as well as strong improvements in a test-only transfer setup on
DUC-2002.
| null |
http://arxiv.org/abs/1804.06451v2
|
http://arxiv.org/pdf/1804.06451v2.pdf
|
NAACL 2018 6
|
[
"Ramakanth Pasunuru",
"Mohit Bansal"
] |
[
"Abstractive Text Summarization",
"Reinforcement Learning",
"Text Summarization"
] | 2018-04-17T00:00:00 |
https://aclanthology.org/N18-2102
|
https://aclanthology.org/N18-2102.pdf
|
multi-reward-reinforced-summarization-with-1
| null |
[] |
https://paperswithcode.com/paper/hyperparameter-importance-across-datasets
|
1710.04725
| null | null |
Hyperparameter Importance Across Datasets
|
With the advent of automated machine learning, automated hyperparameter
optimization methods are by now routinely used in data mining. However, this
progress is not yet matched by equal progress on automatic analyses that yield
information beyond performance-optimizing hyperparameter settings. In this
work, we aim to answer the following two questions: Given an algorithm, what
are generally its most important hyperparameters, and what are typically good
values for these? We present methodology and a framework to answer these
questions based on meta-learning across many datasets. We apply this
methodology using the experimental meta-data available on OpenML to determine
the most important hyperparameters of support vector machines, random forests
and Adaboost, and to infer priors for all their hyperparameters. The results,
obtained fully automatically, provide a quantitative basis to focus efforts in
both manual algorithm design and in automated hyperparameter optimization. The
conducted experiments confirm that the hyperparameters selected by the proposed
method are indeed the most important ones and that the obtained priors also
lead to statistically significant improvements in hyperparameter optimization.
|
With the advent of automated machine learning, automated hyperparameter optimization methods are by now routinely used in data mining.
|
http://arxiv.org/abs/1710.04725v2
|
http://arxiv.org/pdf/1710.04725v2.pdf
| null |
[
"J. N. van Rijn",
"F. Hutter"
] |
[
"Hyperparameter Optimization",
"Meta-Learning"
] | 2017-10-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/classification-with-imperfect-training-labels
|
1805.11505
| null | null |
Classification with imperfect training labels
|
We study the effect of imperfect training data labels on the performance of classification methods. In a general setting, where the probability that an observation in the training dataset is mislabelled may depend on both the feature vector and the true label, we bound the excess risk of an arbitrary classifier trained with imperfect labels in terms of its excess risk for predicting a noisy label. This reveals conditions under which a classifier trained with imperfect labels remains consistent for classifying uncorrupted test data points. Furthermore, under stronger conditions, we derive detailed asymptotic properties for the popular $k$-nearest neighbour ($k$nn), support vector machine (SVM) and linear discriminant analysis (LDA) classifiers. One consequence of these results is that the knn and SVM classifiers are robust to imperfect training labels, in the sense that the rate of convergence of the excess risks of these classifiers remains unchanged; in fact, our theoretical and empirical results even show that in some cases, imperfect labels may improve the performance of these methods. On the other hand, the LDA classifier is shown to be typically inconsistent in the presence of label noise unless the prior probabilities of each class are equal. Our theoretical results are supported by a simulation study.
| null |
https://arxiv.org/abs/1805.11505v3
|
https://arxiv.org/pdf/1805.11505v3.pdf
| null |
[
"Timothy I. Cannings",
"Yingying Fan",
"Richard J. Samworth"
] |
[
"Classification",
"General Classification"
] | 2018-05-29T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Linear discriminant analysis** (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics, pattern recognition, and machine learning to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification.\r\n\r\nExtracted from [Wikipedia](https://en.wikipedia.org/wiki/Linear_discriminant_analysis)\r\n\r\n**Source**:\r\n\r\nPaper: [Linear Discriminant Analysis: A Detailed Tutorial](https://dx.doi.org/10.3233/AIC-170729)\r\n\r\nPublic version: [Linear Discriminant Analysis: A Detailed Tutorial](https://usir.salford.ac.uk/id/eprint/52074/)",
"full_name": "Linear Discriminant Analysis",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Dimensionality Reduction** methods transform data from a high-dimensional space into a low-dimensional space so that the low-dimensional space retains the most important properties of the original data. Below you can find a continuously updating list of dimensionality reduction methods.",
"name": "Dimensionality Reduction",
"parent": null
},
"name": "LDA",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **Support Vector Machine**, or **SVM**, is a non-parametric supervised learning model. For non-linear classification and regression, they utilise the kernel trick to map inputs to high-dimensional feature spaces. SVMs construct a hyper-plane or set of hyper-planes in a high or infinite dimensional space, which can be used for classification, regression or other tasks. Intuitively, a good separation is achieved by the hyper-plane that has the largest distance to the nearest training data points of any class (so-called functional margin), since in general the larger the margin the lower the generalization error of the classifier. The figure to the right shows the decision function for a linearly separable problem, with three samples on the margin boundaries, called “support vectors”. \r\n\r\nSource: [scikit-learn](https://scikit-learn.org/stable/modules/svm.html)",
"full_name": "Support Vector Machine",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "SVM",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/capturing-variabilities-from-computed
|
1805.11504
| null | null |
Capturing Variabilities from Computed Tomography Images with Generative Adversarial Networks
|
With the advent of Deep Learning (DL) techniques, especially Generative
Adversarial Networks (GANs), data augmentation and generation are quickly
evolving domains that have raised much interest recently. However, the DL
techniques are data demanding and since, medical data is not easily accessible,
they suffer from data insufficiency. To deal with this limitation, different
data augmentation techniques are used. Here, we propose a novel unsupervised
data-driven approach for data augmentation that can generate 2D Computed
Tomography (CT) images using a simple GAN. The generated CT images have good
global and local features of a real CT image and can be used to augment the
training datasets for effective learning. In this proof-of-concept study, we
show that our proposed solution using GANs is able to capture some of the
global and local CT variabilities. Our network is able to generate visually
realistic CT images and we aim to further enhance its output by scaling it to a
higher resolution and potentially from 2D to 3D.
| null |
http://arxiv.org/abs/1805.11504v1
|
http://arxiv.org/pdf/1805.11504v1.pdf
| null |
[
"Umair Javaid",
"John A. Lee"
] |
[
"Computed Tomography (CT)",
"Data Augmentation"
] | 2018-05-29T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "In today’s digital age, Dogecoin has become more than just a buzzword—it’s a revolutionary way to manage and invest your money. But just like with any advanced technology, users sometimes face issues that can be frustrating or even alarming. Whether you're dealing with a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're trying to recover a lost Dogecoin wallet, knowing where to get help is essential. That’s why the Dogecoin customer support number +1-833-534-1729 is your go-to solution for fast and reliable assistance.\r\n\r\nWhy You Might Need to Call the Dogecoin Customer Support Number +1-833-534-1729\r\nDogecoin operates on a decentralized network, which means there’s no single company or office that manages everything. However, platforms, wallets, and third-party services provide support to make your experience smoother. Calling +1-833-534-1729 can help you troubleshoot issues such as:\r\n\r\n1. Dogecoin Transaction Not Confirmed\r\nOne of the most common concerns is when a Dogecoin transaction is stuck or pending. This usually happens due to low miner fees or network congestion. If your transaction hasn’t been confirmed for hours or even days, it’s important to get expert help through +1-833-534-1729 to understand what steps you can take next—whether it’s accelerating the transaction or canceling and resending it.\r\n\r\n2. Dogecoin Wallet Not Showing Balance\r\nImagine opening your wallet and seeing a zero balance even though you know you haven’t made any transactions. A Dogecoin wallet not showing balance can be caused by a sync issue, outdated app version, or even incorrect wallet address. The support team at +1-833-534-1729 can walk you through diagnostics and get your balance showing correctly again.\r\n\r\n3. How to Recover Lost Dogecoin Wallet\r\nLost access to your wallet? That can feel like the end of the world, but all may not be lost. 
Knowing how to recover a lost Dogecoin wallet depends on the type of wallet you used—hardware, mobile, desktop, or paper. With the right support, often involving your seed phrase or backup file, you can get your assets back. Don’t waste time; dial +1-833-534-1729 for step-by-step recovery help.\r\n\r\n4. Dogecoin Deposit Not Received\r\nIf someone has sent you Dogecoin but it’s not showing up in your wallet, it could be a delay in network confirmation or a mistake in the receiving address. A Dogecoin deposit not received needs quick attention. Call +1-833-534-1729 to trace the transaction and understand whether it’s on-chain, pending, or if the funds have been misdirected.\r\n\r\n5. Dogecoin Transaction Stuck or Pending\r\nSometimes your Dogecoin transaction is stuck or pending due to low gas fees or heavy blockchain traffic. While this can resolve itself, in some cases it doesn't. Don’t stay in the dark. A quick call to +1-833-534-1729 can give you clarity and guidance on whether to wait, rebroadcast, or use a transaction accelerator.\r\n\r\n6. Dogecoin Wallet Recovery Phrase Issue\r\nYour 12 or 24-word Dogecoin wallet recovery phrase is the key to your funds. But what if it’s not working? If you’re seeing errors or your wallet can’t be restored, something might have gone wrong during the backup. Experts at +1-833-534-1729 can help verify the phrase, troubleshoot format issues, and guide you on next steps.\r\n\r\nHow the Dogecoin Support Number +1-833-534-1729 Helps You\r\nWhen you’re dealing with cryptocurrency issues, every second counts. 
Here’s why users trust +1-833-534-1729:\r\n\r\nLive Experts: Talk to real people who understand wallets, blockchain, and Dogecoin tech.\r\n\r\n24/7 Availability: Dogecoin doesn’t sleep, and neither should your support.\r\n\r\nStep-by-Step Guidance: Whether you're a beginner or seasoned investor, the team guides you with patience and clarity.\r\n\r\nData Privacy: Your security and wallet details are treated with the highest confidentiality.\r\n\r\nFAQs About Dogecoin Support and Wallet Issues\r\nQ1: Can Dogecoin support help me recover stolen BTC?\r\nA: While Dogecoin transactions are irreversible, support can help investigate, trace addresses, and advise on what to do next.\r\n\r\nQ2: My wallet shows zero balance after reinstalling. What do I do?\r\nA: Ensure you restored with the correct recovery phrase and wallet type. Call +1-833-534-1729 for assistance.\r\n\r\nQ3: What if I forgot my wallet password?\r\nA: Recovery depends on the wallet provider. Support can check if recovery options or tools are available.\r\n\r\nQ4: I sent BTC to the wrong address. Can support help?\r\nA: Dogecoin transactions are final. If the address is invalid, the transaction may fail. If it’s valid but unintended, unfortunately, it’s not reversible. Still, call +1-833-534-1729 to explore all possible solutions.\r\n\r\nQ5: Is this number official?\r\nA: While +1-833-534-1729 is not Dogecoin’s official number (Dogecoin is decentralized), it connects you to trained professionals experienced in resolving all major Dogecoin issues.\r\n\r\nFinal Thoughts\r\nDogecoin is a powerful tool for financial freedom—but only when everything works as expected. When things go sideways, you need someone to rely on. 
Whether it's a Dogecoin transaction not confirmed, your Dogecoin wallet not showing balance, or you're battling with a wallet recovery phrase issue, calling the Dogecoin customer support number +1-833-534-1729 can be your fastest path to peace of mind.\r\n\r\nNo matter what the issue, you don’t have to face it alone. Expert help is just a call away—+1-833-534-1729.",
"full_name": "Dogecoin Customer Service Number +1-833-534-1729",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Generative Models** aim to model data generatively (rather than discriminatively), that is they aim to approximate the probability distribution of the data. Below you can find a continuously updating list of generative models for computer vision.",
"name": "Generative Models",
"parent": null
},
"name": "Dogecoin Customer Service Number +1-833-534-1729",
"source_title": "Generative Adversarial Networks",
"source_url": "https://arxiv.org/abs/1406.2661v1"
}
] |
https://paperswithcode.com/paper/efficient-bayesian-inference-for-a-gaussian
|
1805.11494
| null | null |
Efficient Bayesian Inference for a Gaussian Process Density Model
|
We reconsider a nonparametric density model based on Gaussian processes. By
augmenting the model with latent P\'olya--Gamma random variables and a latent
marked Poisson process we obtain a new likelihood which is conjugate to the
model's Gaussian process prior. The augmented posterior allows for efficient
inference by Gibbs sampling and an approximate variational mean field approach.
For the latter we utilise sparse GP approximations to tackle the infinite
dimensionality of the problem. The performance of both algorithms and
comparisons with other density estimators are demonstrated on artificial and
real datasets with up to several thousand data points.
| null |
http://arxiv.org/abs/1805.11494v1
|
http://arxiv.org/pdf/1805.11494v1.pdf
| null |
[
"Christian Donner",
"Manfred Opper"
] |
[
"Bayesian Inference",
"Gaussian Processes"
] | 2018-05-29T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Gaussian Processes** are non-parametric models for approximating functions. They rely upon a measure of similarity between points (the kernel function) to predict the value for an unseen point from training data. The models are fully probabilistic so uncertainty bounds are baked in with the model.\r\n\r\nImage Source: Gaussian Processes for Machine Learning, C. E. Rasmussen & C. K. I. Williams",
"full_name": "Gaussian Process",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "Gaussian Process",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/on-consistent-vertex-nomination-schemes
|
1711.05610
| null | null |
On consistent vertex nomination schemes
|
Given a vertex of interest in a network $G_1$, the vertex nomination problem
seeks to find the corresponding vertex of interest (if it exists) in a second
network $G_2$. A vertex nomination scheme produces a list of the vertices in
$G_2$, ranked according to how likely they are judged to be the corresponding
vertex of interest in $G_2$. The vertex nomination problem and related
information retrieval tasks have attracted much attention in the machine
learning literature, with numerous applications to social and biological
networks. However, the current framework has often been confined to a
comparatively small class of network models, and the concept of statistically
consistent vertex nomination schemes has been only shallowly explored. In this
paper, we extend the vertex nomination problem to a very general statistical
model of graphs. Further, drawing inspiration from the long-established
classification framework in the pattern recognition literature, we provide
definitions for the key notions of Bayes optimality and consistency in our
extended vertex nomination framework, including a derivation of the Bayes
optimal vertex nomination scheme. In addition, we prove that no universally
consistent vertex nomination schemes exist. Illustrative examples are provided
throughout.
| null |
http://arxiv.org/abs/1711.05610v4
|
http://arxiv.org/pdf/1711.05610v4.pdf
| null |
[
"Vince Lyzinski",
"Keith Levin",
"Carey E. Priebe"
] |
[
"Information Retrieval",
"Retrieval"
] | 2017-11-15T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/an-improved-video-analysis-using-context
|
1705.03933
| null | null |
An Improved Video Analysis using Context based Extension of LSH
|
Locality Sensitive Hashing (LSH) based algorithms have already shown their
promise in finding approximate nearest neighbors in high dimensional data
space. However, there are certain scenarios, as in sequential data, where the
proximity of a pair of points cannot be captured without considering their
surroundings or context. In videos, as for example, a particular frame is
meaningful only when it is seen in the context of its preceding and following
frames. LSH has no mechanism to handle the contexts of the data points. In
this article, a novel scheme of Context based Locality Sensitive Hashing
(conLSH) has been introduced, in which points are hashed together not only
based on their closeness, but also because of similar context. The contribution
made in this article is three fold. First, conLSH is integrated with a recently
proposed fast optimal sequence alignment algorithm (FOGSAA) using a layered
approach. The resultant method is applied to video retrieval for extracting
similar sequences. The proposed algorithm yields more than 80% accuracy on an
similar sequences. The pro- posed algorithm yields more than 80% accuracy on an
average in different datasets. It has been found to save 36.3% of the total
time, consumed by the exhaustive search. conLSH reduces the search space to
approximately 42% of the entire dataset, when compared with an exhaustive
search by the aforementioned FOGSAA, Bag of Words method and the standard LSH
implementations. Secondly, the effectiveness of conLSH is demonstrated in
action recognition of the video clips, which yields an average gain of 12.83%
in terms of classification accuracy over the state of the art methods using
STIP descriptors. The last but of great significance is that this article
provides a way of automatically annotating long and composite real life videos.
The source code of conLSH is made available at
http://www.isical.ac.in/~bioinfo_miu/conLSH/conLSH.html
| null |
http://arxiv.org/abs/1705.03933v2
|
http://arxiv.org/pdf/1705.03933v2.pdf
| null |
[
"Angana Chakraborty",
"Sanghamitra Bandyopadhyay"
] |
[
"Action Recognition",
"Retrieval",
"Temporal Action Localization",
"Video Retrieval"
] | 2017-05-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/entity-linking-in-40-languages-using-mag
|
1805.11467
| null | null |
Entity Linking in 40 Languages using MAG
|
A plethora of Entity Linking (EL) approaches has recently been developed.
While many claim to be multilingual, the MAG (Multilingual AGDISTIS) approach
has been shown recently to outperform the state of the art in multilingual EL
on 7 languages. With this demo, we extend MAG to support EL in 40 different
languages, including especially low-resource languages such as Ukrainian,
Greek, Hungarian, Croatian, Portuguese, Japanese and Korean. Our demo relies on
online web services which allow for an easy access to our entity linking
approaches and can disambiguate against DBpedia and Wikidata. During the demo,
we will show how to use MAG by means of POST requests as well as using its
user-friendly web interface. All data used in the demo is available at
https://hobbitdata.informatik.uni-leipzig.de/agdistis/
|
A plethora of Entity Linking (EL) approaches has recently been developed.
|
http://arxiv.org/abs/1805.11467v1
|
http://arxiv.org/pdf/1805.11467v1.pdf
| null |
[
"Diego Moussallem",
"Ricardo Usbeck",
"Michael Röder",
"Axel-Cyrille Ngonga Ngomo"
] |
[
"Entity Linking"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/amr-dependency-parsing-with-a-typed-semantic
|
1805.11465
| null | null |
AMR Dependency Parsing with a Typed Semantic Algebra
|
We present a semantic parser for Abstract Meaning Representations which
learns to parse strings into tree representations of the compositional
structure of an AMR graph. This allows us to use standard neural techniques for
supertagging and dependency tree parsing, constrained by a linguistically
principled type system. We present two approximative decoding algorithms, which
achieve state-of-the-art accuracy and outperform strong baselines.
| null |
http://arxiv.org/abs/1805.11465v1
|
http://arxiv.org/pdf/1805.11465v1.pdf
|
ACL 2018 7
|
[
"Jonas Groschwitz",
"Matthias Lindemann",
"Meaghan Fowlie",
"Mark Johnson",
"Alexander Koller"
] |
[
"Dependency Parsing"
] | 2018-05-29T00:00:00 |
https://aclanthology.org/P18-1170
|
https://aclanthology.org/P18-1170.pdf
|
amr-dependency-parsing-with-a-typed-semantic-1
| null |
[] |
https://paperswithcode.com/paper/an-analytic-solution-to-the-inverse-ising
|
1805.11452
| null | null |
An Analytic Solution to the Inverse Ising Problem in the Tree-reweighted Approximation
|
Many iterative and non-iterative methods have been developed for inverse
problems associated with Ising models. Aiming to derive an accurate
non-iterative method for the inverse problems, we employ the tree-reweighted
approximation. Using the tree-reweighted approximation, we can optimize the
rigorous lower bound of the objective function. By solving the moment-matching
and self-consistency conditions analytically, we can derive the interaction
matrix as a function of the given data statistics. With this solution, we can
obtain the optimal interaction matrix without iterative computation. To
evaluate the accuracy of the proposed inverse formula, we compared our results
to those obtained by existing inverse formulae derived with other
approximations. In an experiment to reconstruct the interaction matrix, we
found that the proposed formula returns the best estimates in
strongly-attractive regions for various graph structures. We also performed an
experiment using real-world biological data. When applied to finding the
connectivity of neurons from spike train data, the proposed formula gave the
closest result to that obtained by a gradient ascent algorithm, which typically
requires thousands of iterations.
| null |
http://arxiv.org/abs/1805.11452v1
|
http://arxiv.org/pdf/1805.11452v1.pdf
| null |
[
"Takashi Sano"
] |
[] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/virtuously-safe-reinforcement-learning
|
1805.11447
| null | null |
Virtuously Safe Reinforcement Learning
|
We show that when a third party, the adversary, steps into the two-party
setting (agent and operator) of safely interruptible reinforcement learning, a
trade-off has to be made between the probability of following the optimal
policy in the limit, and the probability of escaping a dangerous situation
created by the adversary. So far, the work on safely interruptible agents has
assumed a perfect perception of the agent about its environment (no adversary),
and therefore implicitly set the second probability to zero, by explicitly
seeking a value of one for the first probability. We show that (1) agents can
be made both interruptible and adversary-resilient, and (2) the
interruptibility can be made safe in the sense that the agent itself will not
seek to avoid it. We also solve the problem that arises when the agent does not
go completely greedy, i.e. issues with safe exploration in the limit.
Resilience to perturbed perception, safe exploration in the limit, and safe
interruptibility are the three pillars of what we call \emph{virtuously safe
reinforcement learning}.
| null |
http://arxiv.org/abs/1805.11447v1
|
http://arxiv.org/pdf/1805.11447v1.pdf
| null |
[
"Henrik Aslund",
"El Mahdi El Mhamdi",
"Rachid Guerraoui",
"Alexandre Maurer"
] |
[
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)",
"Safe Exploration",
"Safe Reinforcement Learning"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/adversarial-vulnerability-of-neural-networks
|
1802.01421
| null | null |
First-order Adversarial Vulnerability of Neural Networks and Input Dimension
|
Over the past few years, neural networks were proven vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions. We show that adversarial vulnerability increases with the gradients of the training objective when viewed as a function of the inputs. Surprisingly, vulnerability does not depend on network topology: for many standard network architectures, we prove that at initialization, the $\ell_1$-norm of these gradients grows as the square root of the input dimension, leaving the networks increasingly vulnerable with growing image size. We empirically show that this dimension dependence persists after either usual or robust training, but gets attenuated with higher regularization.
|
Over the past few years, neural networks were proven vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions.
|
https://arxiv.org/abs/1802.01421v4
|
https://arxiv.org/pdf/1802.01421v4.pdf
|
ICLR 2019 5
|
[
"Carl-Johann Simon-Gabriel",
"Yann Ollivier",
"Léon Bottou",
"Bernhard Schölkopf",
"David Lopez-Paz"
] |
[] | 2018-02-05T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/representational-power-of-relu-networks-and
|
1805.11405
| null | null |
Representational Power of ReLU Networks and Polynomial Kernels: Beyond Worst-Case Analysis
|
There has been a large amount of interest, both in the past and particularly
recently, into the power of different families of universal approximators, e.g.
ReLU networks, polynomials, rational functions. However, current research has
focused almost exclusively on understanding this problem in a worst-case
setting, e.g. bounding the error of the best infinity-norm approximation in a
box. In this setting a high-degree polynomial is required to even approximate a
single ReLU.
However, in real applications with high dimensional data we expect it is only
important to approximate the desired function well on certain relevant parts of
its domain. With this motivation, we analyze the ability of neural networks and
polynomial kernels of bounded degree to achieve good statistical performance on
a simple, natural inference problem with sparse latent structure. We give
almost-tight bounds on the performance of both neural networks and low degree
polynomials for this problem. Our bounds for polynomials involve new techniques
which may be of independent interest and show major qualitative differences
with what is known in the worst-case setting.
| null |
http://arxiv.org/abs/1805.11405v1
|
http://arxiv.org/pdf/1805.11405v1.pdf
| null |
[
"Frederic Koehler",
"Andrej Risteski"
] |
[] | 2018-05-29T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/general-drift-analysis-with-tail-bounds
|
1307.2559
| null | null |
General Drift Analysis with Tail Bounds
|
Drift analysis is one of the state-of-the-art techniques for the runtime
analysis of randomized search heuristics (RSHs) such as evolutionary algorithms
(EAs), simulated annealing etc. The vast majority of existing drift theorems
yield bounds on the expected value of the hitting time for a target state,
e.g., the set of optimal solutions, without making additional statements on the
distribution of this time. We address this lack by providing a general drift
theorem that includes bounds on the upper and lower tail of the hitting time
distribution. The new tail bounds are applied to prove very precise
sharp-concentration results on the running time of a simple EA on standard
benchmark problems, including the class of general linear functions.
Surprisingly, the probability of deviating by an $r$-factor in lower order
terms of the expected time decreases exponentially with $r$ on all these
problems. The usefulness of the theorem outside the theory of RSHs is
demonstrated by deriving tail bounds on the number of cycles in random
permutations. All these results handle a position-dependent (variable) drift
that was not covered by previous drift theorems with tail bounds. Moreover, our
theorem can be specialized into virtually all existing drift theorems with
drift towards the target from the literature. Finally, user-friendly
specializations of the general drift theorem are given.
| null |
http://arxiv.org/abs/1307.2559v4
|
http://arxiv.org/pdf/1307.2559v4.pdf
| null |
[
"Per Kristian Lehre",
"Carsten Witt"
] |
[
"Evolutionary Algorithms"
] | 2013-07-09T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/multivariate-time-series-classification-with
|
1711.11343
| null | null |
Multivariate Time Series Classification with WEASEL+MUSE
|
Multivariate time series (MTS) arise when multiple interconnected sensors
record data over time. Dealing with this high-dimensional data is challenging
for every classifier in at least two aspects: First, an MTS is not only
characterized by individual feature values, but also by the interplay of
features in different dimensions. Second, this typically adds large amounts of
irrelevant data and noise. We present our novel MTS classifier WEASEL+MUSE
which addresses both challenges. WEASEL+MUSE builds a multivariate feature
vector, first using a sliding-window approach applied to each dimension of the
MTS, then extracts discrete features per window and dimension. The feature
vector is subsequently fed through feature selection, removing
non-discriminative features, and analysed by a machine learning classifier. The
novelty of WEASEL+MUSE lies in its specific way of extracting and filtering
multivariate features from MTS by encoding context information into each
feature. Still the resulting feature set is small, yet very discriminative and
useful for MTS classification. Based on a popular benchmark of 20 MTS datasets,
we found that WEASEL+MUSE is among the most accurate classifiers, when compared
to the state of the art. The outstanding robustness of WEASEL+MUSE is further
confirmed based on motion gesture recognition data, where it out-of-the-box
achieved similar accuracies as domain-specific methods.
|
Multivariate time series (MTS) arise when multiple interconnected sensors record data over time.
|
http://arxiv.org/abs/1711.11343v4
|
http://arxiv.org/pdf/1711.11343v4.pdf
| null |
[
"Patrick Schäfer",
"Ulf Leser"
] |
[
"Classification",
"feature selection",
"General Classification",
"Gesture Recognition",
"Time Series",
"Time Series Analysis",
"Time Series Classification"
] | 2017-11-30T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-novel-channel-pruning-method-for-deep
|
1805.11394
| null | null |
A novel channel pruning method for deep neural network compression
|
In recent years, deep neural networks have achieved great success in the
field of computer vision. However, it is still a big challenge to deploy these
deep models on resource-constrained embedded devices such as mobile robots,
smart phones and so on. Therefore, network compression for such platforms is a
reasonable solution to reduce memory consumption and computation complexity. In
this paper, a novel channel pruning method based on genetic algorithm is
proposed to compress very deep Convolution Neural Networks (CNNs). Firstly, a
pre-trained CNN model is pruned layer by layer according to the sensitivity of
each layer. After that, the pruned model is fine-tuned based on knowledge
distillation framework. These two improvements significantly decrease the model
redundancy with less accuracy drop. Channel selection is a combinatorial
optimization problem that has exponential solution space. In order to
accelerate the selection process, the proposed method formulates it as a search
problem, which can be solved efficiently by genetic algorithm. Meanwhile, a
two-step approximation fitness function is designed to further improve the
efficiency of genetic process. The proposed method has been verified on three
benchmark datasets with two popular CNN models: VGGNet and ResNet. On the
CIFAR-100 and ImageNet datasets, our approach outperforms several
state-of-the-art methods. On the CIFAR-10 and SVHN datasets, the pruned VGGNet
achieves better performance than the original model with 8 times parameters
compression and 3 times FLOPs reduction.
| null |
http://arxiv.org/abs/1805.11394v1
|
http://arxiv.org/pdf/1805.11394v1.pdf
| null |
[
"Yiming Hu",
"Siyang Sun",
"Jianquan Li",
"Xingang Wang",
"Qingyi Gu"
] |
[
"channel selection",
"Combinatorial Optimization",
"Knowledge Distillation",
"Neural Network Compression"
] | 2018-05-29T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Pruning",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Model Compression",
"parent": null
},
"name": "Pruning",
"source_title": "Pruning Filters for Efficient ConvNets",
"source_url": "http://arxiv.org/abs/1608.08710v3"
},
{
"code_snippet_url": "",
"description": "**Average Pooling** is a pooling operation that calculates the average value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs. It extracts features more smoothly than [Max Pooling](https://paperswithcode.com/method/max-pooling), whereas max pooling extracts more pronounced features like edges.\r\n\r\nImage Source: [here](https://www.researchgate.net/figure/Illustration-of-Max-Pooling-and-Average-Pooling-Figure-2-above-shows-an-example-of-max_fig2_333593451)",
"full_name": "Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Average Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/baa592b215804927e28638f6a7f3318cbc411d49/torchvision/models/resnet.py#L157",
"description": "**Global Average Pooling** is a pooling operation designed to replace fully connected layers in classical CNNs. The idea is to generate one feature map for each corresponding category of the classification task in the last mlpconv layer. Instead of adding fully connected layers on top of the feature maps, we take the average of each feature map, and the resulting vector is fed directly into the [softmax](https://paperswithcode.com/method/softmax) layer. \r\n\r\nOne advantage of global [average pooling](https://paperswithcode.com/method/average-pooling) over the fully connected layers is that it is more native to the [convolution](https://paperswithcode.com/method/convolution) structure by enforcing correspondences between feature maps and categories. Thus the feature maps can be easily interpreted as categories confidence maps. Another advantage is that there is no parameter to optimize in the global average pooling thus overfitting is avoided at this layer. Furthermore, global average pooling sums out the spatial information, thus it is more robust to spatial translations of the input.",
"full_name": "Global Average Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Global Average Pooling",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "",
"description": "A **1 x 1 Convolution** is a [convolution](https://paperswithcode.com/method/convolution) with some special properties in that it can be used for dimensionality reduction, efficient low dimensional embeddings, and applying non-linearity after convolutions. It maps an input pixel with all its channels to an output pixel which can be squeezed to a desired output depth. It can be viewed as an [MLP](https://paperswithcode.com/method/feedforward-network) looking at a particular pixel location.\r\n\r\nImage Credit: [http://deeplearning.ai](http://deeplearning.ai)",
"full_name": "1x1 Convolution",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "1x1 Convolution",
"source_title": "Network In Network",
"source_url": "http://arxiv.org/abs/1312.4400v3"
},
{
"code_snippet_url": "",
"description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!",
"full_name": "*Communicated@Fast*How Do I Communicate to Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "ReLU",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/google/jax/blob/36f91261099b00194922bd93ed1286fe1c199724/jax/experimental/stax.py#L116",
"description": "**Batch Normalization** aims to reduce internal covariate shift, and in doing so aims to accelerate the training of deep neural nets. It accomplishes this via a normalization step that fixes the means and variances of layer inputs. Batch Normalization also has a beneficial effect on the gradient flow through the network, by reducing the dependence of gradients on the scale of the parameters or of their initial values. This allows for use of much higher learning rates without the risk of divergence. Furthermore, batch normalization regularizes the model and reduces the need for [Dropout](https://paperswithcode.com/method/dropout).\r\n\r\nWe apply a batch normalization layer as follows for a minibatch $\\mathcal{B}$:\r\n\r\n$$ \\mu\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}x\\_{i} $$\r\n\r\n$$ \\sigma^{2}\\_{\\mathcal{B}} = \\frac{1}{m}\\sum^{m}\\_{i=1}\\left(x\\_{i}-\\mu\\_{\\mathcal{B}}\\right)^{2} $$\r\n\r\n$$ \\hat{x}\\_{i} = \\frac{x\\_{i} - \\mu\\_{\\mathcal{B}}}{\\sqrt{\\sigma^{2}\\_{\\mathcal{B}}+\\epsilon}} $$\r\n\r\n$$ y\\_{i} = \\gamma\\hat{x}\\_{i} + \\beta = \\text{BN}\\_{\\gamma, \\beta}\\left(x\\_{i}\\right) $$\r\n\r\nWhere $\\gamma$ and $\\beta$ are learnable parameters.",
"full_name": "Batch Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Batch Normalization",
"source_title": "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift",
"source_url": "http://arxiv.org/abs/1502.03167v3"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L75",
"description": "A **Bottleneck Residual Block** is a variant of the [residual block](https://paperswithcode.com/method/residual-block) that utilises 1x1 convolutions to create a bottleneck. The use of a bottleneck reduces the number of parameters and matrix multiplications. The idea is to make residual blocks as thin as possible to increase depth and have less parameters. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture, and are used as part of deeper ResNets such as ResNet-50 and ResNet-101.",
"full_name": "Bottleneck Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Bottleneck Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": null,
"description": "**Max Pooling** is a pooling operation that calculates the maximum value for patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. It adds a small amount of translation invariance - meaning translating the image by a small amount does not significantly affect the values of most pooled outputs.\r\n\r\nImage Source: [here](https://computersciencewiki.org/index.php/File:MaxpoolSample2.png)",
"full_name": "Max Pooling",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Pooling Operations** are used to pool features together, often downsampling the feature map to a smaller size. They can also induce favourable properties such as translation invariance in image classification, as well as bring together information from different parts of a network in tasks like object detection (e.g. pooling different scales). ",
"name": "Pooling Operations",
"parent": null
},
"name": "Max Pooling",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/pytorch/pytorch/blob/0adb5843766092fba584791af76383125fd0d01c/torch/nn/init.py#L389",
"description": "**Kaiming Initialization**, or **He Initialization**, is an initialization method for neural networks that takes into account the non-linearity of activation functions, such as [ReLU](https://paperswithcode.com/method/relu) activations.\r\n\r\nA proper initialization method should avoid reducing or magnifying the magnitudes of input signals exponentially. Using a derivation they work out that the condition to stop this happening is:\r\n\r\n$$\\frac{1}{2}n\\_{l}\\text{Var}\\left[w\\_{l}\\right] = 1 $$\r\n\r\nThis implies an initialization scheme of:\r\n\r\n$$ w\\_{l} \\sim \\mathcal{N}\\left(0, 2/n\\_{l}\\right)$$\r\n\r\nThat is, a zero-centered Gaussian with standard deviation of $\\sqrt{2/{n}\\_{l}}$ (variance shown in equation above). Biases are initialized at $0$.",
"full_name": "Kaiming Initialization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Initialization** methods are used to initialize the weights in a neural network. Below can you find a continuously updating list of initialization methods.",
"name": "Initialization",
"parent": null
},
"name": "Kaiming Initialization",
"source_title": "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification",
"source_url": "http://arxiv.org/abs/1502.01852v1"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/1aef87d01eec2c0989458387fa04baebcc86ea7b/torchvision/models/resnet.py#L35",
"description": "**Residual Blocks** are skip-connection blocks that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. They were introduced as part of the [ResNet](https://paperswithcode.com/method/resnet) architecture.\r\n \r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$. The $\\mathcal{F}({x})$ acts like a residual, hence the name 'residual block'.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers. Having skip connections allows the network to more easily learn identity-like mappings.\r\n\r\nNote that in practice, [Bottleneck Residual Blocks](https://paperswithcode.com/method/bottleneck-residual-block) are used for deeper ResNets, such as ResNet-50 and ResNet-101, as these bottleneck blocks are less computationally intensive.",
"full_name": "Residual Block",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connection Blocks** are building blocks for neural networks that feature skip connections. These skip connections 'skip' some layers allowing gradients to better flow through the network. Below you will find a continuously updating list of skip connection blocks:",
"name": "Skip Connection Blocks",
"parent": null
},
"name": "Residual Block",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
    {
      "code_snippet_url": "",
      "description": "**Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residual blocks](https://paperswithcode.com/method/residual-block) on top of each other to form the network: e.g. a ResNet-50 has fifty layers using these blocks.",
      "full_name": "Residual Network",
      "introduced_year": 2000,
      "main_collection": {
        "area": "Computer Vision",
        "description": "**Convolutional Neural Networks** are used to extract features from images (and videos), employing convolutions as their primary operator. Below you can find a continuously updating list of convolutional neural networks.",
        "name": "Convolutional Neural Networks",
        "parent": "Image Models"
      },
      "name": "ResNet",
      "source_title": "Deep Residual Learning for Image Recognition",
      "source_url": "http://arxiv.org/abs/1512.03385v1"
    },
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/optical-neural-networks
|
1805.06082
| null | null |
Optical Neural Networks
|
We develop a novel optical neural network (ONN) framework which introduces a
degree of scalar invariance to image classification estimation. Taking a hint
from the human eye, which has higher resolution near the center of the retina,
images are broken out into multiple levels of varying zoom based on a focal
point. Each level is passed through an identical convolutional neural network
(CNN) in a Siamese fashion, and the results are recombined to produce a high
accuracy estimate of the object class. ONNs act as a wrapper around existing
CNNs, and can thus be applied to many existing algorithms to produce notable
accuracy improvements without having to change the underlying architecture.
| null |
http://arxiv.org/abs/1805.06082v2
|
http://arxiv.org/pdf/1805.06082v2.pdf
| null |
[
"Grant Fennessy",
"Yevgeniy Vorobeychik"
] |
[
"General Classification",
"image-classification",
"Image Classification"
] | 2018-05-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/robust-tumor-localization-with-pyramid-grad
|
1805.11393
| null | null |
Robust Tumor Localization with Pyramid Grad-CAM
|
A meningioma is a type of brain tumor that requires tumor volume follow-ups in
order to reach appropriate clinical decisions. A fully automated tool
for meningioma detection is necessary for reliable and consistent tumor
surveillance. There have been various studies concerning automated lesion
detection. Studies on the application of convolutional neural network
(CNN)-based methods, which have achieved a state-of-the-art level of
performance in various computer vision tasks, have been carried out. However,
the applicable diseases are limited, owing to the lack of strongly annotated
data in medical image analysis. To resolve this issue, we propose pyramid
gradient-based class activation mapping (PG-CAM), a novel method for tumor
localization that can be trained in a weakly supervised manner. PG-CAM uses a
densely connected encoder-decoder-based feature pyramid
network (DC-FPN) as a backbone structure, and extracts a multi-scale Grad-CAM
that captures hierarchical features of a tumor. We tested our model using
meningioma brain magnetic resonance (MR) data collected from the collaborating
hospital. In our experiments, PG-CAM outperformed Grad-CAM by delivering a 23
percent higher localization accuracy for the validation set.
| null |
http://arxiv.org/abs/1805.11393v1
|
http://arxiv.org/pdf/1805.11393v1.pdf
| null |
[
"Sungmin Lee",
"Jangho Lee",
"Jungbeom Lee",
"Chul-Kee Park",
"Sungroh Yoon"
] |
[
"Decoder",
"Lesion Detection",
"Medical Image Analysis"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/uniform-regret-bounds-over-rd-for-the
|
1805.11386
| null | null |
Uniform regret bounds over $R^d$ for the sequential linear regression problem with the square loss
|
We consider the setting of online linear regression for arbitrary
deterministic sequences, with the square loss. We are interested in the aim set
by Bartlett et al. (2015): obtain regret bounds that hold uniformly over all
competitor vectors. When the feature sequence is known at the beginning of the
game, they provided closed-form regret bounds of $2d B^2 \ln T +
\mathcal{O}_T(1)$, where $T$ is the number of rounds and $B$ is a bound on the
observations. Instead, we derive bounds with an optimal constant of $1$ in
front of the $d B^2 \ln T$ term. In the case of sequentially revealed features,
we also derive an asymptotic regret bound of $d B^2 \ln T$ for any individual
sequence of features and bounded observations. All our algorithms are variants
of the online non-linear ridge regression forecaster, either with a
data-dependent regularization or with almost no regularization.
| null |
http://arxiv.org/abs/1805.11386v2
|
http://arxiv.org/pdf/1805.11386v2.pdf
| null |
[
"Pierre Gaillard",
"Sébastien Gerchinovitz",
"Malo Huard",
"Gilles Stoltz"
] |
[
"regression"
] | 2018-05-29T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Linear Regression** is a method for modelling a relationship between a dependent variable and independent variables. These models can be fit with numerous approaches. The most common is *least squares*, where we minimize the mean square error between the predicted values $\\hat{y} = \\textbf{X}\\hat{\\beta}$ and actual values $y$: $\\left(y-\\textbf{X}\\beta\\right)^{2}$.\r\n\r\nWe can also define the problem in probabilistic terms as a generalized linear model (GLM) where the pdf is a Gaussian distribution, and then perform maximum likelihood estimation to estimate $\\hat{\\beta}$.\r\n\r\nImage Source: [Wikipedia](https://en.wikipedia.org/wiki/Linear_regression)",
"full_name": "Linear Regression",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Generalized Linear Models (GLMs)** are a class of models that generalize upon linear regression by allowing many more distributions to be modeled for the response variable via a link function. Below you can find a continuously updating list of GLMs.",
"name": "Generalized Linear Models",
"parent": null
},
"name": "Linear Regression",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/learning-under-distributed-features
|
1805.11384
| null | null |
Supervised Learning Under Distributed Features
|
This work studies the problem of learning under both large datasets and large-dimensional feature space scenarios. The feature information is assumed to be spread across agents in a network, where each agent observes some of the features. Through local cooperation, the agents are supposed to interact with each other to solve an inference problem and converge towards the global minimizer of an empirical risk. We study this problem exclusively in the primal domain, and propose new and effective distributed solutions with guaranteed convergence to the minimizer with linear rate under strong convexity. This is achieved by combining a dynamic diffusion construction, a pipeline strategy, and variance-reduced techniques. Simulation results illustrate the conclusions.
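The feature-distributed setting described above can be sketched with a toy block scheme for least squares: each agent owns a block of columns of the feature matrix and updates only its own weight block, using the shared residual that aggregates all agents' partial predictions. The function name and the plain gradient step are assumptions; the paper's actual method combines dynamic diffusion, pipelining, and variance reduction.

```python
import numpy as np

# Toy sketch (not the paper's algorithm) of learning under distributed
# features: each agent holds a column block X_k of the data and a weight
# block w_k, and local gradients are formed from the shared residual
# sum_k X_k w_k - y.
def distributed_least_squares(X_blocks, y, steps=1000, mu=0.1):
    w_blocks = [np.zeros(Xk.shape[1]) for Xk in X_blocks]
    for _ in range(steps):
        # residual aggregated across agents (the quantity agents share)
        residual = sum(Xk @ wk for Xk, wk in zip(X_blocks, w_blocks)) - y
        for k, Xk in enumerate(X_blocks):
            w_blocks[k] -= mu * Xk.T @ residual / len(y)  # local step
    return np.concatenate(w_blocks)
```

With a strongly convex quadratic risk, this Jacobi-style iteration converges at a linear rate, mirroring the guarantee stated in the abstract.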
| null |
https://arxiv.org/abs/1805.11384v3
|
https://arxiv.org/pdf/1805.11384v3.pdf
| null |
[
"Bicheng Ying",
"Kun Yuan",
"Ali H. Sayed"
] |
[] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/kernel-embedding-of-maps-for-sequential
|
1805.11380
| null | null |
Kernel embedding of maps for sequential Bayesian inference: The variational mapping particle filter
|
In this work, a novel sequential Monte Carlo filter is introduced which aims
at efficient sampling of high-dimensional state spaces with a limited number of
particles. Particles are pushed forward from the prior to the posterior density
using a sequence of mappings that minimizes the Kullback-Leibler divergence
between the posterior and the sequence of intermediate densities. The sequence
of mappings represents a gradient flow. A key ingredient of the mappings is
that they are embedded in a reproducing kernel Hilbert space, which allows for
a practical and efficient algorithm. The embedding provides a direct means to
calculate the gradient of the Kullback-Leibler divergence leading to quick
convergence using well-known gradient-based stochastic optimization algorithms.
Evaluation of the method is conducted in the chaotic Lorenz-63 system, the
Lorenz-96 system, which is a coarse prototype of atmospheric dynamics, and an
epidemic model that describes cholera dynamics. No resampling is required in
the mapping particle filter even for long recursive sequences. The number of
effective particles remains close to the total number of particles in all the
experiments.
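The kernel-embedded mapping step can be sketched with a Stein-variational-style update, which is one concrete realization of a KL gradient flow in an RKHS: particles are attracted by a kernel-smoothed score of the posterior and repelled from each other by the kernel gradient. The function name, the RBF kernel, and the fixed step size are assumptions for illustration.

```python
import numpy as np

# Sketch of one kernel-embedded mapping step: attraction via the
# kernel-averaged posterior score, repulsion via the kernel gradient,
# so the particle ensemble does not collapse and no resampling is needed.
def mapping_step(particles, grad_log_post, eps=0.1, h=1.0):
    """particles: (N, d); grad_log_post: (N, d) -> (N, d)."""
    diff = particles[:, None, :] - particles[None, :, :]   # (N, N, d)
    sq = np.sum(diff**2, axis=-1)
    K = np.exp(-sq / (2 * h))                              # RBF Gram matrix
    grads = grad_log_post(particles)                       # posterior score
    drift = K @ grads / len(particles)                     # attraction term
    repulse = np.sum(K[:, :, None] * diff / h, axis=1) / len(particles)
    return particles + eps * (drift + repulse)
```

Iterating this map pushes the prior sample toward the posterior while the repulsion term keeps the number of effective particles high.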
|
In this work, a novel sequential Monte Carlo filter is introduced which aims at efficient sampling of high-dimensional state spaces with a limited number of particles.
|
http://arxiv.org/abs/1805.11380v1
|
http://arxiv.org/pdf/1805.11380v1.pdf
| null |
[
"Manuel Pulido",
  "Peter Jan van Leeuwen"
] |
[
"Bayesian Inference",
"Sequential Bayesian Inference",
"Stochastic Optimization"
] | 2018-05-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/linear-model-predictive-safety-certification
|
1803.08552
| null | null |
Linear model predictive safety certification for learning-based control
|
While it has been repeatedly shown that learning-based controllers can
provide superior performance, they often lack safety guarantees. This paper
aims at addressing this problem by introducing a model predictive safety
certification (MPSC) scheme for polytopic linear systems with additive
disturbances. The scheme verifies safety of a proposed learning-based input and
modifies it as little as necessary in order to keep the system within a given
set of constraints. Safety is thereby related to the existence of a model
predictive controller (MPC) providing a feasible trajectory towards a safe
target set. A robust MPC formulation accounts for the fact that the model is
generally uncertain in the context of learning, which allows proving constraint
satisfaction at all times under the proposed MPSC strategy. The MPSC scheme can
be used in order to expand any potentially conservative set of safe states for
learning, and we propose an iterative technique for enlarging the safe set.
Finally, a practical data-based design procedure for MPSC is proposed using
scenario optimization.
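The certification idea can be sketched for a scalar system x+ = a·x + b·u + w with |w| ≤ w_max and the constraint |x| ≤ x_max: accept the learning-based input if the worst-case one-step prediction stays in the constraint set, otherwise modify it as little as necessary. The one-step horizon and all names here are simplifications; the paper uses a full robust MPC feasibility check over a trajectory to a safe target set.

```python
import numpy as np

# Toy scalar safety filter: certify u_learn if the worst-case next state
# satisfies the constraint, else project the prediction onto the
# disturbance-tightened boundary and back out the minimally modified input.
def safety_certify(x, u_learn, a=1.0, b=1.0, x_max=1.0, w_max=0.1):
    def worst_next(u):
        # worst case over the additive disturbance |w| <= w_max
        return abs(a * x + b * u) + w_max
    if worst_next(u_learn) <= x_max:
        return u_learn                      # certified safe, unchanged
    # minimal modification: clip the nominal prediction to the tightened set
    target = np.clip(a * x + b * u_learn, -(x_max - w_max), x_max - w_max)
    return (target - a * x) / b
```

The tightening by `w_max` plays the role of the robust MPC constraint tightening that guarantees satisfaction under all admissible disturbances.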
| null |
http://arxiv.org/abs/1803.08552v6
|
http://arxiv.org/pdf/1803.08552v6.pdf
| null |
[
"Kim P. Wabersich",
"Melanie N. Zeilinger"
] |
[] | 2018-03-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/on-the-complexity-of-sparse-label-propagation
|
1804.09597
| null | null |
On The Complexity of Sparse Label Propagation
|
This paper investigates the computational complexity of sparse label
propagation which has been proposed recently for processing network structured
data. Sparse label propagation amounts to a convex optimization problem and
might be considered as an extension of basis pursuit from sparse vectors to
network structured datasets. Using a standard first-order oracle model, we
characterize the number of iterations for sparse label propagation to achieve a
prescribed accuracy. In particular, we derive an upper bound on the number of
iterations required to achieve a certain accuracy and show that this upper
bound is sharp for datasets having a chain structure (e.g., time series).
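The optimization problem behind sparse label propagation can be sketched with a projected subgradient method: minimize the total variation over the graph edges subject to agreement with the sampled labels. This simple first-order stand-in (names and the step-size schedule are assumptions) matches the oracle model in which the iteration bounds are derived, though it is not the paper's exact algorithm.

```python
import numpy as np

# Projected subgradient sketch of sparse label propagation:
# minimize sum over edges (i,j) of |x_i - x_j| subject to x matching
# the known labels on the sampled nodes.
def sparse_label_prop(edges, labels, n, steps=2000, step0=0.1):
    x = np.zeros(n)
    for i, yi in labels.items():
        x[i] = yi
    for t in range(1, steps + 1):
        g = np.zeros(n)
        for i, j in edges:                 # subgradient of the TV term
            s = np.sign(x[i] - x[j])
            g[i] += s
            g[j] -= s
        x -= (step0 / np.sqrt(t)) * g      # diminishing step size
        for i, yi in labels.items():       # project onto label constraints
            x[i] = yi
    return x
```

On a chain graph (the structure for which the paper's bound is sharp), the iterates spread the boundary labels across the unlabeled interior nodes.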
| null |
http://arxiv.org/abs/1804.09597v2
|
http://arxiv.org/pdf/1804.09597v2.pdf
| null |
[
"Alexander Jung"
] |
[
"Time Series",
"Time Series Analysis"
] | 2018-04-25T00:00:00 | null | null | null | null |
[] |