paper_url | arxiv_id | nips_id | openreview_id | title | abstract | short_abstract | url_abs | url_pdf | proceeding | authors | tasks | date | conference_url_abs | conference_url_pdf | conference | reproduces_paper | methods
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://paperswithcode.com/paper/refedit-a-benchmark-and-method-for-improving
|
2506.03448
| null | null |
RefEdit: A Benchmark and Method for Improving Instruction-based Image Editing Model on Referring Expressions
|
Despite recent advances in inversion and instruction-based image editing, existing approaches primarily excel at editing single, prominent objects but significantly struggle when applied to complex scenes containing multiple entities. To quantify this gap, we first introduce RefEdit-Bench, a rigorous real-world benchmark rooted in RefCOCO, where even baselines trained on millions of samples perform poorly. To overcome this limitation, we introduce RefEdit -- an instruction-based editing model trained on data from our scalable synthetic data generation pipeline. Our RefEdit, trained on only 20,000 editing triplets, outperforms the Flux/SD3 model-based baselines trained on millions of samples. Extensive evaluations across various benchmarks demonstrate that our model not only excels in referring expression tasks but also enhances performance on traditional benchmarks, achieving state-of-the-art results comparable to closed-source methods. We release our data and checkpoints for reproducibility.
| null |
https://arxiv.org/abs/2506.03448v1
|
https://arxiv.org/pdf/2506.03448v1.pdf
| null |
[
"Bimsara Pathiraja",
"Maitreya Patel",
"Shivam Singh",
"Yezhou Yang",
"Chitta Baral"
] |
[
"Referring Expression",
"Synthetic Data Generation"
] | 2025-06-03T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/sharegpt-4o-image-aligning-multimodal-models
|
2506.18095
| null | null |
ShareGPT-4o-Image: Aligning Multimodal Models with GPT-4o-Level Image Generation
|
Recent advances in multimodal generative models have unlocked photorealistic, instruction-aligned image generation, yet leading systems like GPT-4o-Image remain proprietary and inaccessible. To democratize these capabilities, we present ShareGPT-4o-Image, the first dataset comprising 45K text-to-image and 46K text-and-image-to-image examples, all synthesized using GPT-4o's image generation capabilities to distill its advanced image generation abilities. Leveraging this dataset, we develop Janus-4o, a multimodal large language model capable of both text-to-image and text-and-image-to-image generation. Janus-4o not only significantly improves text-to-image generation over its predecessor, Janus-Pro, but also newly supports text-and-image-to-image generation. Notably, it achieves impressive performance in text-and-image-to-image generation from scratch, using only 91K synthetic samples and 6 hours of training on an 8 A800-GPU machine. We hope the release of ShareGPT-4o-Image and Janus-4o will foster open research in photorealistic, instruction-aligned image generation.
|
Recent advances in multimodal generative models have unlocked photorealistic, instruction-aligned image generation, yet leading systems like GPT-4o-Image remain proprietary and inaccessible.
|
https://arxiv.org/abs/2506.18095v1
|
https://arxiv.org/pdf/2506.18095v1.pdf
| null |
[
"Junying Chen",
"Zhenyang Cai",
"Pengcheng Chen",
"Shunian Chen",
"Ke Ji",
"Xidong Wang",
"Yunjin Yang",
"Benyou Wang"
] |
[
"GPU",
"Image Generation",
"Language Modeling",
"Language Modelling",
"Large Language Model",
"Multimodal Large Language Model",
"Text to Image Generation",
"Text-to-Image Generation"
] | 2025-06-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-comprehensive-survey-on-continual-learning
|
2506.13045
| null | null |
A Comprehensive Survey on Continual Learning in Generative Models
|
The rapid advancement of generative models has enabled modern AI systems to comprehend and produce highly sophisticated content, even achieving human-level performance in specific domains. However, these models remain fundamentally constrained by catastrophic forgetting - a persistent challenge where adapting to new tasks typically leads to significant degradation in performance on previously learned tasks. To address this practical limitation, numerous approaches have been proposed to enhance the adaptability and scalability of generative models in real-world applications. In this work, we present a comprehensive survey of continual learning methods for mainstream generative models, including large language models, multimodal large language models, vision language action models, and diffusion models. Drawing inspiration from the memory mechanisms of the human brain, we systematically categorize these approaches into three paradigms: architecture-based, regularization-based, and replay-based methods, while elucidating their underlying methodologies and motivations. We further analyze continual learning setups for different generative models, including training objectives, benchmarks, and core backbones, offering deeper insights into the field. The project page of this paper is available at https://github.com/Ghy0501/Awesome-Continual-Learning-in-Generative-Models.
|
The rapid advancement of generative models has enabled modern AI systems to comprehend and produce highly sophisticated content, even achieving human-level performance in specific domains.
|
https://arxiv.org/abs/2506.13045v3
|
https://arxiv.org/pdf/2506.13045v3.pdf
| null |
[
"Haiyang Guo",
"Fanhu Zeng",
"Fei Zhu",
"Jiayi Wang",
"Xukai Wang",
"Jingang Zhou",
"Hongbo Zhao",
"Wenzhuo LIU",
"Shijie Ma",
"Da-Han Wang",
"Xu-Yao Zhang",
"Cheng-Lin Liu"
] |
[
"Continual Learning",
"Survey",
"Vision-Language-Action"
] | 2025-06-16T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).",
"full_name": "Diffusion",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Generation Models",
"parent": null
},
"name": "Diffusion",
"source_title": "Denoising Diffusion Probabilistic Models",
"source_url": "https://arxiv.org/abs/2006.11239v2"
}
] |
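The Diffusion method entry in the record above notes that DDPM training optimizes a reweighted variational lower-bound; per the cited DDPM paper, this reduces in practice to a simple noise-prediction regression. A minimal PyTorch sketch of that simplified objective follows; the `model(x_t, t)` signature and the precomputed `alphas_cumprod` schedule are illustrative assumptions, not code from the survey:

```python
import torch
import torch.nn.functional as F

def ddpm_loss(model, x0, alphas_cumprod):
    """Simplified DDPM objective: corrupt x0 at a random timestep with the
    closed-form forward process, then regress the model output onto the
    injected noise. `model(x_t, t)` is assumed to return a noise estimate
    with the same shape as x0."""
    b = x0.shape[0]
    t = torch.randint(0, alphas_cumprod.shape[0], (b,), device=x0.device)
    a_bar = alphas_cumprod[t].view(b, *([1] * (x0.dim() - 1)))
    noise = torch.randn_like(x0)
    # Forward process: x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * noise
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
    return F.mse_loss(model(x_t, t), noise)
```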
https://paperswithcode.com/paper/the-importance-of-being-lazy-scaling-limits
|
2506.16884
| null | null |
The Importance of Being Lazy: Scaling Limits of Continual Learning
|
Despite recent efforts, neural networks still struggle to learn in non-stationary environments, and our understanding of catastrophic forgetting (CF) is far from complete. In this work, we perform a systematic study on the impact of model scale and the degree of feature learning in continual learning. We reconcile existing contradictory observations on scale in the literature, by differentiating between lazy and rich training regimes through a variable parameterization of the architecture. We show that increasing model width is only beneficial when it reduces the amount of feature learning, yielding more laziness. Using the framework of dynamical mean field theory, we then study the infinite width dynamics of the model in the feature learning regime and characterize CF, extending prior theoretical results limited to the lazy regime. We study the intricate relationship between feature learning, task non-stationarity, and forgetting, finding that high feature learning is only beneficial with highly similar tasks. We identify a transition modulated by task similarity where the model exits an effectively lazy regime with low forgetting to enter a rich regime with significant forgetting. Finally, our findings reveal that neural networks achieve optimal performance at a critical level of feature learning, which depends on task non-stationarity and transfers across model scales. This work provides a unified perspective on the role of scale and feature learning in continual learning.
| null |
https://arxiv.org/abs/2506.16884v1
|
https://arxiv.org/pdf/2506.16884v1.pdf
| null |
[
"Jacopo Graldi",
"Alessandro Breccia",
"Giulia Lanzillotta",
"Thomas Hofmann",
"Lorenzo Noci"
] |
[
"Continual Learning"
] | 2025-06-20T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/self-composing-policies-for-scalable
|
2506.14811
| null | null |
Self-Composing Policies for Scalable Continual Reinforcement Learning
|
This work introduces a growable and modular neural network architecture that naturally avoids catastrophic forgetting and interference in continual reinforcement learning. The structure of each module allows the selective combination of previous policies along with its internal policy, accelerating the learning process on the current task. Unlike previous growing neural network approaches, we show that the number of parameters of the proposed approach grows linearly with respect to the number of tasks, and does not sacrifice plasticity to scale. Experiments conducted in benchmark continuous control and visual problems reveal that the proposed approach achieves greater knowledge transfer and performance than alternative methods.
| null |
https://arxiv.org/abs/2506.14811v1
|
https://arxiv.org/pdf/2506.14811v1.pdf
| null |
[
"Mikel Malagón",
"Josu Ceberio",
"Jose A. Lozano"
] |
[
"continuous-control",
"Continuous Control",
"reinforcement-learning",
"Reinforcement Learning",
"Transfer Learning"
] | 2025-06-04T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/lettingo-explore-user-profile-generation-for
|
2506.18309
| null | null |
LettinGo: Explore User Profile Generation for Recommendation System
|
User profiling is pivotal for recommendation systems, as it transforms raw user interaction data into concise and structured representations that drive personalized recommendations. While traditional embedding-based profiles lack interpretability and adaptability, recent advances with large language models (LLMs) enable text-based profiles that are semantically richer and more transparent. However, existing methods often adhere to fixed formats that limit their ability to capture the full diversity of user behaviors. In this paper, we introduce LettinGo, a novel framework for generating diverse and adaptive user profiles. By leveraging the expressive power of LLMs and incorporating direct feedback from downstream recommendation tasks, our approach avoids the rigid constraints imposed by supervised fine-tuning (SFT). Instead, we employ Direct Preference Optimization (DPO) to align the profile generator with task-specific performance, ensuring that the profiles remain adaptive and effective. LettinGo operates in three stages: (1) exploring diverse user profiles via multiple LLMs, (2) evaluating profile quality based on their impact in recommendation systems, and (3) aligning the profile generation through pairwise preference data derived from task performance. Experimental results demonstrate that our framework significantly enhances recommendation accuracy, flexibility, and contextual awareness. This work establishes profile generation as a key innovation for next-generation recommendation systems.
| null |
https://arxiv.org/abs/2506.18309v1
|
https://arxiv.org/pdf/2506.18309v1.pdf
| null |
[
"Lu Wang",
"Di Zhang",
"Fangkai Yang",
"Pu Zhao",
"Jianfeng Liu",
"Yuefeng Zhan",
"Hao Sun",
"QIngwei Lin",
"Weiwei Deng",
"Dongmei Zhang",
"Feng Sun",
"Qi Zhang"
] |
[
"Profile Generation",
"Recommendation Systems"
] | 2025-06-23T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "In the ALIGN method, visual and language representations are jointly trained from noisy image alt-text data. The image and text encoders are learned via contrastive loss (formulated as normalized softmax) that pushes the embeddings of the matched image-text pair together and pushing those of non-matched image-text pair apart. The model learns to align visual and language representations of the image and text pairs using the contrastive loss. The representations can be used for vision-only or vision-language task transfer. Without any fine-tuning, ALIGN powers zero-shot visual classification and cross-modal search including image-to-text search, text-to image search and even search with joint image+text queries.",
"full_name": "ALIGN",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "Involves models that adapt pre-training to the field of Vision-and-Language (V-L) learning and improve the performance on downstream tasks like visual question answering and visual captioning.\r\n\r\nAccording to [Du et al. (2022)](https://arxiv.org/pdf/2202.10936.pdf), information coming from the different modalities can be encoded in three ways: fusion encoder, dual encoder, and a combination of both. \r\n\r\nReferences:\r\n\r\n- [A Survey of Vision-Language Pre-Trained Models](https://arxiv.org/pdf/2202.10936.pdf)\r\n- [Vision Language models: towards multi-modal deep learning](https://theaisummer.com/vision-language-models/)",
"name": "Vision and Language Pre-Trained Models",
"parent": null
},
"name": "ALIGN",
"source_title": "Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision",
"source_url": "https://arxiv.org/abs/2102.05918v2"
}
] |
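The ALIGN entry above describes a contrastive loss formulated as a normalized softmax that pulls matched image-text embeddings together and pushes non-matched pairs apart. A minimal sketch of that symmetric in-batch objective; the function name and temperature value are illustrative assumptions rather than ALIGN's released implementation:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """In-batch normalized-softmax contrastive loss: pairs sharing a batch
    index are positives, every other pairing in the batch is a negative."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature                # (B, B) similarities
    targets = torch.arange(img.shape[0], device=img.device)
    # Symmetric cross-entropy over image-to-text and text-to-image directions
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```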
https://paperswithcode.com/paper/corona-a-coarse-to-fine-framework-for-graph
|
2506.17281
| null | null |
CORONA: A Coarse-to-Fine Framework for Graph-based Recommendation with Large Language Models
|
Recommender systems (RSs) are designed to retrieve candidate items a user might be interested in from a large pool. A common approach is using graph neural networks (GNNs) to capture high-order interaction relationships. As large language models (LLMs) have shown strong capabilities across domains, researchers are exploring their use to enhance recommendation. However, prior work limits LLMs to re-ranking results or dataset augmentation, failing to utilize their power during candidate filtering - which may lead to suboptimal performance. Instead, we propose to leverage LLMs' reasoning abilities during the candidate filtering process, and introduce Chain Of Retrieval ON grAphs (CORONA) to progressively narrow down the range of candidate items on interaction graphs with the help of LLMs: (1) First, the LLM performs preference reasoning based on user profiles, with the response serving as a query to extract relevant users and items from the interaction graph as preference-assisted retrieval; (2) Then, using the information retrieved in the previous step along with the purchase history of the target user, the LLM conducts intent reasoning to help refine an even smaller interaction subgraph as intent-assisted retrieval; (3) Finally, we employ a GNN to capture high-order collaborative filtering information from the extracted subgraph, performing GNN-enhanced retrieval to generate the final recommendation results. The proposed framework leverages the reasoning capabilities of LLMs during the retrieval process, while seamlessly integrating GNNs to enhance overall recommendation performance. Extensive experiments on various datasets and settings demonstrate that our proposed CORONA achieves state-of-the-art performance with an 18.6% relative improvement in recall and an 18.4% relative improvement in NDCG on average.
| null |
https://arxiv.org/abs/2506.17281v1
|
https://arxiv.org/pdf/2506.17281v1.pdf
| null |
[
"Junze Chen",
"Xinjie Yang",
"Cheng Yang",
"Junfei Bao",
"Zeyuan Guo",
"Yawen Li",
"Chuan Shi"
] |
[
"Collaborative Filtering",
"Recommendation Systems",
"Re-Ranking",
"Retrieval"
] | 2025-06-14T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/discosg-towards-discourse-level-text-scene
|
2506.15583
| null | null |
DiscoSG: Towards Discourse-Level Text Scene Graph Parsing through Iterative Graph Refinement
|
Vision-Language Models (VLMs) now generate discourse-level, multi-sentence visual descriptions, challenging text scene graph parsers originally designed for single-sentence caption-to-graph mapping. Current approaches typically merge sentence-level parsing outputs for discourse input, often missing phenomena like cross-sentence coreference, resulting in fragmented graphs and degraded downstream VLM task performance. To address this, we introduce a new task, Discourse-level text Scene Graph parsing (DiscoSG), supported by our dataset DiscoSG-DS, which comprises 400 expert-annotated and 8,430 synthesised multi-sentence caption-graph pairs for images. Each caption averages 9 sentences, and each graph contains at least 3 times more triples than those in existing datasets. While fine-tuning large PLMs (i.e., GPT-4) on DiscoSG-DS improves SPICE by approximately 48% over the best sentence-merging baseline, high inference cost and restrictive licensing hinder its open-source use, and smaller fine-tuned PLMs struggle with complex graphs. We propose DiscoSG-Refiner, which drafts a base graph using one small PLM, then employs a second PLM to iteratively propose graph edits, reducing full-graph generation overhead. Using two Flan-T5-Base models, DiscoSG-Refiner still improves SPICE by approximately 30% over the best baseline while achieving 86 times faster inference than GPT-4. It also consistently improves downstream VLM tasks like discourse-level caption evaluation and hallucination detection. Code and data are available at: https://github.com/ShaoqLin/DiscoSG
|
To address this, we introduce a new task, Discourse-level text Scene Graph parsing (DiscoSG), supported by our dataset DiscoSG-DS, which comprises 400 expert-annotated and 8,430 synthesised multi-sentence caption-graph pairs for images.
|
https://arxiv.org/abs/2506.15583v1
|
https://arxiv.org/pdf/2506.15583v1.pdf
| null |
[
"Shaoqing Lin",
"Chong Teng",
"Fei Li",
"Donghong Ji",
"Lizhen Qu",
"Zhuang Li"
] |
[
"Graph Generation",
"Hallucination",
"Sentence"
] | 2025-06-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "",
"description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)",
"full_name": "Label Smoothing",
"introduced_year": 1985,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Label Smoothing",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.",
"full_name": "Byte Pair Encoding",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "BPE",
"source_title": "Neural Machine Translation of Rare Words with Subword Units",
"source_url": "http://arxiv.org/abs/1508.07909v5"
},
{
"code_snippet_url": "",
"description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)",
"full_name": "Absolute Position Encodings",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Position Embeddings",
"parent": null
},
"name": "Absolute Position Encodings",
"source_title": "Attention Is All You Need",
"source_url": "https://arxiv.org/abs/1706.03762v7"
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201",
"description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).",
"full_name": "Transformer",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Transformer",
"source_title": "Attention Is All You Need",
"source_url": "https://arxiv.org/abs/1706.03762v7"
},
{
"code_snippet_url": "",
"description": "**GPT-4** is a transformer based model pre-trained to predict the next token in a document.",
"full_name": "GPT-4",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Language Models** are models for predicting the next word or character in a document. Below you can find a continuously updating list of language models.\r\n\r\n",
"name": "Language Models",
"parent": null
},
"name": "GPT-4",
"source_title": "GPT-4 Technical Report",
"source_url": "https://arxiv.org/abs/2303.08774v5"
},
{
"code_snippet_url": null,
"description": "",
"full_name": "Balanced Selection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Active Learning",
"parent": null
},
"name": "BASE",
"source_title": "Active Learning at the ImageNet Scale",
"source_url": "https://arxiv.org/abs/2111.12880v1"
}
] |
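Among the method entries above, the Label Smoothing description gives concrete targets: eps/k for off-classes and 1 - (k-1)/k * eps for the true class. A small sketch that constructs exactly those targets (function name and defaults are illustrative):

```python
import torch

def smooth_targets(labels, num_classes, eps=0.1):
    """Label-smoothing targets matching the description above: every class
    receives eps/k, and the true class receives 1 - (k-1)/k * eps, i.e. the
    uniform mixture (1 - eps) * one_hot + eps / k."""
    k = num_classes
    targets = torch.full((labels.shape[0], k), eps / k)
    targets.scatter_(1, labels.unsqueeze(1), 1 - (k - 1) / k * eps)
    return targets

# smooth_targets(torch.tensor([2]), num_classes=4)
# -> tensor([[0.0250, 0.0250, 0.9250, 0.0250]])
```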
https://paperswithcode.com/paper/sam2-sgp-enhancing-sam2-for-medical-image
|
2506.19658
| null | null |
SAM2-SGP: Enhancing SAM2 for Medical Image Segmentation via Support-Set Guided Prompting
|
Although new vision foundation models such as Segment Anything Model 2 (SAM2) have significantly enhanced zero-shot image segmentation capabilities, reliance on human-provided prompts poses significant challenges in adapting SAM2 to medical image segmentation tasks. Moreover, SAM2's performance in medical image segmentation was limited by the domain shift issue, since it was originally trained on natural images and videos. To address these challenges, we proposed SAM2 with support-set guided prompting (SAM2-SGP), a framework that eliminated the need for manual prompts. The proposed model leveraged the memory mechanism of SAM2 to generate pseudo-masks using image-mask pairs from a support set via a Pseudo-mask Generation (PMG) module. We further introduced a novel Pseudo-mask Attention (PMA) module, which used these pseudo-masks to automatically generate bounding boxes and enhance localized feature extraction by guiding attention to relevant areas. Furthermore, a low-rank adaptation (LoRA) strategy was adopted to mitigate the domain shift issue. The proposed framework was evaluated on both 2D and 3D datasets across multiple medical imaging modalities, including fundus photography, X-ray, computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasound. The results demonstrated a significant performance improvement over state-of-the-art models, such as nnUNet and SwinUNet, as well as foundation models, such as SAM2 and MedSAM2, underscoring the effectiveness of the proposed approach. Our code is publicly available at https://github.com/astlian9/SAM_Support.
|
Although new vision foundation models such as Segment Anything Model 2 (SAM2) have significantly enhanced zero-shot image segmentation capabilities, reliance on human-provided prompts poses significant challenges in adapting SAM2 to medical image segmentation tasks.
|
https://arxiv.org/abs/2506.19658v1
|
https://arxiv.org/pdf/2506.19658v1.pdf
| null |
[
"Yang Xing",
"Jiong Wu",
"Yuheng Bu",
"Kuang Gong"
] |
[
"Computed Tomography (CT)",
"Image Segmentation",
"Medical Image Segmentation",
"Semantic Segmentation"
] | 2025-06-24T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Dynamic Sparse Training method where weight mask is updated randomly periodically",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
}
] |
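The SET entry above says only that the weight mask is periodically updated; the usual SET recipe prunes the smallest-magnitude active weights and regrows the same number of connections at random inactive positions, keeping sparsity constant. A toy sketch under that assumption, for a 2-D weight matrix with a 0/1 mask:

```python
import torch

def set_mask_update(weight, mask, prune_frac=0.3):
    """One SET-style prune-and-regrow step on a 2-D weight/mask pair."""
    active = mask.nonzero(as_tuple=False)            # indices of live weights
    inactive = (mask == 0).nonzero(as_tuple=False)   # regrowth candidates
    n = int(prune_frac * active.shape[0])
    # Prune: deactivate the n smallest-magnitude live weights
    mags = weight[mask.bool()].abs()                 # row-major, same order as `active`
    drop = active[mags.argsort()[:n]]
    mask[drop[:, 0], drop[:, 1]] = 0
    # Regrow: activate n random previously-inactive positions
    grow = inactive[torch.randperm(inactive.shape[0])[:n]]
    mask[grow[:, 0], grow[:, 1]] = 1
    return mask
```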
https://paperswithcode.com/paper/open-vocabulary-camouflaged-object-1
|
2506.19300
| null | null |
Open-Vocabulary Camouflaged Object Segmentation with Cascaded Vision Language Models
|
Open-Vocabulary Camouflaged Object Segmentation (OVCOS) seeks to segment and classify camouflaged objects from arbitrary categories, presenting unique challenges due to visual ambiguity and unseen categories. Recent approaches typically adopt a two-stage paradigm: first segmenting objects, then classifying the segmented regions using Vision Language Models (VLMs). However, these methods (1) suffer from a domain gap caused by the mismatch between VLMs' full-image training and cropped-region inference, and (2) depend on generic segmentation models optimized for well-delineated objects, making them less effective for camouflaged objects. Without explicit guidance, generic segmentation models often overlook subtle boundaries, leading to imprecise segmentation. In this paper, we introduce a novel VLM-guided cascaded framework to address these issues in OVCOS. For segmentation, we leverage the Segment Anything Model (SAM), guided by the VLM. Our framework uses VLM-derived features as explicit prompts to SAM, effectively directing attention to camouflaged regions and significantly improving localization accuracy. For classification, we avoid the domain gap introduced by hard cropping. Instead, we treat the segmentation output as a soft spatial prior via the alpha channel, which retains the full image context while providing precise spatial guidance, leading to more accurate and context-aware classification of camouflaged objects. The same VLM is shared across both segmentation and classification to ensure efficiency and semantic consistency. Extensive experiments on both OVCOS and conventional camouflaged object segmentation benchmarks demonstrate the clear superiority of our method, highlighting the effectiveness of leveraging rich VLM semantics for both segmentation and classification of camouflaged objects.
|
Open-Vocabulary Camouflaged Object Segmentation (OVCOS) seeks to segment and classify camouflaged objects from arbitrary categories, presenting unique challenges due to visual ambiguity and unseen categories. Recent approaches typically adopt a two-stage paradigm: first segmenting objects, then classifying the segmented regions using Vision Language Models (VLMs). However, these methods (1) suffer from a domain gap caused by the mismatch between VLMs' full-image training and cropped-region inference, and (2) depend on generic segmentation models optimized for well-delineated objects, making them less effective for camouflaged objects. Without explicit guidance, generic segmentation models often overlook subtle boundaries, leading to imprecise segmentation. In this paper, we introduce a novel VLM-guided cascaded framework to address these issues in OVCOS. For segmentation, we leverage the Segment Anything Model (SAM), guided by the VLM. Our framework uses VLM-derived features as explicit prompts to SAM, effectively directing attention to camouflaged regions and significantly improving localization accuracy. For classification, we avoid the domain gap introduced by hard cropping. Instead, we treat the segmentation output as a soft spatial prior via the alpha channel, which retains the full image context while providing precise spatial guidance, leading to more accurate and context-aware classification of camouflaged objects. The same VLM is shared across both segmentation and classification to ensure efficiency and semantic consistency. Extensive experiments on both OVCOS and conventional camouflaged object segmentation benchmarks demonstrate the clear superiority of our method, highlighting the effectiveness of leveraging rich VLM semantics for both segmentation and classification of camouflaged objects.
|
https://arxiv.org/abs/2506.19300v1
|
https://arxiv.org/pdf/2506.19300v1.pdf
| null |
[
"Kai Zhao",
"Wubang Yuan",
"Zheng Wang",
"Guanyi Li",
"Xiaoqiang Zhu",
"Deng-Ping Fan",
"Dan Zeng"
] |
[
"Camouflaged Object Segmentation",
"Segmentation",
"Semantic Segmentation"
] | 2025-06-24T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Segment Anything Model",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Segmentation Models",
"parent": null
},
"name": "SAM",
"source_title": "Segment Anything",
"source_url": "https://arxiv.org/abs/2304.02643v1"
},
{
"code_snippet_url": null,
"description": "Please enter a description about the method here",
"full_name": "ADaptive gradient method with the OPTimal convergence rate",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.",
"name": "Stochastic Optimization",
"parent": "Optimization"
},
"name": "ADOPT",
"source_title": "ADOPT: Modified Adam Can Converge with Any $β_2$ with the Optimal Rate",
"source_url": "https://arxiv.org/abs/2411.02853v3"
}
] |
https://paperswithcode.com/paper/scene-r1-video-grounded-large-language-models
|
2506.17545
| null | null |
Scene-R1: Video-Grounded Large Language Models for 3D Scene Reasoning without 3D Annotations
|
Currently, utilizing large language models to understand the 3D world is becoming popular. Yet existing 3D-aware LLMs act as black boxes: they output bounding boxes or textual answers without revealing how those decisions are made, and they still rely on pre-trained 3D detectors to supply object proposals. We introduce Scene-R1, a video-grounded framework that learns to reason about 3D scenes without any point-wise 3D instance supervision by pairing reinforcement-learning-driven reasoning with a two-stage grounding pipeline. In the temporal grounding stage, we explicitly reason about the video and select the video snippets most relevant to an open-ended query. In the subsequent image grounding stage, we analyze the image and predict the 2D bounding box. After that, we track the object using SAM2 to produce pixel-accurate masks in RGB frames, and project them back into 3D, thereby eliminating the need for 3D detector-based proposals while capturing fine geometry and material cues. Scene-R1 can also adapt to the 3D visual question answering task to answer free-form questions directly from video. Our training pipeline only needs task-level 2D boxes or textual labels without dense 3D point-wise labels. Scene-R1 surpasses existing open-vocabulary baselines on multiple datasets, while delivering transparent, step-by-step rationales. These results show that reinforcement-learning-based reasoning combined with RGB-D video alone offers a practical, annotation-efficient route to trustworthy 3D scene understanding.
| null |
https://arxiv.org/abs/2506.17545v1
|
https://arxiv.org/pdf/2506.17545v1.pdf
| null |
[
"Zhihao Yuan",
"Shuyi Jiang",
"Chun-Mei Feng",
"Yaolun Zhang",
"Shuguang Cui",
"Zhen Li",
"Na Zhao"
] |
[
"Question Answering",
"Scene Understanding",
"Visual Question Answering"
] | 2025-06-21T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/picosam2-low-latency-segmentation-in-sensor
|
2506.18807
| null | null |
PicoSAM2: Low-Latency Segmentation In-Sensor for Edge Vision Applications
|
Real-time, on-device segmentation is critical for latency-sensitive and privacy-aware applications like smart glasses and IoT devices. We introduce PicoSAM2, a lightweight (1.3M parameters, 336M MACs) promptable segmentation model optimized for edge and in-sensor execution, including the Sony IMX500. It builds on a depthwise separable U-Net, with knowledge distillation and fixed-point prompt encoding to learn from the Segment Anything Model 2 (SAM2). On COCO and LVIS, it achieves 51.9% and 44.9% mIoU, respectively. The quantized model (1.22 MB) runs at 14.3 ms on the IMX500, achieving 86 MACs/cycle, making it the only model meeting both memory and compute constraints for in-sensor deployment. Distillation boosts LVIS performance by +3.5% mIoU and +5.1% mAP. These results demonstrate that efficient, promptable segmentation is feasible directly on-camera, enabling privacy-preserving vision without cloud or host processing.
| null |
https://arxiv.org/abs/2506.18807v2
|
https://arxiv.org/pdf/2506.18807v2.pdf
| null |
[
"Pietro Bonazzi",
"Nicola Farronato",
"Stefan Zihlmann",
"Haotong Qin",
"Michele Magno"
] |
[
"Knowledge Distillation",
"Privacy Preserving",
"Segmentation"
] | 2025-06-23T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://research.google/blog/auto-generated-summaries-in-google-docs/",
"description": "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.\r\nSource: [Distilling the Knowledge in a Neural Network](https://arxiv.org/abs/1503.02531)",
"full_name": "Knowledge Distillation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Knowledge Distillation",
"parent": null
},
"name": "Knowledge Distillation",
"source_title": "Distilling the Knowledge in a Neural Network",
"source_url": "http://arxiv.org/abs/1503.02531v1"
}
] |
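The Knowledge Distillation entry above explains compressing the knowledge of a large teacher (or ensemble) into a small student; the standard Hinton-style loss combines a temperature-softened KL term with the usual hard-label cross-entropy. A minimal sketch; `T` and `alpha` are illustrative hyperparameters, not values from PicoSAM2:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Soft-target KL (scaled by T^2 to keep gradient magnitudes stable)
    mixed with hard-label cross-entropy."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```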
https://paperswithcode.com/paper/medseg-r-medical-image-segmentation-with
|
2506.18669
| null | null |
MedSeg-R: Medical Image Segmentation with Clinical Reasoning
|
Medical image segmentation is challenging due to overlapping anatomies with ambiguous boundaries and a severe imbalance between the foreground and background classes, which particularly affects the delineation of small lesions. Existing methods, including encoder-decoder networks and prompt-driven variants of the Segment Anything Model (SAM), rely heavily on local cues or user prompts and lack integrated semantic priors, thus failing to generalize well to low-contrast or overlapping targets. To address these issues, we propose MedSeg-R, a lightweight, dual-stage framework inspired by clinical reasoning. Its cognitive stage interprets the medical report into structured semantic priors (location, texture, shape), which are fused via a transformer block. In the perceptual stage, these priors modulate the SAM backbone: spatial attention highlights likely lesion regions, dynamic convolution adapts feature filters to expected textures, and deformable sampling refines spatial support. By embedding this fine-grained guidance early, MedSeg-R disentangles inter-class confusion and amplifies minority-class cues, greatly improving sensitivity to small lesions. In challenging benchmarks, MedSeg-R produces large Dice improvements in overlapping and ambiguous structures, demonstrating plug-and-play compatibility with SAM-based systems.
| null |
https://arxiv.org/abs/2506.18669v1
|
https://arxiv.org/pdf/2506.18669v1.pdf
| null |
[
"Hao Shao",
"Qibin Hou"
] |
[
"Decoder",
"Image Segmentation",
"Medical Image Segmentation",
"Semantic Segmentation"
] | 2025-06-23T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "",
"full_name": "Segment Anything Model",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Segmentation Models",
"parent": null
},
"name": "SAM",
"source_title": "Segment Anything",
"source_url": "https://arxiv.org/abs/2304.02643v1"
}
] |
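The Convolution entry above describes a kernel sliding over the input, multiplying element-wise and summing; `torch.nn.functional.conv2d` performs exactly that operation. A tiny self-contained example with an illustrative 3x3 Laplacian-style kernel:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 8, 8)                   # (batch, channels, height, width)
kernel = torch.tensor([[[[ 0., -1.,  0.],
                         [-1.,  4., -1.],
                         [ 0., -1.,  0.]]]])  # (out_ch, in_ch, kH, kW)
y = F.conv2d(x, kernel, padding=1)            # padding=1 preserves spatial size
print(y.shape)                                # torch.Size([1, 1, 8, 8])
```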
https://paperswithcode.com/paper/segment-anything-for-satellite-imagery-a
|
2506.16318
| null | null |
Segment Anything for Satellite Imagery: A Strong Baseline and a Regional Dataset for Automatic Field Delineation
|
Accurate mapping of agricultural field boundaries is essential for the efficient operation of agriculture. Automatic extraction from high-resolution satellite imagery, supported by computer vision techniques, can avoid costly ground surveys. In this paper, we present a pipeline for field delineation based on the Segment Anything Model (SAM), introducing a fine-tuning strategy to adapt SAM to this task. In addition to using published datasets, we describe a method for acquiring a complementary regional dataset that covers areas beyond current sources. Extensive experiments assess segmentation accuracy and evaluate the generalization capabilities. Our approach provides a robust baseline for automated field delineation. The new regional dataset, known as ERAS, is now publicly available.
| null |
https://arxiv.org/abs/2506.16318v2
|
https://arxiv.org/pdf/2506.16318v2.pdf
| null |
[
"Carmelo Scribano",
"Elena Govi",
"Paolo Bertellini",
"Simone Parisi",
"Giorgia Franchini",
"Marko Bertogna"
] |
[] | 2025-06-19T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Segment Anything Model",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Segmentation Models",
"parent": null
},
"name": "SAM",
"source_title": "Segment Anything",
"source_url": "https://arxiv.org/abs/2304.02643v1"
}
] |
https://paperswithcode.com/paper/baltimore-atlas-freqweaver-adapter-for-semi
|
2506.15565
| null | null |
Baltimore Atlas: FreqWeaver Adapter for Semi-supervised Ultra-high Spatial Resolution Land Cover Classification
|
Ultra-high Spatial Resolution Land Cover Classification is essential for fine-grained land cover analysis, yet it remains challenging due to the high cost of pixel-level annotations, significant scale variation, and the limited adaptability of large-scale vision models. Existing methods typically focus on 1-meter spatial resolution imagery and rely heavily on annotated data, whereas practical applications often require processing higher-resolution imagery under weak supervision. To address this, we propose a parameter-efficient semi-supervised segmentation framework for 0.3 m spatial resolution imagery, which leverages the knowledge of SAM2 and introduces a remote sensing-specific FreqWeaver Adapter to enhance fine-grained detail modeling while maintaining a lightweight design at only 5.96% of the total model parameters. By effectively leveraging unlabeled data and maintaining minimal parameter overhead, the proposed method delivers robust segmentation results with superior structural consistency, achieving a 1.78% improvement over existing parameter-efficient tuning strategies and a 3.44% gain compared to state-of-the-art high-resolution remote sensing segmentation approaches.
| null |
https://arxiv.org/abs/2506.15565v1
|
https://arxiv.org/pdf/2506.15565v1.pdf
| null |
[
"Junhao Wu",
"Aboagye-Ntow Stephen",
"Chuyuan Wang",
"Gang Chen",
"Xin Huang"
] |
[
"Land Cover Classification",
"Segmentation"
] | 2025-06-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Adapter",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Adapter",
"source_title": "Trankit: A Light-Weight Transformer-based Toolkit for Multilingual Natural Language Processing",
"source_url": "https://arxiv.org/abs/2101.03289v5"
},
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
}
] |
https://paperswithcode.com/paper/data-synthesis-with-diverse-styles-for-face-1
|
2504.00430
| null | null |
Data Synthesis with Diverse Styles for Face Recognition via 3DMM-Guided Diffusion
|
Identity-preserving face synthesis aims to generate synthetic face images of virtual subjects that can substitute real-world data for training face recognition models. While prior arts strive to create images with consistent identities and diverse styles, they face a trade-off between them. Identifying their limitation of treating style variation as subject-agnostic and observing that real-world persons actually have distinct, subject-specific styles, this paper introduces MorphFace, a diffusion-based face generator. The generator learns fine-grained facial styles, e.g., shape, pose and expression, from the renderings of a 3D morphable model (3DMM). It also learns identities from an off-the-shelf recognition model. To create virtual faces, the generator is conditioned on novel identities of unlabeled synthetic faces, and novel styles that are statistically sampled from a real-world prior distribution. The sampling especially accounts for both intra-subject variation and subject distinctiveness. A context blending strategy is employed to enhance the generator's responsiveness to identity and style conditions. Extensive experiments show that MorphFace outperforms the best prior arts in face recognition efficacy.
| null |
https://arxiv.org/abs/2504.00430v1
|
https://arxiv.org/pdf/2504.00430v1.pdf
|
CVPR 2025
|
[
"Yuxi Mi",
"Zhizhou Zhong",
"Yuge Huang",
"Qiuyang Yuan",
"Xuan Zhao",
"Jianqing Xu",
"Shouhong Ding",
"Shaoming Wang",
"rizen guo",
"Shuigeng Zhou"
] |
[
"Face Generation",
"Face Recognition"
] | 2025-04-01T00:00:00 |
http://openaccess.thecvf.com//content/CVPR2025/html/Mi_Data_Synthesis_with_Diverse_Styles_for_Face_Recognition_via_3DMM-Guided_CVPR_2025_paper.html
|
http://openaccess.thecvf.com//content/CVPR2025/papers/Mi_Data_Synthesis_with_Diverse_Styles_for_Face_Recognition_via_3DMM-Guided_CVPR_2025_paper.pdf
|
data-synthesis-with-diverse-styles-for-face
| null |
[] |
https://paperswithcode.com/paper/mitigating-confounding-in-speech-based
|
2506.05610
| null | null |
Mitigating Confounding in Speech-Based Dementia Detection through Weight Masking
|
Deep transformer models have been used to detect linguistic anomalies in patient transcripts for early Alzheimer's disease (AD) screening. While pre-trained neural language models (LMs) fine-tuned on AD transcripts perform well, little research has explored the effects of the gender of the speakers represented by these transcripts. This work addresses gender confounding in dementia detection and proposes two methods: the $\textit{Extended Confounding Filter}$ and the $\textit{Dual Filter}$, which isolate and ablate weights associated with gender. We evaluate these methods on dementia datasets with first-person narratives from patients with cognitive impairment and healthy controls. Our results show transformer models tend to overfit to training data distributions. Disrupting gender-related weights results in a deconfounded dementia classifier, with the trade-off of slightly reduced dementia detection performance.
| null |
https://arxiv.org/abs/2506.05610v1
|
https://arxiv.org/pdf/2506.05610v1.pdf
| null |
[
"Zhecheng Sheng",
"Xiruo Ding",
"Brian Hur",
"Changye Li",
"Trevor Cohen",
"Serguei Pakhomov"
] |
[] | 2025-06-05T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/delta-knn-improving-demonstration-selection
|
2506.03476
| null | null |
Delta-KNN: Improving Demonstration Selection in In-Context Learning for Alzheimer's Disease Detection
|
Alzheimer's Disease (AD) is a progressive neurodegenerative disorder that leads to dementia, and early intervention can greatly benefit from analyzing linguistic abnormalities. In this work, we explore the potential of Large Language Models (LLMs) as health assistants for AD diagnosis from patient-generated text using in-context learning (ICL), where tasks are defined through a few input-output examples. Empirical results reveal that conventional ICL methods, such as similarity-based selection, perform poorly for AD diagnosis, likely due to the inherent complexity of this task. To address this, we introduce Delta-KNN, a novel demonstration selection strategy that enhances ICL performance. Our method leverages a delta score to assess the relative gains of each training example, coupled with a KNN-based retriever that dynamically selects optimal "representatives" for a given input. Experiments on two AD detection datasets across three open-source LLMs demonstrate that Delta-KNN consistently outperforms existing ICL baselines. Notably, when using the Llama-3.1 model, our approach achieves new state-of-the-art results, surpassing even supervised classifiers.
| null |
https://arxiv.org/abs/2506.03476v1
|
https://arxiv.org/pdf/2506.03476v1.pdf
| null |
[
"Chuyuan Li",
"Raymond Li",
"Thalia S. Field",
"Giuseppe Carenini"
] |
[
"Alzheimer's Disease Detection",
"In-Context Learning"
] | 2025-06-04T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/alzheimer-s-dementia-detection-using
|
2506.09315
| null | null |
Alzheimer's Dementia Detection Using Perplexity from Paired Large Language Models
|
Alzheimer's dementia (AD) is a neurodegenerative disorder with cognitive decline that commonly impacts language ability. This work extends the paired perplexity approach to detecting AD by using a recent large language model (LLM), the instruction-following version of Mistral-7B. We improve accuracy by an average of 3.33% over the best current paired perplexity method and by 6.35% over the top-ranked method from the ADReSS 2020 challenge benchmark. Our further analysis demonstrates that the proposed approach can effectively detect AD with a clear and interpretable decision boundary in contrast to other methods that suffer from opaque decision-making processes. Finally, by prompting the fine-tuned LLMs and comparing the model-generated responses to human responses, we illustrate that the LLMs have learned the special language patterns of AD speakers, which opens up possibilities for novel methods of model interpretation and data augmentation.
| null |
https://arxiv.org/abs/2506.09315v1
|
https://arxiv.org/pdf/2506.09315v1.pdf
| null |
[
"Yao Xiao",
"Heidi Christensen",
"Stefan Goetze"
] |
[
"Data Augmentation",
"Decision Making",
"Instruction Following",
"Language Modeling",
"Language Modelling",
"Large Language Model"
] | 2025-06-11T00:00:00 | null | null | null | null |
[] |
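The paired-perplexity approach in the entry above scores a transcript by comparing how probable it looks to two causal language models, one adapted to AD speech and one to control speech. A minimal sketch of that decision rule follows; the paper fine-tunes instruction-following Mistral-7B models, whereas this example substitutes two hypothetical GPT-2 checkpoints to stay small.

```python
# Sketch of the paired-perplexity decision rule (illustrative; stand-in models).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(model, tokenizer, text):
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()  # mean token NLL -> perplexity

tok = AutoTokenizer.from_pretrained("gpt2")
lm_ad = AutoModelForCausalLM.from_pretrained("gpt2")    # stand-in for the AD-tuned LM
lm_ctrl = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in for the control-tuned LM

transcript = "the boy is on the stool reaching for the cookie jar"
# Flag a transcript as AD if the AD-tuned LM finds it more probable
# (lower perplexity) than the control-tuned LM does.
score = perplexity(lm_ctrl, tok, transcript) - perplexity(lm_ad, tok, transcript)
print("AD" if score > 0 else "control", score)
```

With genuinely fine-tuned checkpoints the sign of `score` gives the interpretable decision boundary the abstract refers to.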
https://paperswithcode.com/paper/probing-deep-into-temporal-profile-makes-the
|
2506.12766
| null | null |
Probing Deep into Temporal Profile Makes the Infrared Small Target Detector Much Better
|
Infrared small target (IRST) detection is challenging in simultaneously achieving precise, universal, robust and efficient performance due to extremely dim targets and strong interference. Current learning-based methods attempt to leverage "more" information from both the spatial and the short-term temporal domains, but suffer from unreliable performance under complex conditions while incurring computational redundancy. In this paper, we explore the "more essential" information from a more crucial domain for the detection. Through theoretical analysis, we reveal that the global temporal saliency and correlation information in the temporal profile demonstrate significant superiority in distinguishing target signals from other signals. To investigate whether such superiority is preferentially leveraged by well-trained networks, we built the first prediction attribution tool in this field and verified the importance of the temporal profile information. Inspired by the above conclusions, we remodel the IRST detection task as a one-dimensional signal anomaly detection task, and propose an efficient deep temporal probe network (DeepPro) that only performs calculations in the time dimension for IRST detection. We conducted extensive experiments to fully validate the effectiveness of our method. The experimental results are exciting, as our DeepPro outperforms existing state-of-the-art IRST detection methods on widely-used benchmarks with extremely high efficiency, and achieves a significant improvement on dim targets and in complex scenarios. We provide a new modeling domain, a new insight, a new method, and a new performance, which can promote the development of IRST detection. Codes are available at https://github.com/TinaLRJ/DeepPro.
|
Infrared small target (IRST) detection is challenging in simultaneously achieving precise, universal, robust and efficient performance due to extremely dim targets and strong interference.
|
https://arxiv.org/abs/2506.12766v1
|
https://arxiv.org/pdf/2506.12766v1.pdf
| null |
[
"Ruojing Li",
"Wei An",
"Xinyi Ying",
"Yingqian Wang",
"Yimian Dai",
"Longguang Wang",
"Miao Li",
"Yulan Guo",
"Li Liu"
] |
[
"Anomaly Detection"
] | 2025-06-15T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/hoiverse-a-synthetic-scene-graph-dataset-with
|
2506.19639
| null | null |
HOIverse: A Synthetic Scene Graph Dataset With Human Object Interactions
|
When humans and robotic agents coexist in an environment, scene understanding becomes crucial for the agents to carry out various downstream tasks like navigation and planning. Hence, an agent must be capable of localizing and identifying actions performed by the human. Current research lacks reliable datasets for performing scene understanding within indoor environments where humans are also a part of the scene. Scene Graphs enable us to generate a structured representation of a scene or an image to perform visual scene understanding. To tackle this, we present HOIverse, a synthetic dataset at the intersection of scene graphs and human-object interaction, consisting of accurate and dense relationship ground truths between humans and surrounding objects along with corresponding RGB images, segmentation masks, depth images and human keypoints. We compute parametric relations between various pairs of objects and human-object pairs, resulting in accurate and unambiguous relation definitions. In addition, we benchmark our dataset on state-of-the-art scene graph generation models to predict parametric relations and human-object interactions. Through this dataset, we aim to accelerate research in the field of scene understanding involving people.
| null |
https://arxiv.org/abs/2506.19639v1
|
https://arxiv.org/pdf/2506.19639v1.pdf
| null |
[
"Mrunmai Vivek Phatak",
"Julian Lorenz",
"Nico Hörmann",
"Jörg Hähner",
"Rainer Lienhart"
] |
[
"Graph Generation",
"Human-Object Interaction Detection",
"Scene Graph Generation",
"Scene Understanding"
] | 2025-06-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/including-semantic-information-via-word
|
2506.18721
| null | null |
Including Semantic Information via Word Embeddings for Skeleton-based Action Recognition
|
Effective human action recognition is widely used for cobots in Industry 4.0 to assist in assembly tasks. However, conventional skeleton-based methods often lose keypoint semantics, limiting their effectiveness in complex interactions. In this work, we introduce a novel approach to skeleton-based action recognition that enriches input representations by leveraging word embeddings to encode semantic information. Our method replaces one-hot encodings with semantic volumes, enabling the model to capture meaningful relationships between joints and objects. Through extensive experiments on multiple assembly datasets, we demonstrate that our approach significantly improves classification performance, and enhances generalization capabilities by simultaneously supporting different skeleton types and object classes. Our findings highlight the potential of incorporating semantic information to enhance skeleton-based action recognition in dynamic and diverse environments.
| null |
https://arxiv.org/abs/2506.18721v1
|
https://arxiv.org/pdf/2506.18721v1.pdf
| null |
[
"Dustin Aganian",
"Erik Franze",
"Markus Eisenbach",
"Horst-Michael Gross"
] |
[
"Action Recognition",
"Skeleton Based Action Recognition",
"Temporal Action Localization",
"Word Embeddings"
] | 2025-06-23T00:00:00 | null | null | null | null |
[] |
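The core idea of the entry above is replacing one-hot joint identities with semantic word-embedding vectors. A small sketch of that input construction follows; the embedding table here is random, where in practice one would load pretrained word vectors (e.g. GloVe) for each joint or object name, and all names are assumptions.

```python
# Illustrative sketch: encode joint identity with a word embedding instead of
# a one-hot vector, so semantically related joints get related features.
import numpy as np

JOINTS = ["head", "shoulder", "elbow", "wrist", "hip", "knee"]
EMB_DIM = 50
rng = np.random.default_rng(0)
word_emb = {j: rng.normal(size=EMB_DIM) for j in JOINTS}  # stand-in for GloVe vectors

def joint_features(coords):
    """coords: (n_joints, 3) array of 3D joint positions.
    Returns (n_joints, 3 + EMB_DIM): position concatenated with the joint's
    word embedding in place of a one-hot identity encoding."""
    sem = np.stack([word_emb[j] for j in JOINTS])
    return np.concatenate([coords, sem], axis=1)

print(joint_features(rng.normal(size=(len(JOINTS), 3))).shape)  # (6, 53)
```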
https://paperswithcode.com/paper/sequential-keypoint-density-estimator-an
|
2506.18368
| null | null |
Sequential keypoint density estimator: an overlooked baseline of skeleton-based video anomaly detection
|
Detecting anomalous human behaviour is an important visual task in safety-critical applications such as healthcare monitoring, workplace safety, or public surveillance. In these contexts, abnormalities are often reflected with unusual human poses. Thus, we propose SeeKer, a method for detecting anomalies in sequences of human skeletons. Our method formulates the skeleton sequence density through autoregressive factorization at the keypoint level. The corresponding conditional distributions represent probable keypoint locations given prior skeletal motion. We formulate the joint distribution of the considered skeleton as causal prediction of conditional Gaussians across its constituent keypoints. A skeleton is flagged as anomalous if its keypoint locations surprise our model (i.e. receive a low density). In practice, our anomaly score is a weighted sum of per-keypoint log-conditionals, where the weights account for the confidence of the underlying keypoint detector. Despite its conceptual simplicity, SeeKer surpasses all previous methods on the UBnormal and MSAD-HR datasets while delivering competitive performance on the ShanghaiTech dataset.
|
In practice, our anomaly score is a weighted sum of per-keypoint log-conditionals, where the weights account for the confidence of the underlying keypoint detector.
|
https://arxiv.org/abs/2506.18368v1
|
https://arxiv.org/pdf/2506.18368v1.pdf
| null |
[
"Anja Delić",
"Matej Grcić",
"Siniša Šegvić"
] |
[
"Anomaly Detection",
"Video Anomaly Detection"
] | 2025-06-23T00:00:00 | null | null | null | null |
[] |
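The SeeKer abstract above spells out its scoring rule: a skeleton's anomaly score is a confidence-weighted sum of per-keypoint log-conditionals under predicted Gaussians. A direct sketch of that computation follows; the Gaussian parameters here are dummies, whereas a real system would predict them autoregressively from prior skeletal motion.

```python
# Sketch of a SeeKer-style anomaly score: weighted sum of per-keypoint
# log-densities under diagonal Gaussians, weights from detector confidence.
import numpy as np

def skeleton_anomaly_score(keypoints, means, variances, confidences):
    """keypoints, means, variances: (K, 2) arrays; confidences: (K,).
    Returns the negated weighted log-density (higher = more anomalous)."""
    log_cond = -0.5 * (
        np.log(2 * np.pi * variances) + (keypoints - means) ** 2 / variances
    ).sum(axis=1)                                   # per-keypoint log N(x; mu, sigma^2)
    return -float(np.sum(confidences * log_cond))   # "surprise" under the model

rng = np.random.default_rng(0)
K = 17
mu, var, conf = rng.normal(size=(K, 2)), np.ones((K, 2)), np.ones(K)
print(skeleton_anomaly_score(mu + 0.1, mu, var, conf))  # near-typical pose: low score
print(skeleton_anomaly_score(mu + 3.0, mu, var, conf))  # surprising pose: high score
```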
https://paperswithcode.com/paper/fast-neural-inverse-kinematics-on-human-body
|
2506.17996
| null | null |
Fast Neural Inverse Kinematics on Human Body Motions
|
Markerless motion capture enables the tracking of human motion without requiring physical markers or suits, offering increased flexibility and reduced costs compared to traditional systems. However, these advantages often come at the expense of higher computational demands and slower inference, limiting their applicability in real-time scenarios. In this technical report, we present a fast and reliable neural inverse kinematics framework designed for real-time capture of human body motions from 3D keypoints. We describe the network architecture, training methodology, and inference procedure in detail. Our framework is evaluated both qualitatively and quantitatively, and we support key design decisions through ablation studies.
| null |
https://arxiv.org/abs/2506.17996v1
|
https://arxiv.org/pdf/2506.17996v1.pdf
| null |
[
"David Tolpin",
"Sefy Kagarlitsky"
] |
[
"Markerless Motion Capture"
] | 2025-06-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/an-entropy-optimal-path-to-humble-ai
|
2506.17940
| null | null |
An entropy-optimal path to humble AI
|
Progress of AI has led to the creation of very successful, but by no means humble models and tools, especially regarding (i) the huge and further exploding costs and resources they demand, and (ii) the over-confidence of these tools with the answers they provide. Here we introduce a novel mathematical framework for a non-equilibrium entropy-optimizing reformulation of Boltzmann machines based on the exact law of total probability. It results in a highly-performant, but much cheaper, gradient-descent-free learning framework with mathematically-justified existence and uniqueness criteria, and answer confidence/reliability measures. Comparisons to state-of-the-art AI tools in terms of performance, cost and the model descriptor lengths on a set of synthetic problems with varying complexity reveal that the proposed method results in more performant and slim models, with the descriptor lengths being very close to the intrinsic complexity scaling bounds for the underlying problems. Applying this framework to historical climate data results in models with systematically higher prediction skills for the onsets of La Niña and El Niño climate phenomena, requiring just a few years of climate data for training, a small fraction of what is necessary for contemporary climate prediction tools.
| null |
https://arxiv.org/abs/2506.17940v1
|
https://arxiv.org/pdf/2506.17940v1.pdf
| null |
[
"Davide Bassetti",
"Lukáš Pospíšil",
"Michael Groom",
"Terence J. O'Kane",
"Illia Horenko"
] |
[] | 2025-06-22T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Dynamic Sparse Training method where weight mask is updated randomly periodically",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
}
] |
https://paperswithcode.com/paper/fetuses-made-simple-modeling-and-tracking-of
|
2506.17858
| null | null |
Fetuses Made Simple: Modeling and Tracking of Fetal Shape and Pose
|
Analyzing fetal body motion and shape is paramount in prenatal diagnostics and monitoring. Existing methods for fetal MRI analysis mainly rely on anatomical keypoints or volumetric body segmentations. Keypoints simplify body structure to facilitate motion analysis, but may ignore important details of full-body shape. Body segmentations capture complete shape information but complicate temporal analysis due to large non-local fetal movements. To address these limitations, we construct a 3D articulated statistical fetal body model based on the Skinned Multi-Person Linear Model (SMPL). Our algorithm iteratively estimates body pose in the image space and body shape in the canonical pose space. This approach improves robustness to MRI motion artifacts and intensity distortions, and reduces the impact of incomplete surface observations due to challenging fetal poses. We train our model on segmentations and keypoints derived from $19,816$ MRI volumes across $53$ subjects. Our model captures body shape and motion across time series and provides intuitive visualization. Furthermore, it enables automated anthropometric measurements traditionally difficult to obtain from segmentations and keypoints. When tested on unseen fetal body shapes, our method yields a surface alignment error of $3.2$ mm for $3$ mm MRI voxel size. To our knowledge, this represents the first 3D articulated statistical fetal body model, paving the way for enhanced fetal motion and shape analysis in prenatal diagnostics. The code is available at https://github.com/MedicalVisionGroup/fetal-smpl .
|
We train our model on segmentations and keypoints derived from $19,816$ MRI volumes across $53$ subjects.
|
https://arxiv.org/abs/2506.17858v1
|
https://arxiv.org/pdf/2506.17858v1.pdf
| null |
[
"Yingcheng Liu",
"Peiqi Wang",
"Sebastian Diaz",
"Esra Abaci Turk",
"Benjamin Billot",
"Patricia Ellen Grant",
"Polina Golland"
] |
[] | 2025-06-21T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/how-far-can-off-the-shelf-multimodal-large
|
2506.16450
| null | null |
How Far Can Off-the-Shelf Multimodal Large Language Models Go in Online Episodic Memory Question Answering?
|
We investigate whether off-the-shelf Multimodal Large Language Models (MLLMs) can tackle Online Episodic-Memory Video Question Answering (OEM-VQA) without additional training. Our pipeline converts a streaming egocentric video into a lightweight textual memory, only a few kilobytes per minute, via an MLLM descriptor module, and answers multiple-choice questions by querying this memory with an LLM reasoner module. On the QAEgo4D-Closed benchmark, our best configuration attains 56.0% accuracy with 3.6 kB per minute of storage, matching the performance of dedicated state-of-the-art systems while being $10^4$ to $10^5$ times more memory-efficient. Extensive ablations provide insights into the role of each component and design choice, and highlight directions of improvement for future research.
| null |
https://arxiv.org/abs/2506.16450v1
|
https://arxiv.org/pdf/2506.16450v1.pdf
| null |
[
"Giuseppe Lando",
"Rosario Forte",
"Giovanni Maria Farinella",
"Antonino Furnari"
] |
[
"Multiple-choice",
"Question Answering",
"Video Question Answering",
"Visual Question Answering (VQA)"
] | 2025-06-19T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/star-pose-efficient-low-resolution-video
|
2506.16061
| null | null |
STAR-Pose: Efficient Low-Resolution Video Human Pose Estimation via Spatial-Temporal Adaptive Super-Resolution
|
Human pose estimation in low-resolution videos presents a fundamental challenge in computer vision. Conventional methods either assume high-quality inputs or employ computationally expensive cascaded processing, which limits their deployment in resource-constrained environments. We propose STAR-Pose, a spatial-temporal adaptive super-resolution framework specifically designed for video-based human pose estimation. Our method features a novel spatial-temporal Transformer with LeakyReLU-modified linear attention, which efficiently captures long-range temporal dependencies. Moreover, it is complemented by an adaptive fusion module that integrates parallel CNN branch for local texture enhancement. We also design a pose-aware compound loss to achieve task-oriented super-resolution. This loss guides the network to reconstruct structural features that are most beneficial for keypoint localization, rather than optimizing purely for visual quality. Extensive experiments on several mainstream video HPE datasets demonstrate that STAR-Pose outperforms existing approaches. It achieves up to 5.2% mAP improvement under extremely low-resolution (64x48) conditions while delivering 2.8x to 4.4x faster inference than cascaded approaches.
| null |
https://arxiv.org/abs/2506.16061v1
|
https://arxiv.org/pdf/2506.16061v1.pdf
| null |
[
"Yucheng Jin",
"Jinyan Chen",
"Ziyue He",
"Baojun Han",
"Furan An"
] |
[
"Pose Estimation",
"Super-Resolution"
] | 2025-06-19T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "",
"description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)",
"full_name": "Absolute Position Encodings",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Position Embeddings",
"parent": null
},
"name": "Absolute Position Encodings",
"source_title": "Attention Is All You Need",
"source_url": "https://arxiv.org/abs/1706.03762v7"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.",
"full_name": "Byte Pair Encoding",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "BPE",
"source_title": "Neural Machine Translation of Rare Words with Subword Units",
"source_url": "http://arxiv.org/abs/1508.07909v5"
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)",
"full_name": "Label Smoothing",
"introduced_year": 1985,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Label Smoothing",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201",
"description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).",
"full_name": "Transformer",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Transformer",
"source_title": "Attention Is All You Need",
"source_url": "https://arxiv.org/abs/1706.03762v7"
}
] |
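The STAR-Pose entry above mentions a "LeakyReLU-modified linear attention". The abstract does not give the exact formulation, so the feature map below is an assumption; the sketch only shows the standard linear-attention structure, where softmax(QK^T)V is approximated by phi(Q)(phi(K)^T V) / (phi(Q) phi(K)^T 1), which is linear rather than quadratic in sequence length.

```python
# Generic linear attention with a LeakyReLU-based feature map (illustrative;
# the paper's exact phi is not specified in the abstract).
import torch
import torch.nn.functional as F

def leaky_linear_attention(q, k, v, slope=0.01, eps=1e-6):
    """q, k, v: (batch, seq, dim). Returns (batch, seq, dim)."""
    # Shift keeps the feature map positive for any plausible activation range
    # (an assumption made for this sketch so the denominator stays well-behaved).
    phi_q = F.leaky_relu(q, slope) + 1.0 + slope
    phi_k = F.leaky_relu(k, slope) + 1.0 + slope
    kv = torch.einsum("bsd,bse->bde", phi_k, v)        # (dim, dim) summary, O(s*d^2)
    z = torch.einsum("bsd,bd->bs", phi_q, phi_k.sum(1))
    return torch.einsum("bsd,bde->bse", phi_q, kv) / (z.unsqueeze(-1) + eps)

x = torch.randn(2, 128, 64)
print(leaky_linear_attention(x, x, x).shape)  # torch.Size([2, 128, 64])
```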
https://paperswithcode.com/paper/behavioral-anomaly-detection-in-distributed
|
2506.19246
| null | null |
Behavioral Anomaly Detection in Distributed Systems via Federated Contrastive Learning
|
This paper addresses the increasingly prominent problem of anomaly detection in distributed systems. It proposes a detection method based on federated contrastive learning. The goal is to overcome the limitations of traditional centralized approaches in terms of data privacy, node heterogeneity, and anomaly pattern recognition. The proposed method combines the distributed collaborative modeling capabilities of federated learning with the feature discrimination enhancement of contrastive learning. It builds embedding representations on local nodes and constructs positive and negative sample pairs to guide the model in learning a more discriminative feature space. Without exposing raw data, the method optimizes a global model through a federated aggregation strategy. Specifically, the method uses an encoder to represent local behavior data in high-dimensional space. This includes system logs, operational metrics, and system calls. The model is trained using both contrastive loss and classification loss to improve its ability to detect fine-grained anomaly patterns. The method is evaluated under multiple typical attack types. It is also tested in a simulated real-time data stream scenario to examine its responsiveness. Experimental results show that the proposed method outperforms existing approaches across multiple performance metrics. It demonstrates strong detection accuracy and adaptability, effectively addressing complex anomalies in distributed environments. Through careful design of key modules and optimization of the training mechanism, the proposed method achieves a balance between privacy preservation and detection performance. It offers a feasible technical path for intelligent security management in distributed systems.
| null |
https://arxiv.org/abs/2506.19246v1
|
https://arxiv.org/pdf/2506.19246v1.pdf
| null |
[
"Renzi Meng",
"Heyi Wang",
"Yumeng Sun",
"Qiyuan Wu",
"Lian Lian",
"Renhan Zhang"
] |
[
"Anomaly Detection",
"Contrastive Learning",
"Federated Learning"
] | 2025-06-24T00:00:00 | null | null | null | null |
[] |
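The entry above combines a local contrastive objective with federated aggregation. A compact sketch of the two named ingredients follows: an NT-Xent-style contrastive loss over paired embeddings and plain federated averaging of encoder weights. The encoder, data, and client setup are placeholder assumptions.

```python
# Minimal sketch: local contrastive training on each client, then FedAvg.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = lambda: nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))

def nt_xent(z1, z2, tau=0.5):
    """Contrastive loss over paired embeddings: positives on the diagonal,
    all other pairs in the batch act as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    return F.cross_entropy(logits, torch.arange(len(z1)))

def fedavg(models):
    """Average client state dicts into one global state dict (equal weights)."""
    avg = copy.deepcopy(models[0].state_dict())
    for key in avg:
        avg[key] = torch.stack([m.state_dict()[key] for m in models]).mean(0)
    return avg

# One toy round: each client trains locally on two augmented "views" of its data.
clients = [encoder() for _ in range(3)]
for m in clients:
    opt = torch.optim.SGD(m.parameters(), lr=0.1)
    x = torch.randn(8, 32)  # stand-in for local behavior features (logs, metrics)
    loss = nt_xent(m(x + 0.1 * torch.randn_like(x)), m(x + 0.1 * torch.randn_like(x)))
    opt.zero_grad(); loss.backward(); opt.step()

global_model = encoder()
global_model.load_state_dict(fedavg(clients))  # raw data never leaves a client
```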
https://paperswithcode.com/paper/learning-task-belief-similarity-with-latent
|
2506.19785
| null | null |
Learning Task Belief Similarity with Latent Dynamics for Meta-Reinforcement Learning
|
Meta-reinforcement learning requires utilizing prior task distribution information obtained during exploration to rapidly adapt to unknown tasks. The efficiency of an agent's exploration hinges on accurately identifying the current task. Recent Bayes-Adaptive Deep RL approaches often rely on reconstructing the environment's reward signal, which is challenging in sparse reward settings, leading to suboptimal exploitation. Inspired by bisimulation metrics, which robustly extract behavioral similarity in continuous MDPs, we propose SimBelief, a novel meta-RL framework that measures the similarity of task beliefs in a Bayes-Adaptive MDP (BAMDP). SimBelief effectively extracts common features of similar task distributions, enabling efficient task identification and exploration in sparse reward environments. We introduce a latent task belief metric to learn the common structure of similar tasks and incorporate it into the specific task belief. By learning the latent dynamics across task distributions, we connect shared latent task belief features with specific task features, facilitating rapid task identification and adaptation. Our method outperforms state-of-the-art baselines on sparse reward MuJoCo and panda-gym tasks.
|
Meta-reinforcement learning requires utilizing prior task distribution information obtained during exploration to rapidly adapt to unknown tasks.
|
https://arxiv.org/abs/2506.19785v1
|
https://arxiv.org/pdf/2506.19785v1.pdf
| null |
[
"Menglong Zhang",
"Fuyuan Qian"
] |
[
"Meta Reinforcement Learning",
"MuJoCo"
] | 2025-06-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/tailored-conversations-beyond-llms-a-rl-based
|
2506.19652
| null | null |
Tailored Conversations beyond LLMs: A RL-Based Dialogue Manager
|
In this work, we propose a novel framework that integrates large language models (LLMs) with an RL-based dialogue manager for open-ended dialogue with a specific goal. By leveraging hierarchical reinforcement learning to model the structured phases of dialogue and employing meta-learning to enhance adaptability across diverse user profiles, our approach improves adaptability and efficiency, enabling the system to learn from limited data, transition fluidly between dialogue phases, and personalize responses to heterogeneous patient needs. We apply our framework to Motivational Interviews, aiming to foster behavior change, and demonstrate that the proposed dialogue manager outperforms a state-of-the-art LLM baseline in terms of reward, showing a potential benefit of conditioning LLMs to create open-ended dialogue systems with specific goals.
| null |
https://arxiv.org/abs/2506.19652v1
|
https://arxiv.org/pdf/2506.19652v1.pdf
| null |
[
"Lucie Galland",
"Catherine Pelachaud",
"Florian Pecune"
] |
[
"Hierarchical Reinforcement Learning",
"Meta-Learning"
] | 2025-06-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/has-machine-translation-evaluation-achieved
|
2506.19571
| null | null |
Has Machine Translation Evaluation Achieved Human Parity? The Human Reference and the Limits of Progress
|
In Machine Translation (MT) evaluation, metric performance is assessed based on agreement with human judgments. In recent years, automatic metrics have demonstrated increasingly high levels of agreement with humans. To gain a clearer understanding of metric performance and establish an upper bound, we incorporate human baselines in the MT meta-evaluation, that is, the assessment of MT metrics' capabilities. Our results show that human annotators are not consistently superior to automatic metrics, with state-of-the-art metrics often ranking on par with or higher than human baselines. Despite these findings suggesting human parity, we discuss several reasons for caution. Finally, we explore the broader implications of our results for the research field, asking: Can we still reliably measure improvements in MT evaluation? With this work, we aim to shed light on the limits of our ability to measure progress in the field, fostering discussion on an issue that we believe is crucial to the entire MT evaluation community.
|
To gain a clearer understanding of metric performance and establish an upper bound, we incorporate human baselines in the MT meta-evaluation, that is, the assessment of MT metrics' capabilities.
|
https://arxiv.org/abs/2506.19571v1
|
https://arxiv.org/pdf/2506.19571v1.pdf
| null |
[
"Lorenzo Proietti",
"Stefano Perrella",
"Roberto Navigli"
] |
[
"Machine Translation"
] | 2025-06-24T00:00:00 | null | null | null | null |
[] |
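The meta-evaluation described in the entry above amounts to scoring each metric, including a held-out human annotator treated as a baseline "metric", by its agreement with gold human judgments. A toy version of that comparison follows, with random stand-in data and Kendall's tau as the agreement measure (the paper's exact protocol may differ).

```python
# Toy MT meta-evaluation: rank metrics and a human baseline by correlation
# with gold human judgments. All data here are simulated stand-ins.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
gold = rng.normal(size=300)                              # gold human scores per segment
scorers = {
    "automatic_metric": gold + rng.normal(0, 0.5, 300),  # a strong learned metric
    "human_baseline": gold + rng.normal(0, 0.6, 300),    # a single human annotator
}
for name, scores in scorers.items():
    tau, _ = kendalltau(scores, gold)
    print(f"{name}: Kendall tau = {tau:.3f}")
```

When the automatic metric's tau matches or exceeds the human baseline's, the meta-evaluation suggests apparent human parity, which is exactly the situation whose interpretation the paper urges caution about.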
https://paperswithcode.com/paper/faf-a-feature-adaptive-framework-for-few-shot
|
2506.19567
| null | null |
FAF: A Feature-Adaptive Framework for Few-Shot Time Series Forecasting
|
Multi-task and few-shot time series forecasting tasks are commonly encountered in scenarios such as the launch of new products in different cities. However, traditional time series forecasting methods suffer from insufficient historical data, which stems from a disregard for the generalized and specific features among different tasks. To address these challenges, we propose the Feature-Adaptive Time Series Forecasting Framework (FAF), which consists of three key components: the Generalized Knowledge Module (GKM), the Task-Specific Module (TSM), and the Rank Module (RM). During the training phase, the GKM is updated through a meta-learning mechanism that enables the model to extract generalized features across related tasks. Meanwhile, the TSM is trained to capture diverse local dynamics through multiple functional regions, each of which learns specific features from individual tasks. During the testing phase, the RM dynamically selects the most relevant functional region from the TSM based on input sequence features, which is then combined with the generalized knowledge learned by the GKM to generate accurate forecasts. This design enables FAF to achieve robust and personalized forecasting even with sparse historical observations. We evaluate FAF on five diverse real-world datasets under few-shot time series forecasting settings. Experimental results demonstrate that FAF consistently outperforms baselines that include three categories of time series forecasting methods. In particular, FAF achieves a 41.81\% improvement over the best baseline, iTransformer, on the CO$_2$ emissions dataset.
| null |
https://arxiv.org/abs/2506.19567v1
|
https://arxiv.org/pdf/2506.19567v1.pdf
| null |
[
"Pengpeng Ouyang",
"Dong Chen",
"Tong Yang",
"Shuo Feng",
"Zhao Jin",
"Mingliang Xu"
] |
[
"Meta-Learning",
"Time Series",
"Time Series Forecasting"
] | 2025-06-24T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/overtuning-in-hyperparameter-optimization
|
2506.19540
| null | null |
Overtuning in Hyperparameter Optimization
|
Hyperparameter optimization (HPO) aims to identify an optimal hyperparameter configuration (HPC) such that the resulting model generalizes well to unseen data. As the expected generalization error cannot be optimized directly, it is estimated with a resampling strategy, such as holdout or cross-validation. This approach implicitly assumes that minimizing the validation error leads to improved generalization. However, since validation error estimates are inherently stochastic and depend on the resampling strategy, a natural question arises: Can excessive optimization of the validation error lead to overfitting at the HPO level, akin to overfitting in model training based on empirical risk minimization? In this paper, we investigate this phenomenon, which we term overtuning, a form of overfitting specific to HPO. Despite its practical relevance, overtuning has received limited attention in the HPO and AutoML literature. We provide a formal definition of overtuning and distinguish it from related concepts such as meta-overfitting. We then conduct a large-scale reanalysis of HPO benchmark data to assess the prevalence and severity of overtuning. Our results show that overtuning is more common than previously assumed, typically mild but occasionally severe. In approximately 10% of cases, overtuning leads to the selection of a seemingly optimal HPC with worse generalization error than the default or first configuration tried. We further analyze how factors such as performance metric, resampling strategy, dataset size, learning algorithm, and HPO method affect overtuning and discuss mitigation strategies. Our results highlight the need to raise awareness of overtuning, particularly in the small-data regime, indicating that further mitigation strategies should be studied.
|
This approach implicitly assumes that minimizing the validation error leads to improved generalization.
|
https://arxiv.org/abs/2506.19540v1
|
https://arxiv.org/pdf/2506.19540v1.pdf
| null |
[
"Lennart Schneider",
"Bernd Bischl",
"Matthias Feurer"
] |
[
"AutoML",
"Hyperparameter Optimization"
] | 2025-06-24T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "In machine learning, a hyperparameter is a parameter whose value is used to control learning process, and HPO is the problem of choosing a set of optimal hyperparameters for a learning algorithm.",
"full_name": "Hyper-parameter optimization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**AutoML** methods are used to automatically solve machine learning tasks without needing the user to specify or experiment with architectures, hyperparameters and other settings. Below you can find a continuously updating list of AutoML methods.",
"name": "AutoML",
"parent": null
},
"name": "HPO",
"source_title": "Algorithms for Hyper-Parameter Optimization",
"source_url": "http://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization"
}
] |
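The overtuning phenomenon in the entry above can be illustrated numerically: along an HPO trajectory, incumbents are chosen by noisy validation error, so the final incumbent may generalize worse than an earlier one or than the first configuration tried. The simulation below shows that gap on synthetic data; the paper's formal definition is more careful, so treat this as a loose illustration.

```python
# Toy illustration of overtuning: validation-selected incumbents vs. their
# true generalization error. Data are simulated.
import numpy as np

rng = np.random.default_rng(1)
true_err = rng.uniform(0.20, 0.30, size=200)          # generalization error per config
val_err = true_err + rng.normal(0.0, 0.02, size=200)  # noisy validation estimate

# Steps where the running validation minimum improves, i.e. new incumbents.
incumbents = np.flatnonzero(np.minimum.accumulate(val_err) == val_err)

final = incumbents[-1]                                # what HPO actually returns
best = incumbents[np.argmin(true_err[incumbents])]    # best incumbent in hindsight
print(f"final incumbent true error: {true_err[final]:.3f}")
print(f"best incumbent true error:  {true_err[best]:.3f}")
print(f"overtuning gap:             {true_err[final] - true_err[best]:.3f}")
```

A positive gap means continued optimization of the validation error selected a configuration that generalizes worse, the HPO-level analogue of overfitting.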
https://paperswithcode.com/paper/kunlunbaizerag-reinforcement-learning-driven
|
2506.19466
| null | null |
KunLunBaizeRAG: Reinforcement Learning Driven Inference Performance Leap for Large Language Models
|
This paper introduces KunLunBaizeRAG, a reinforcement learning-driven reasoning framework designed to enhance the reasoning capabilities of large language models (LLMs) in complex multi-hop question-answering tasks. The framework addresses key limitations of traditional RAG, such as retrieval drift, information redundancy, and strategy rigidity. Key innovations include the RAG-driven Reasoning Alignment (RDRA) mechanism, the Search-Think Iterative Enhancement (STIE) mechanism, the Network-Local Intelligent Routing (NLR) mechanism, and a progressive hybrid training strategy. Experimental results demonstrate significant improvements in exact match (EM) and LLM-judged score (LJ) across four benchmarks, highlighting the framework's robustness and effectiveness in complex reasoning scenarios.
| null |
https://arxiv.org/abs/2506.19466v1
|
https://arxiv.org/pdf/2506.19466v1.pdf
| null |
[
"Cheng Li",
"Jiexiong Liu",
"Yixuan Chen",
"Qihang Zhou",
"KunLun Meta"
] |
[
"Multi-hop Question Answering",
"Question Answering",
"RAG",
"Retrieval"
] | 2025-06-24T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "**Linear Warmup With Linear Decay** is a learning rate schedule in which we increase the learning rate linearly for $n$ updates and then linearly decay afterwards.",
"full_name": "Linear Warmup With Linear Decay",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Learning Rate Schedules** refer to schedules for the learning rate during the training of neural networks. Below you can find a continuously updating list of learning rate schedules.",
"name": "Learning Rate Schedules",
"parent": null
},
"name": "Linear Warmup With Linear Decay",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "“How do I get a full refund from Expedia?\r\nHow do I get a full refund from Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Quick Help & Exclusive Travel Deals!Have a question about your booking? Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to get live, expert support and unlock exclusive best deal discounts on flights, hotels, and vacation packages. Get clear answers fast and access limited-time travel offers that make your next trip easier, cheaper, and stress-free. Don’t wait—call today and save!\r\n\r\n\r\n“How do I get a full refund from Expedia?\r\nHow do I get a full refund from Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Quick Help & Exclusive Travel Deals!Have a question about your booking? Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to get live, expert support and unlock exclusive best deal discounts on flights, hotels, and vacation packages. Get clear answers fast and access limited-time travel offers that make your next trip easier, cheaper, and stress-free. Don’t wait—call today and save!",
"full_name": "Refunds@Expedia|||How do I get a full refund from Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Refunds@Expedia|||How do I get a full refund from Expedia?",
"source_title": "Gaussian Error Linear Units (GELUs)",
"source_url": "https://arxiv.org/abs/1606.08415v5"
},
{
"code_snippet_url": "https://github.com/huggingface/transformers/blob/4dc65591b5c61d75c3ef3a2a883bf1433e08fc45/src/transformers/modeling_tf_bert.py#L271",
"description": "**Attention Dropout** is a type of [dropout](https://paperswithcode.com/method/dropout) used in attention-based architectures, where elements are randomly dropped out of the [softmax](https://paperswithcode.com/method/softmax) in the attention equation. For example, for scaled-dot product attention, we would drop elements from the first term:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$",
"full_name": "Attention Dropout",
"introduced_year": 2018,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Attention Dropout",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.",
"full_name": "Byte Pair Encoding",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "BPE",
"source_title": "Neural Machine Translation of Rare Words with Subword Units",
"source_url": "http://arxiv.org/abs/1508.07909v5"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "https://github.com/google-research/bert",
"description": "**BERT**, or Bidirectional Encoder Representations from Transformers, improves upon standard [Transformers](http://paperswithcode.com/method/transformer) by removing the unidirectionality constraint by using a *masked language model* (MLM) pre-training objective. The masked language model randomly masks some of the tokens from the input, and the objective is to predict the original vocabulary id of the masked word based only on its context. Unlike left-to-right language model pre-training, the MLM objective enables the representation to fuse the left and the right context, which allows us to pre-train a deep bidirectional Transformer. In addition to the masked language model, BERT uses a *next sentence prediction* task that jointly pre-trains text-pair representations. \r\n\r\nThere are two steps in BERT: *pre-training* and *fine-tuning*. During pre-training, the model is trained on unlabeled data over different pre-training tasks. For fine-tuning, the BERT model is first initialized with the pre-trained parameters, and all of the parameters are fine-tuned using labeled data from the downstream tasks. Each downstream task has separate fine-tuned models, even though they\r\nare initialized with the same pre-trained parameters.",
"full_name": "BERT",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Language Models** are models for predicting the next word or character in a document. Below you can find a continuously updating list of language models.\r\n\r\n",
"name": "Language Models",
"parent": null
},
"name": "BERT",
"source_title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"source_url": "https://arxiv.org/abs/1810.04805v2"
},
{
"code_snippet_url": null,
"description": "**BART** is a [denoising autoencoder](https://paperswithcode.com/method/denoising-autoencoder) for pretraining sequence-to-sequence models. It is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard [Transformer](https://paperswithcode.com/method/transformer)-based neural machine translation architecture. It uses a standard seq2seq/NMT architecture with a bidirectional encoder (like [BERT](https://paperswithcode.com/method/bert)) and a left-to-right decoder (like [GPT](https://paperswithcode.com/method/gpt)). This means the encoder's attention mask is fully visible, like BERT, and the decoder's attention mask is causal, like [GPT2](https://paperswithcode.com/method/gpt-2).",
"full_name": "BART",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "BART",
"source_title": "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension",
"source_url": "https://arxiv.org/abs/1910.13461v1"
},
{
"code_snippet_url": "",
"description": "**Retriever-Augmented Generation**, or **RAG**, is a type of language generation model that combines pre-trained parametric and non-parametric memory for language generation. Specifically, the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. For query $x$, Maximum Inner Product Search (MIPS) is used to find the top-K documents $z\\_{i}$. For final prediction $y$, we treat $z$ as a latent variable and marginalize over seq2seq predictions given different documents.",
"full_name": "RAG",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "RAG",
"source_title": "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks",
"source_url": "https://arxiv.org/abs/2005.11401v4"
}
] |
https://paperswithcode.com/paper/a-comment-on-the-illusion-of-thinking
|
2506.18957
| null | null |
A Comment On "The Illusion of Thinking": Reframing the Reasoning Cliff as an Agentic Gap
|
The recent work by Shojaee et al. (2025), titled The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity, presents a compelling empirical finding, a reasoning cliff, where the performance of Large Reasoning Models (LRMs) collapses beyond a specific complexity threshold, which the authors posit as an intrinsic scaling limitation of Chain-of-Thought (CoT) reasoning. This commentary, while acknowledging the study's methodological rigor, contends that this conclusion is confounded by experimental artifacts. We argue that the observed failure is not evidence of a fundamental cognitive boundary, but rather a predictable outcome of system-level constraints in the static, text-only evaluation paradigm, including tool use restrictions, context window recall issues, the absence of crucial cognitive baselines, inadequate statistical reporting, and output generation limits. We reframe this performance collapse through the lens of an agentic gap, asserting that the models are not failing at reasoning, but at execution within a profoundly restrictive interface. We empirically substantiate this critique by demonstrating a striking reversal. A model, initially declaring a puzzle impossible when confined to text-only generation, now employs agentic tools to not only solve it but also master variations of complexity far beyond the reasoning cliff it previously failed to surmount. Additionally, our empirical analysis of tool-enabled models like o4-mini and GPT-4o reveals a hierarchy of agentic reasoning, from simple procedural execution to complex meta-cognitive self-correction, which has significant implications for how we define and measure machine intelligence. The illusion of thinking attributed to LRMs is less a reasoning deficit and more a consequence of an otherwise capable mind lacking the tools for action.
| null |
https://arxiv.org/abs/2506.18957v1
|
https://arxiv.org/pdf/2506.18957v1.pdf
| null |
[
"Sheraz Khan",
"Subha Madhavan",
"Kannan Natarajan"
] |
[] | 2025-06-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/dip-unsupervised-dense-in-context-post
|
2506.18463
| null | null |
DIP: Unsupervised Dense In-Context Post-training of Visual Representations
|
We introduce DIP, a novel unsupervised post-training method designed to enhance dense image representations in large-scale pretrained vision encoders for in-context scene understanding. Unlike prior approaches that rely on complex self-distillation architectures, our method trains the vision encoder using pseudo-tasks that explicitly simulate downstream in-context scenarios, inspired by meta-learning principles. To enable post-training on unlabeled data, we propose an automatic mechanism for generating in-context tasks that combines a pretrained diffusion model and the vision encoder itself. DIP is simple, unsupervised, and computationally efficient, requiring less than 9 hours on a single A100 GPU. By learning dense representations through pseudo in-context tasks, it achieves strong performance across a wide variety of downstream real-world in-context scene understanding tasks. It outperforms both the initial vision encoder and prior methods, offering a practical and effective solution for improving dense representations. Code available here: https://github.com/sirkosophia/DIP
|
We introduce DIP, a novel unsupervised post-training method designed to enhance dense image representations in large-scale pretrained vision encoders for in-context scene understanding.
|
https://arxiv.org/abs/2506.18463v1
|
https://arxiv.org/pdf/2506.18463v1.pdf
| null |
[
"Sophia Sirko-Galouchenko",
"Spyros Gidaris",
"Antonin Vobecky",
"Andrei Bursuc",
"Nicolas Thome"
] |
[
"GPU",
"Meta-Learning",
"Scene Understanding"
] | 2025-06-23T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).",
"full_name": "Diffusion",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Generation Models",
"parent": null
},
"name": "Diffusion",
"source_title": "Denoising Diffusion Probabilistic Models",
"source_url": "https://arxiv.org/abs/2006.11239v2"
}
] |
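
The Diffusion entry above refers to the DDPM reweighted variational bound; as a rough sketch (simplified to the standard noise-prediction loss from Ho et al., 2020; the `model(x_t, t)` interface is an assumption), the training objective looks like:

```python
import torch
import torch.nn.functional as F

def ddpm_loss(model, x0, alphas_cumprod):
    """Sample a timestep, diffuse x0 forward, and regress the added noise."""
    t = torch.randint(0, len(alphas_cumprod), (x0.shape[0],), device=x0.device)
    a_bar = alphas_cumprod[t].view(-1, *([1] * (x0.dim() - 1)))
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise  # forward process q(x_t | x0)
    return F.mse_loss(model(x_t, t), noise)               # simple DDPM objective
```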
https://paperswithcode.com/paper/ard-lora-dynamic-rank-allocation-for
|
2506.18267
| null | null |
ARD-LoRA: Dynamic Rank Allocation for Parameter-Efficient Fine-Tuning of Foundation Models with Heterogeneous Adaptation Needs
|
Conventional Low-Rank Adaptation (LoRA) methods employ a fixed rank, imposing uniform adaptation across transformer layers and attention heads despite their heterogeneous learning dynamics. This paper introduces Adaptive Rank Dynamic LoRA (ARD-LoRA), a novel framework that automates rank allocation through learnable scaling factors. These factors are optimized via a meta-objective balancing task performance and parameter efficiency, incorporating $\ell_1$ sparsity for minimal rank and Total Variation regularization for stable rank transitions. ARD-LoRA enables continuous, differentiable, per-head rank adaptation. Experiments on LLAMA-3.1-70B and PaliGemma-2 demonstrate ARD-LoRA's efficacy, achieving up to 99.3% of full fine-tuning performance with only 0.32% trainable parameters, outperforming strong baselines like DoRA and AdaLoRA. Furthermore, it reduces multimodal adaptation memory by 41%. These results establish dynamic, fine-grained rank allocation as a critical paradigm for efficient foundation model adaptation.
| null |
https://arxiv.org/abs/2506.18267v1
|
https://arxiv.org/pdf/2506.18267v1.pdf
| null |
[
"Haseeb Ullah Khan Shinwari",
"Muhammad Usama"
] |
[
"parameter-efficient fine-tuning"
] | 2025-06-23T00:00:00 | null | null | null | null |
[] |
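
A hedged sketch of the two regularizers named in the ARD-LoRA abstract, $\ell_1$ sparsity plus Total Variation over learnable per-head scaling factors; the `[num_layers, num_heads]` factor layout and the lambda values are assumptions for illustration only.

```python
import torch

def ard_lora_penalty(scales, lam_l1=1e-3, lam_tv=1e-4):
    """scales: [num_layers, num_heads] learnable rank scaling factors."""
    l1 = scales.abs().sum()                      # push ranks toward minimal values
    tv = (scales[1:] - scales[:-1]).abs().sum()  # stabilize rank transitions across layers
    return lam_l1 * l1 + lam_tv * tv

# meta-objective balance (sketch): loss = task_loss + ard_lora_penalty(scales)
```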
https://paperswithcode.com/paper/drive-r1-bridging-reasoning-and-planning-in
|
2506.18234
| null | null |
Drive-R1: Bridging Reasoning and Planning in VLMs for Autonomous Driving with Reinforcement Learning
|
Large vision-language models (VLMs) for autonomous driving (AD) are evolving beyond perception and cognition tasks toward motion planning. However, we identify two critical challenges in this direction: (1) VLMs tend to learn shortcuts by relying heavily on history input information, achieving seemingly strong planning results without genuinely understanding the visual inputs; and (2) the chain-of-thought (CoT) reasoning processes are always misaligned with the motion planning outcomes, and how to effectively leverage complex reasoning capabilities to enhance planning remains largely underexplored. In this paper, we start from a small-scale domain-specific VLM and propose Drive-R1, designed to bridge scenario reasoning and motion planning for AD. Drive-R1 first undergoes supervised fine-tuning on an elaborate dataset containing both long and short CoT data. Drive-R1 is encouraged to reason step-by-step from visual input to final planning decisions. Subsequently, Drive-R1 is trained within a reinforcement learning framework that incentivizes the discovery of reasoning paths that are more informative for planning, guided by rewards based on predicted trajectories and meta actions. Experimental evaluations on the nuScenes and DriveLM-nuScenes benchmarks demonstrate that Drive-R1 achieves superior performance compared to existing state-of-the-art VLMs. We believe that Drive-R1 presents a promising direction for bridging reasoning and planning in AD, offering methodological insights for future research and applications.
| null |
https://arxiv.org/abs/2506.18234v1
|
https://arxiv.org/pdf/2506.18234v1.pdf
| null |
[
"Yue Li",
"Meng Tian",
"Dechang Zhu",
"Jiangtong Zhu",
"Zhenyu Lin",
"Zhiwei Xiong",
"Xinhai Zhao"
] |
[
"Autonomous Driving",
"Motion Planning"
] | 2025-06-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/omnireflect-discovering-transferable
|
2506.17449
| null | null |
OmniReflect: Discovering Transferable Constitutions for LLM agents via Neuro-Symbolic Reflections
|
Efforts to improve Large Language Model (LLM) agent performance on complex tasks have largely focused on fine-tuning and iterative self-correction. However, these approaches often lack generalizable mechanisms for long-term learning and remain inefficient in dynamic environments. We introduce OmniReflect, a hierarchical, reflection-driven framework that constructs a constitution, a compact set of guiding principles distilled from task experiences, to enhance the effectiveness and efficiency of an LLM agent. OmniReflect operates in two modes: Self-sustaining, where a single agent periodically curates its own reflections during task execution, and Co-operative, where a Meta-advisor derives a constitution from a small calibration set to guide another agent. To construct these constitutional principles, we employ Neural, Symbolic, and Neuro-Symbolic techniques, offering a balance between contextual adaptability and computational efficiency. Empirical results averaged across models show major improvements in task success, with absolute gains of +10.3% on ALFWorld, +23.8% on BabyAI, and +8.3% on PDDL in the Self-sustaining mode. Similar gains are seen in the Co-operative mode, where a lightweight Qwen3-4B ReAct agent outperforms all Reflexion baselines on BabyAI. These findings highlight the robustness and effectiveness of OmniReflect across environments and backbones.
| null |
https://arxiv.org/abs/2506.17449v1
|
https://arxiv.org/pdf/2506.17449v1.pdf
| null |
[
"Manasa Bharadwaj",
"Nikhil Verma",
"Kevin Ferreira"
] |
[
"Computational Efficiency",
"Large Language Model"
] | 2025-06-20T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Dynamic Sparse Training method where weight mask is updated randomly periodically",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
}
] |
https://paperswithcode.com/paper/rocketstack-a-level-aware-deep-recursive
|
2506.16965
| null | null |
RocketStack: A level-aware deep recursive ensemble learning framework with exploratory feature fusion and model pruning dynamics
|
Ensemble learning remains a cornerstone of machine learning, with stacking used to integrate predictions from multiple base learners through a meta-model. However, deep stacking remains rare, as most designs prioritize horizontal diversity over recursive depth due to model complexity, feature redundancy, and computational burden. To address these challenges, RocketStack, a level-aware recursive ensemble framework, is introduced and explored up to ten stacking levels, extending beyond prior architectures. The framework incrementally prunes weaker learners at each level, enabling deeper stacking without excessive complexity. To mitigate early performance saturation, mild Gaussian noise is added to out-of-fold (OOF) scores before pruning, and compared against strict OOF pruning. Further, both per-level and periodic feature compression are explored using attention-based selection, the Simple, Fast, Efficient (SFE) filter, and autoencoders. Across 33 datasets (23 binary, 10 multi-class), linear-trend tests confirmed rising accuracy with depth in most variants, and the top-performing meta-model at each level increasingly outperformed the strongest standalone ensemble. In the binary subset, periodic SFE with mild OOF-score randomization reached 97.08% at level 10, 5.14% above the strict-pruning configuration, and cut runtime by 10.5% relative to no compression. In the multi-class subset, periodic attention selection reached 98.60% at level 10, exceeding the strongest baseline by 6.11%, while reducing runtime by 56.1% and feature dimensionality by 74% compared to no compression. These findings highlight mild randomization as an effective regularizer and periodic compression as a stabilizer. Echoing the design of multistage rockets in aerospace (prune, compress, propel), RocketStack achieves deep recursive ensembling with tractable complexity.
| null |
https://arxiv.org/abs/2506.16965v1
|
https://arxiv.org/pdf/2506.16965v1.pdf
| null |
[
"Çağatay Demirel"
] |
[
"Ensemble Learning"
] | 2025-06-20T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Balanced Selection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Active Learning",
"parent": null
},
"name": "BASE",
"source_title": "Active Learning at the ImageNet Scale",
"source_url": "https://arxiv.org/abs/2111.12880v1"
}
] |
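
A sketch of RocketStack's mild OOF-score randomization before pruning, as described in the abstract: add small Gaussian noise to out-of-fold scores, then keep the top learners. The `noise_std` and `keep_frac` values here are illustrative assumptions, not values from the paper.

```python
import numpy as np

def prune_learners(oof_scores, keep_frac=0.5, noise_std=0.01, rng=None):
    """Keep the top fraction of base learners ranked by noisy OOF scores."""
    if rng is None:
        rng = np.random.default_rng(0)
    noisy = np.asarray(oof_scores) + rng.normal(0.0, noise_std, size=len(oof_scores))
    k = max(1, int(len(oof_scores) * keep_frac))
    return np.argsort(noisy)[::-1][:k]  # indices of learners promoted to the next level
```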
https://paperswithcode.com/paper/self-supervised-feature-extraction-for
|
2506.16821
| null | null |
Self-supervised Feature Extraction for Enhanced Ball Detection on Soccer Robots
|
Robust and accurate ball detection is a critical component for autonomous humanoid soccer robots, particularly in dynamic and challenging environments such as RoboCup outdoor fields. However, traditional supervised approaches require extensive manual annotation, which is costly and time-intensive. To overcome this problem, we present a self-supervised learning framework for domain-adaptive feature extraction to enhance ball detection performance. The proposed approach leverages a general-purpose pretrained model to generate pseudo-labels, which are then used in a suite of self-supervised pretext tasks -- including colorization, edge detection, and triplet loss -- to learn robust visual features without relying on manual annotations. Additionally, a model-agnostic meta-learning (MAML) strategy is incorporated to ensure rapid adaptation to new deployment scenarios with minimal supervision. A new dataset comprising 10,000 labeled images from outdoor RoboCup SPL matches is introduced, used to validate the method, and made available to the community. Experimental results demonstrate that the proposed pipeline outperforms baseline models in terms of accuracy, F1 score, and IoU, while also exhibiting faster convergence.
| null |
https://arxiv.org/abs/2506.16821v1
|
https://arxiv.org/pdf/2506.16821v1.pdf
| null |
[
"Can Lin",
"Daniele Affinita",
"Marco E. P. Zimmatore",
"Daniele Nardi",
"Domenico D. Bloisi",
"Vincenzo Suriani"
] |
[
"Colorization",
"Edge Detection",
"Meta-Learning",
"Self-Supervised Learning",
"Triplet"
] | 2025-06-20T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "The goal of **Triplet loss**, in the context of Siamese Networks, is to maximize the joint probability among all score-pairs i.e. the product of all probabilities. By using its negative logarithm, we can get the loss formulation as follows:\r\n\r\n$$\r\nL\\_{t}\\left(\\mathcal{V}\\_{p}, \\mathcal{V}\\_{n}\\right)=-\\frac{1}{M N} \\sum\\_{i}^{M} \\sum\\_{j}^{N} \\log \\operatorname{prob}\\left(v p\\_{i}, v n\\_{j}\\right)\r\n$$\r\n\r\nwhere the balance weight $1/MN$ is used to keep the loss with the same scale for different number of instance sets.",
"full_name": "Triplet Loss",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Loss Functions** are used to frame the problem to be optimized within deep learning. Below you will find a continuously updating list of (specialized) loss functions for neutral networks.",
"name": "Loss Functions",
"parent": null
},
"name": "Triplet Loss",
"source_title": "Triplet Loss in Siamese Network for Object Tracking",
"source_url": "http://openaccess.thecvf.com/content_ECCV_2018/html/Xingping_Dong_Triplet_Loss_with_ECCV_2018_paper.html"
},
{
"code_snippet_url": "",
"description": "",
"full_name": "Semi-Pseudo-Label",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Semi-Supervised Learning** methods leverage unlabelled data as well as labelled data to increase performance on machine learning tasks. Below you can find a continuously updating list of semi-supervised learning methods (this may have overlap with self-supervised methods due to evaluation protocol similarity).\r\n\r\n",
"name": "Semi-Supervised Learning Methods",
"parent": null
},
"name": "SPL",
"source_title": "A Novel Neural Network Training Method for Autonomous Driving Using Semi-Pseudo-Labels and 3D Data Augmentations",
"source_url": "https://arxiv.org/abs/2207.09869v1"
}
] |
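
For reference, the margin-based form of the triplet loss (a common variant of the probabilistic formulation quoted in the methods entry above) is only a few lines: pull the anchor toward the positive embedding, push it away from the negative.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Margin-based triplet loss over batches of embedding vectors."""
    d_pos = F.pairwise_distance(anchor, positive)  # anchor-positive distance
    d_neg = F.pairwise_distance(anchor, negative)  # anchor-negative distance
    return F.relu(d_pos - d_neg + margin).mean()   # hinge on the margin
```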
https://paperswithcode.com/paper/latent-noise-injection-for-private-and
|
2506.16636
| null | null |
Latent Noise Injection for Private and Statistically Aligned Synthetic Data Generation
|
Synthetic Data Generation has become essential for scalable, privacy-preserving statistical analysis. While standard approaches based on generative models, such as Normalizing Flows, have been widely used, they often suffer from slow convergence in high-dimensional settings, frequently converging more slowly than the canonical $1/\sqrt{n}$ rate when approximating the true data distribution. To overcome these limitations, we propose a Latent Noise Injection method using Masked Autoregressive Flows (MAF). Instead of directly sampling from the trained model, our method perturbs each data point in the latent space and maps it back to the data domain. This construction preserves a one-to-one correspondence between observed and synthetic data, enabling synthetic outputs that closely reflect the underlying distribution, particularly in challenging high-dimensional regimes where traditional sampling struggles. Our procedure satisfies local $(\epsilon, \delta)$-differential privacy and introduces a single perturbation parameter to control the privacy-utility trade-off. Although estimators based on individual synthetic datasets may converge slowly, we show both theoretically and empirically that aggregating across $K$ studies in a meta-analysis framework restores classical efficiency and yields consistent, reliable inference. We demonstrate that with a well-calibrated perturbation parameter, Latent Noise Injection achieves strong statistical alignment with the original data and robustness against membership inference attacks. These results position our method as a compelling alternative to conventional flow-based sampling for synthetic data sharing in decentralized and privacy-sensitive domains, such as biomedical research.
| null |
https://arxiv.org/abs/2506.16636v1
|
https://arxiv.org/pdf/2506.16636v1.pdf
| null |
[
"Rex Shen",
"Lu Tian"
] |
[
"Privacy Preserving",
"Synthetic Data Generation"
] | 2025-06-19T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/ex4sperans/variational-inference-with-normalizing-flows/blob/922b569f851e02fa74700cd0754fe2ef5c1f3180/flow.py#L9",
"description": "**Normalizing Flows** are a method for constructing complex distributions by transforming a\r\nprobability density through a series of invertible mappings. By repeatedly applying the rule for change of variables, the initial density ‘flows’ through the sequence of invertible mappings. At the end of this sequence we obtain a valid probability distribution and hence this type of flow is referred to as a normalizing flow.\r\n\r\nIn the case of finite flows, the basic rule for the transformation of densities considers an invertible, smooth mapping $f : \\mathbb{R}^{d} \\rightarrow \\mathbb{R}^{d}$ with inverse $f^{-1} = g$, i.e. the composition $g \\cdot f\\left(z\\right) = z$. If we use this mapping to transform a random variable $z$ with distribution $q\\left(z\\right)$, the resulting random variable $z' = f\\left(z\\right)$ has a distribution:\r\n\r\n$$ q\\left(\\mathbf{z}'\\right) = q\\left(\\mathbf{z}\\right)\\bigl\\vert{\\text{det}}\\frac{\\delta{f}^{-1}}{\\delta{\\mathbf{z'}}}\\bigr\\vert = q\\left(\\mathbf{z}\\right)\\bigl\\vert{\\text{det}}\\frac{\\delta{f}}{\\delta{\\mathbf{z}}}\\bigr\\vert ^{-1} $$\r\n\f\r\nwhere the last equality can be seen by applying the chain rule (inverse function theorem) and is a property of Jacobians of invertible functions. We can construct arbitrarily complex densities by composing several simple maps and successively applying the above equation. The density $q\\_{K}\\left(\\mathbf{z}\\right)$ obtained by successively transforming a random variable $z\\_{0}$ with distribution $q\\_{0}$ through a chain of $K$ transformations $f\\_{k}$ is:\r\n\r\n$$ z\\_{K} = f\\_{K} \\cdot \\dots \\cdot f\\_{2} \\cdot f\\_{1}\\left(z\\_{0}\\right) $$\r\n\r\n$$ \\ln{q}\\_{K}\\left(z\\_{K}\\right) = \\ln{q}\\_{0}\\left(z\\_{0}\\right) − \\sum^{K}\\_{k=1}\\ln\\vert\\det\\frac{\\delta{f\\_{k}}}{\\delta{\\mathbf{z\\_{k-1}}}}\\vert $$\r\n\f\r\nThe path traversed by the random variables $z\\_{k} = f\\_{k}\\left(z\\_{k-1}\\right)$ with initial distribution $q\\_{0}\\left(z\\_{0}\\right)$ is called the flow and the path formed by the successive distributions $q\\_{k}$ is a normalizing flow.",
"full_name": "Normalizing Flows",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Distribution Approximation** methods aim to approximate a complex distribution. Below you can find a continuously updating list of distribution approximation methods.",
"name": "Distribution Approximation",
"parent": null
},
"name": "Normalizing Flows",
"source_title": "Variational Inference with Normalizing Flows",
"source_url": "http://arxiv.org/abs/1505.05770v6"
}
] |
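
A minimal sketch of the Latent Noise Injection step described above, assuming a trained invertible `flow` exposing `forward`/`inverse` maps (interface assumed, e.g., a Masked Autoregressive Flow) and a perturbation scale `sigma` that controls the privacy-utility trade-off:

```python
import torch

def latent_noise_inject(flow, x, sigma=0.1):
    """One synthetic point per observed point, preserving the one-to-one correspondence."""
    z = flow.forward(x)                         # map each data point to the latent space
    z_noisy = z + sigma * torch.randn_like(z)   # perturb; sigma tunes privacy vs. utility
    return flow.inverse(z_noisy)                # map back to the data domain
```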
https://paperswithcode.com/paper/multimodal-fusion-slam-with-fourier-attention
|
2506.18204
| null | null |
Multimodal Fusion SLAM with Fourier Attention
|
Visual SLAM is particularly challenging in environments affected by noise, varying lighting conditions, and darkness. Learning-based optical flow algorithms can leverage multiple modalities to address these challenges, but traditional optical flow-based visual SLAM approaches often require significant computational resources. To overcome this limitation, we propose FMF-SLAM, an efficient multimodal fusion SLAM method that utilizes the fast Fourier transform (FFT) to enhance algorithmic efficiency. Specifically, we introduce a novel Fourier-based self-attention and cross-attention mechanism to extract features from RGB and depth signals. We further enhance the interaction of multimodal features by incorporating multi-scale knowledge distillation across modalities. We also demonstrate the practical feasibility of FMF-SLAM in real-world scenarios with real-time performance by integrating it with a security robot, fusing it with a GNSS-RTK global positioning module and global bundle adjustment. Our approach is validated using video sequences from TUM, TartanAir, and our real-world datasets, showcasing state-of-the-art performance under noisy, varying lighting, and dark conditions. Our code and datasets are available at https://github.com/youjie-zhou/FMF-SLAM.git.
|
Visual SLAM is particularly challenging in environments affected by noise, varying lighting conditions, and darkness.
|
https://arxiv.org/abs/2506.18204v2
|
https://arxiv.org/pdf/2506.18204v2.pdf
| null |
[
"Youjie Zhou",
"Guofeng Mei",
"Yiming Wang",
"Yi Wan",
"Fabio Poiesi"
] |
[
"Knowledge Distillation",
"Optical Flow Estimation"
] | 2025-06-22T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://research.google/blog/auto-generated-summaries-in-google-docs/",
"description": "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.\r\nSource: [Distilling the Knowledge in a Neural Network](https://arxiv.org/abs/1503.02531)",
"full_name": "Knowledge Distillation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Knowledge Distillation",
"parent": null
},
"name": "Knowledge Distillation",
"source_title": "Distilling the Knowledge in a Neural Network",
"source_url": "http://arxiv.org/abs/1503.02531v1"
}
] |
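
The knowledge-distillation entry above describes compressing a teacher (or ensemble) into a single student; the standard temperature-scaled distillation loss (a common formulation, not FMF-SLAM's exact multi-scale cross-modal variant) is:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend soft teacher targets (at temperature T) with hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                                    # rescale gradients for temperature
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```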
https://paperswithcode.com/paper/ra-nerf-robust-neural-radiance-field
|
2506.15242
| null | null |
RA-NeRF: Robust Neural Radiance Field Reconstruction with Accurate Camera Pose Estimation under Complex Trajectories
|
Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS) have emerged as powerful tools for 3D reconstruction and SLAM tasks. However, their performance depends heavily on accurate camera pose priors. Existing approaches attempt to address this issue by introducing external constraints but fall short of achieving satisfactory accuracy, particularly when camera trajectories are complex. In this paper, we propose a novel method, RA-NeRF, capable of predicting highly accurate camera poses even with complex camera trajectories. Following the incremental pipeline, RA-NeRF reconstructs the scene using NeRF with photometric consistency and incorporates flow-driven pose regulation to enhance robustness during initialization and localization. Additionally, RA-NeRF employs an implicit pose filter to capture the camera movement pattern and eliminate the noise for pose estimation. To validate our method, we conduct extensive experiments on the Tanks & Temples dataset for standard evaluation, as well as the NeRFBuster dataset, which presents challenging camera pose trajectories. On both datasets, RA-NeRF achieves state-of-the-art results in both camera pose estimation and visual quality, demonstrating its effectiveness and robustness in scene reconstruction under complex pose trajectories.
| null |
https://arxiv.org/abs/2506.15242v2
|
https://arxiv.org/pdf/2506.15242v2.pdf
| null |
[
"Qingsong Yan",
"Qiang Wang",
"Kaiyong Zhao",
"Jie Chen",
"Bo Li",
"Xiaowen Chu",
"Fei Deng"
] |
[
"3DGS",
"3D Reconstruction",
"Camera Pose Estimation",
"NeRF",
"Pose Estimation"
] | 2025-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/smpl-normal-map-is-all-you-need-for-single
|
2506.12793
| null | null |
SMPL Normal Map Is All You Need for Single-view Textured Human Reconstruction
|
Single-view textured human reconstruction aims to reconstruct a clothed 3D digital human by inputting a monocular 2D image. Existing approaches include feed-forward methods, limited by scarce 3D human data, and diffusion-based methods, prone to erroneous 2D hallucinations. To address these issues, we propose a novel SMPL normal map Equipped 3D Human Reconstruction (SEHR) framework, integrating a pretrained large 3D reconstruction model with a human geometry prior. SEHR performs single-view human reconstruction without using a preset diffusion model in one forward propagation. Concretely, SEHR consists of two key components: SMPL Normal Map Guidance (SNMG) and SMPL Normal Map Constraint (SNMC). SNMG incorporates SMPL normal maps into an auxiliary network to provide improved body shape guidance. SNMC enhances invisible body parts by constraining the model to predict extra SMPL normal Gaussians. Extensive experiments on two benchmark datasets demonstrate that SEHR outperforms existing state-of-the-art methods.
| null |
https://arxiv.org/abs/2506.12793v1
|
https://arxiv.org/pdf/2506.12793v1.pdf
| null |
[
"Wenhao Shen",
"Gangjian Zhang",
"Jianfeng Zhang",
"Yu Feng",
"Nanjie Yao",
"Xuanmeng Zhang",
"Hao Wang"
] |
[
"3D Human Reconstruction",
"3D Reconstruction",
"All"
] | 2025-06-15T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).",
"full_name": "Diffusion",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Generation Models",
"parent": null
},
"name": "Diffusion",
"source_title": "Denoising Diffusion Probabilistic Models",
"source_url": "https://arxiv.org/abs/2006.11239v2"
}
] |
https://paperswithcode.com/paper/p2nia-privacy-preserving-non-iterative
|
2504.00874
| null | null |
P2NIA: Privacy-Preserving Non-Iterative Auditing
|
The emergence of AI legislation has increased the need to assess the ethical compliance of high-risk AI systems. Traditional auditing methods rely on platforms' application programming interfaces (APIs), where responses to queries are examined through the lens of fairness requirements. However, such approaches put a significant burden on platforms, as they are forced to maintain APIs while ensuring privacy, facing the possibility of data leaks. This lack of proper collaboration between the two parties, in turn, causes a significant challenge to the auditor, who is subject to estimation bias as they are unaware of the data distribution of the platform. To address these two issues, we present P2NIA, a novel auditing scheme that proposes a mutually beneficial collaboration for both the auditor and the platform. Extensive experiments demonstrate P2NIA's effectiveness in addressing both issues. In summary, our work introduces a privacy-preserving and non-iterative audit scheme that enhances fairness assessments using synthetic or local data, avoiding the challenges associated with traditional API-based audits.
| null |
https://arxiv.org/abs/2504.00874v1
|
https://arxiv.org/pdf/2504.00874v1.pdf
| null |
[
"Jade Garcia Bourrée",
"Hadrien Lautraite",
"Sébastien Gambs",
"Gilles Tredan",
"Erwan Le Merrer",
"Benoît Rottembourg"
] |
[
"Fairness",
"Privacy Preserving"
] | 2025-04-01T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/generalizing-vision-language-models-to-novel
|
2506.18504
| null | null |
Generalizing Vision-Language Models to Novel Domains: A Comprehensive Survey
|
Recently, vision-language pretraining has emerged as a transformative technique that integrates the strengths of both visual and textual modalities, resulting in powerful vision-language models (VLMs). Leveraging web-scale pretraining data, these models exhibit strong zero-shot capabilities. However, their performance often deteriorates when confronted with domain-specific or specialized generalization tasks. To address this, a growing body of research focuses on transferring or generalizing the rich knowledge embedded in VLMs to various downstream applications. This survey aims to comprehensively summarize the generalization settings, methodologies, benchmarking, and results in the VLM literature. Delving into the typical VLM structures, current works are categorized into prompt-based, parameter-based, and feature-based methods according to the transferred modules. The differences and characteristics of each category are further summarized and discussed by revisiting the typical transfer learning (TL) settings, providing novel interpretations for TL in the era of VLMs. Popular benchmarks for VLM generalization are further introduced with thorough performance comparisons among the reviewed methods. Following the advances in large-scale generalizable pretraining, this survey also discusses the relations and differences between VLMs and up-to-date multimodal large language models (MLLMs), e.g., DeepSeek-VL. By systematically reviewing the surging literature in vision-language research from a novel and practical generalization perspective, this survey contributes a clear landscape of current and future multimodal research.
| null |
https://arxiv.org/abs/2506.18504v1
|
https://arxiv.org/pdf/2506.18504v1.pdf
| null |
[
"Xinyao Li",
"Jingjing Li",
"Fengling Li",
"Lei Zhu",
"Yang Yang",
"Heng Tao Shen"
] |
[
"Benchmarking",
"Survey",
"Transfer Learning"
] | 2025-06-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/ancient-script-image-recognition-and
|
2506.19208
| null | null |
Ancient Script Image Recognition and Processing: A Review
|
Ancient scripts, e.g., Egyptian hieroglyphs, Oracle Bone Inscriptions, and Ancient Greek inscriptions, serve as vital carriers of human civilization, embedding invaluable historical and cultural information. Automating ancient script image recognition has gained importance, enabling large-scale interpretation and advancing research in archaeology and digital humanities. With the rise of deep learning, this field has progressed rapidly, with numerous script-specific datasets and models proposed. While these scripts vary widely, spanning phonographic systems with limited glyphs to logographic systems with thousands of complex symbols, they share common challenges and methodological overlaps. Moreover, ancient scripts face unique challenges, including imbalanced data distribution and image degradation, which have driven the development of various dedicated methods. This survey provides a comprehensive review of ancient script image recognition methods. We begin by categorizing existing studies based on script types and analyzing respective recognition methods, highlighting both their differences and shared strategies. We then focus on challenges unique to ancient scripts, systematically examining their impact and reviewing recent solutions, including few-shot learning and noise-robust techniques. Finally, we summarize current limitations and outline promising future directions. Our goal is to offer a structured, forward-looking perspective to support ongoing advancements in the recognition, interpretation, and decipherment of ancient scripts.
| null |
https://arxiv.org/abs/2506.19208v1
|
https://arxiv.org/pdf/2506.19208v1.pdf
| null |
[
"Xiaolei Diao",
"Rite Bo",
"Yanling Xiao",
"Lida Shi",
"Zhihan Zhou",
"Hao Xu",
"Chuntao Li",
"Xiongfeng Tang",
"Massimo Poesio",
"Cédric M. John",
"Daqian Shi"
] |
[
"Decipherment",
"Few-Shot Learning"
] | 2025-06-24T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
}
] |
https://paperswithcode.com/paper/deep-learning-based-multi-object-tracking-a
|
2506.13457
| null | null |
Deep Learning-Based Multi-Object Tracking: A Comprehensive Survey from Foundations to State-of-the-Art
|
Multi-object tracking (MOT) is a core task in computer vision that involves detecting objects in video frames and associating them across time. The rise of deep learning has significantly advanced MOT, particularly within the tracking-by-detection paradigm, which remains the dominant approach. Advancements in modern deep learning-based methods accelerated in 2022 with the introduction of ByteTrack for tracking-by-detection and MOTR for end-to-end tracking. Our survey provides an in-depth analysis of deep learning-based MOT methods, systematically categorizing tracking-by-detection approaches into five groups: joint detection and embedding, heuristic-based, motion-based, affinity learning, and offline methods. In addition, we examine end-to-end tracking methods and compare them with existing alternative approaches. We evaluate the performance of recent trackers across multiple benchmarks and specifically assess their generality by comparing results across different domains. Our findings indicate that heuristic-based methods achieve state-of-the-art results on densely populated datasets with linear object motion, while deep learning-based association methods, in both tracking-by-detection and end-to-end approaches, excel in scenarios with complex motion patterns.
| null |
https://arxiv.org/abs/2506.13457v1
|
https://arxiv.org/pdf/2506.13457v1.pdf
| null |
[
"Momir Adžemović"
] |
[
"Deep Learning",
"Multi-Object Tracking",
"Object Tracking"
] | 2025-06-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/recent-advances-in-multi-agent-human
|
2506.14831
| null | null |
Recent Advances in Multi-Agent Human Trajectory Prediction: A Comprehensive Review
|
With the emergence of powerful data-driven methods in human trajectory prediction (HTP), a finer understanding of multi-agent interactions is within reach, with important implications in areas such as autonomous navigation and crowd modeling. This survey reviews some of the most recent advancements in deep learning-based multi-agent trajectory prediction, focusing on studies published between 2020 and 2024. We categorize the existing methods based on their architectural design, their input representations, and their overall prediction strategies, placing a particular emphasis on models evaluated using the ETH/UCY benchmark. Furthermore, we highlight key challenges and future research directions in the field of multi-agent HTP.
| null |
https://arxiv.org/abs/2506.14831v1
|
https://arxiv.org/pdf/2506.14831v1.pdf
| null |
[
"Céline Finet",
"Stephane Da Silva Martins",
"Jean-Bernard Hayet",
"Ioannis Karamouzas",
"Javad Amirian",
"Sylvie Le Hégarat-Mascle",
"Julien Pettré",
"Emanuel Aldea"
] |
[
"Autonomous Navigation",
"Prediction",
"Trajectory Prediction"
] | 2025-06-13T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/r3evision-a-survey-on-robust-rendering
|
2506.16262
| null | null |
R3eVision: A Survey on Robust Rendering, Restoration, and Enhancement for 3D Low-Level Vision
|
Neural rendering methods such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS) have achieved significant progress in photorealistic 3D scene reconstruction and novel view synthesis. However, most existing models assume clean and high-resolution (HR) multi-view inputs, which limits their robustness under real-world degradations such as noise, blur, low-resolution (LR), and weather-induced artifacts. To address these limitations, the emerging field of 3D Low-Level Vision (3D LLV) extends classical 2D Low-Level Vision tasks including super-resolution (SR), deblurring, weather degradation removal, restoration, and enhancement into the 3D spatial domain. This survey, referred to as R³eVision, provides a comprehensive overview of robust rendering, restoration, and enhancement for 3D LLV by formalizing the degradation-aware rendering problem and identifying key challenges related to spatio-temporal consistency and ill-posed optimization. Recent methods that integrate LLV into neural rendering frameworks are categorized to illustrate how they enable high-fidelity 3D reconstruction under adverse conditions. Application domains such as autonomous driving, AR/VR, and robotics are also discussed, where reliable 3D perception from degraded inputs is critical. By reviewing representative methods, datasets, and evaluation protocols, this work positions 3D LLV as a fundamental direction for robust 3D content generation and scene-level reconstruction in real-world environments.
|
This survey, referred to as R³eVision, provides a comprehensive overview of robust rendering, restoration, and enhancement for 3D LLV by formalizing the degradation-aware rendering problem and identifying key challenges related to spatio-temporal consistency and ill-posed optimization.
|
https://arxiv.org/abs/2506.16262v2
|
https://arxiv.org/pdf/2506.16262v2.pdf
| null |
[
"Weeyoung Kwon",
"Jeahun Sung",
"Minkyu Jeon",
"Chanho Eom",
"Jihyong Oh"
] |
[
"3DGS",
"3D Reconstruction",
"3D Scene Reconstruction",
"Autonomous Driving",
"Deblurring",
"NeRF",
"Neural Rendering",
"Novel View Synthesis",
"Super-Resolution"
] | 2025-06-19T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-comprehensive-survey-on-deep-learning-1
|
2506.13201
| null | null |
A Comprehensive Survey on Deep Learning Solutions for 3D Flood Mapping
|
Flooding remains a major global challenge, worsened by climate change and urbanization, demanding advanced solutions for effective disaster management. While traditional 2D flood mapping techniques provide limited insights, 3D flood mapping, powered by deep learning (DL), offers enhanced capabilities by integrating flood extent and depth. This paper presents a comprehensive survey of deep learning-based 3D flood mapping, emphasizing its advancements over 2D maps by integrating flood extent and depth for effective disaster management and urban planning. The survey categorizes deep learning techniques into task decomposition and end-to-end approaches, applicable to both static and dynamic flood features. We compare key DL architectures, highlighting their respective roles in enhancing prediction accuracy and computational efficiency. Additionally, this work explores diverse data sources such as digital elevation models, satellite imagery, rainfall, and simulated data, outlining their roles in 3D flood mapping. The applications reviewed range from real-time flood prediction to long-term urban planning and risk assessment. However, significant challenges persist, including data scarcity, model interpretability, and integration with traditional hydrodynamic models. This survey concludes by suggesting future directions to address these limitations, focusing on enhanced datasets, improved models, and policy implications for flood management. This survey aims to guide researchers and practitioners in leveraging DL techniques for more robust and reliable 3D flood mapping, fostering improved flood management strategies.
| null |
https://arxiv.org/abs/2506.13201v1
|
https://arxiv.org/pdf/2506.13201v1.pdf
| null |
[
"Wenfeng Jia",
"Bin Liang",
"Yuxi Liu",
"Muhammad Arif Khan",
"Lihong Zheng"
] |
[
"Computational Efficiency",
"Deep Learning",
"Management",
"Survey"
] | 2025-06-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/pass-private-attributes-protection-with
|
2506.07308
| null | null |
PASS: Private Attributes Protection with Stochastic Data Substitution
|
The growing number of Machine Learning (ML) services requires extensive collections of user data, which may inadvertently include people's private information irrelevant to the services. Various studies have been proposed to protect private attributes by removing them from the data while maintaining the utilities of the data for downstream tasks. Nevertheless, as we theoretically and empirically show in the paper, these methods reveal a severe vulnerability because of a common weakness rooted in their adversarial-training-based strategies. To overcome this limitation, we propose a novel approach, PASS, designed to stochastically substitute the original sample with another one according to certain probabilities, which is trained with a novel loss function soundly derived from an information-theoretic objective defined for utility-preserving private attribute protection. The comprehensive evaluation of PASS on various datasets of different modalities, including facial images, human activity sensory signals, and voice recording datasets, substantiates PASS's effectiveness and generalizability.
| null |
https://arxiv.org/abs/2506.07308v1
|
https://arxiv.org/pdf/2506.07308v1.pdf
| null |
[
"Yizhuo Chen",
"Chun-Fu",
"Chen",
"Hsiang Hsu",
"Shaohan Hu",
"Tarek Abdelzaher"
] |
[] | 2025-06-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/jointrank-rank-large-set-with-single-pass
|
2506.22262
| null | null |
JointRank: Rank Large Set with Single Pass
|
Efficiently ranking relevant items from large candidate pools is a cornerstone of modern information retrieval systems -- such as web search, recommendation, and retrieval-augmented generation. Listwise rerankers, which improve relevance by jointly considering multiple candidates, are often limited in practice: either by model input size constraints, or by degraded quality when processing large sets. We propose a model-agnostic method for fast reranking of large sets that exceed a model's input limit. The method first partitions candidate items into overlapping blocks, each of which is ranked independently in parallel. Implicit pairwise comparisons are then derived from these local rankings. Finally, these comparisons are aggregated to construct a global ranking using algorithms such as Winrate or PageRank. Experiments on TREC DL-2019 show that our method achieves an nDCG@10 of 70.88 compared to 57.68 for the full-context listwise approach using gpt-4.1-mini as the long-context model, while reducing latency from 21 to 8 seconds. The implementation of the algorithm and the experiments is available in the repository: https://github.com/V3RGANz/jointrank
|
Finally, these comparisons are aggregated to construct a global ranking using algorithms such as Winrate or PageRank.
|
https://arxiv.org/abs/2506.22262v1
|
https://arxiv.org/pdf/2506.22262v1.pdf
| null |
[
"Evgeny Dedov"
] |
[
"Information Retrieval",
"Reranking",
"Retrieval",
"Retrieval-augmented Generation"
] | 2025-06-27T00:00:00 | null | null | null | null |
[] |
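
As a sketch of JointRank's aggregation step (the function name and data layout are assumptions): each block's local ranking implies a pairwise comparison for every pair it contains, and items can then be ordered globally by win rate.

```python
from collections import defaultdict
from itertools import combinations

def winrate_rank(block_rankings):
    """block_rankings: list of lists of item ids, each ordered best-first within a block."""
    wins, games = defaultdict(int), defaultdict(int)
    for ranking in block_rankings:
        for hi, lo in combinations(ranking, 2):  # hi was ranked above lo in this block
            wins[hi] += 1
            games[hi] += 1
            games[lo] += 1
    # global order: highest fraction of pairwise wins first
    return sorted(games, key=lambda item: wins[item] / games[item], reverse=True)
```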
https://paperswithcode.com/paper/gazal-r1-achieving-state-of-the-art-medical
|
2506.21594
| null | null |
Gazal-R1: Achieving State-of-the-Art Medical Reasoning with Parameter-Efficient Two-Stage Training
|
We present Gazal-R1, a 32-billion-parameter language model that achieves state-of-the-art performance in medical reasoning while providing transparent, step-by-step explanations for clinical decision-making. Built upon Qwen3 32B, our model demonstrates that strategic training can enable mid-sized models to outperform significantly larger counterparts in specialized domains. We developed a novel two-stage training pipeline: first, supervised fine-tuning on a carefully curated dataset of 107,033 synthetic medical reasoning examples that teaches structured clinical thinking, enhanced by advanced parameter-efficient techniques including Weight-Decomposed Low-Rank Adaptation (DoRA) and Rank-Stabilized LoRA (rsLoRA); second, reinforcement learning using Group Relative Policy Optimization (GRPO) with a sophisticated multi-component reward system that refines accuracy, format adherence, and reasoning quality. Gazal-R1 achieves exceptional performance across medical benchmarks, scoring 87.1% on MedQA, 81.6% on MMLU Pro (Medical), and 79.6% on PubMedQA, surpassing models up to 12x larger. Beyond its strong empirical results, this work provides detailed insights into the challenges of training reasoning-capable models in specialized domains, including issues with reward hacking, training instability, and the fundamental tension between factual recall and detailed reasoning. Our methodology offers a reproducible framework for developing high-capability, domain-specific language models that balance performance, efficiency, and explainability.
| null |
https://arxiv.org/abs/2506.21594v1
|
https://arxiv.org/pdf/2506.21594v1.pdf
| null |
[
"Ahmed M. Adly",
"Mostafa Samy",
"Amr Fawzy"
] |
[
"MedQA",
"MMLU"
] | 2025-06-18T00:00:00 | null | null | null | null |
[] |
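
Gazal-R1's second stage uses GRPO; the core group-relative advantage (a minimal sketch of the general technique, not the paper's multi-component reward system) can be written as:

```python
import torch

def grpo_advantages(group_rewards, eps=1e-6):
    """group_rewards: tensor [group_size] of scalar rewards for responses to one prompt.

    GRPO standardizes rewards within the sampled group, so no separate value
    model is needed to estimate a baseline.
    """
    mean, std = group_rewards.mean(), group_rewards.std()
    return (group_rewards - mean) / (std + eps)  # per-sample advantage
```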
https://paperswithcode.com/paper/transformers-meet-small-datasets
| null | null | null |
Transformers Meet Small Datasets
|
The research and application areas of transformers have been extensively enlarged due to the success of vision transformers (ViTs). However, due to the lack of local content acquisition capabilities, pure transformer architectures cannot be trained directly on small datasets. In this work, we first propose a new hybrid model by combining the transformer and convolutional neural network (CNN). The proposed model improves the classification ability on small datasets. This is accomplished by introducing more convolution operations in the transformer’s two core sections: 1) Instead of the original multi-head attention mechanism, we design a convolutional parameter sharing multi-head attention (CPSA) block that incorporates the convolutional parameter sharing projection in the attention mechanism; 2) the feed-forward network in each transformer encoder block is replaced with a local feed-forward network (LFFN) block that introduces a sandglass block with more depth-wise convolutions to provide more locality to the transformers. We achieve state-of-the-art results when training from scratch on 4 small datasets as compared with transformers and CNNs without extensive computing resources and auxiliary training. The proposed strategy opens up new paths for the application of transformers on small datasets.
| null |
https://ieeexplore.ieee.org/document/9944625
|
https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9944625
|
IEEE Access 2022 11
|
[
"Ran Shao",
"Xiao-Jun Bi"
] |
[
"Classification"
] | 2022-11-09T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "A **Linear Layer** is a projection $\\mathbf{XW + b}$.",
"full_name": "Linear Layer",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Linear Layer",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": "https://github.com/pytorch/vision/blob/7c077f6a986f05383bcb86b535aedb5a63dd5c4b/torchvision/models/resnet.py#L118",
"description": "**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. \r\n\r\nFormally, denoting the desired underlying mapping as $\\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\\mathcal{F}({x}):=\\mathcal{H}({x})-{x}$. The original mapping is recast into $\\mathcal{F}({x})+{x}$.\r\n\r\nThe intuition is that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.",
"full_name": "Residual Connection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Skip Connections** allow layers to skip layers and connect to layers further up the network, allowing for information to flow more easily up the network. Below you can find a continuously updating list of skip connection methods.",
"name": "Skip Connections",
"parent": null
},
"name": "Residual Connection",
"source_title": "Deep Residual Learning for Image Recognition",
"source_url": "http://arxiv.org/abs/1512.03385v1"
},
{
"code_snippet_url": "",
"description": "",
"full_name": "Attention Is All You Need",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "If you're looking to get in touch with American Airlines fast, ☎️+1-801-(855)-(5905)or +1-804-853-9001✅ there are\r\nseveral efficient ways to reach their customer service team. The quickest method is to dial ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. American’s phone service ensures that you can speak with a live\r\nrepresentative promptly to resolve any issues or queries regarding your booking, reservation,\r\nor any changes, such as name corrections or ticket cancellations.",
"name": "Attention Mechanisms",
"parent": "Attention"
},
"name": "Attention",
"source_title": "Attention Is All You Need",
"source_url": "https://arxiv.org/abs/1706.03762v7"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/jadore801120/attention-is-all-you-need-pytorch/blob/fec78a687210851f055f792d45300d27cc60ae41/transformer/SubLayers.py#L9",
"description": "**Multi-head Attention** is a module for attention mechanisms which runs through an attention mechanism several times in parallel. The independent attention outputs are then concatenated and linearly transformed into the expected dimension. Intuitively, multiple attention heads allows for attending to parts of the sequence differently (e.g. longer-term dependencies versus shorter-term dependencies). \r\n\r\n$$ \\text{MultiHead}\\left(\\textbf{Q}, \\textbf{K}, \\textbf{V}\\right) = \\left[\\text{head}\\_{1},\\dots,\\text{head}\\_{h}\\right]\\textbf{W}_{0}$$\r\n\r\n$$\\text{where} \\text{ head}\\_{i} = \\text{Attention} \\left(\\textbf{Q}\\textbf{W}\\_{i}^{Q}, \\textbf{K}\\textbf{W}\\_{i}^{K}, \\textbf{V}\\textbf{W}\\_{i}^{V} \\right) $$\r\n\r\nAbove $\\textbf{W}$ are all learnable parameter matrices.\r\n\r\nNote that [scaled dot-product attention](https://paperswithcode.com/method/scaled) is most commonly used in this module, although in principle it can be swapped out for other types of attention mechanism.\r\n\r\nSource: [Lilian Weng](https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html#a-family-of-attention-mechanisms)",
"full_name": "Multi-Head Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Attention Modules** refer to modules that incorporate attention mechanisms. For example, multi-head attention is a module that incorporates multiple attention heads. Below you can find a continuously updating list of attention modules.",
"name": "Attention Modules",
"parent": "Attention"
},
"name": "Multi-Head Attention",
"source_title": "Attention Is All You Need",
"source_url": "https://arxiv.org/abs/1706.03762v7"
},
{
"code_snippet_url": "https://github.com/google-research/vision_transformer",
"description": "The **Vision Transformer**, or **ViT**, is a model for image classification that employs a [Transformer](https://paperswithcode.com/method/transformer)-like architecture over patches of the image. An image is split into fixed-size patches, each of them are then linearly embedded, position embeddings are added, and the resulting sequence of vectors is fed to a standard [Transformer](https://paperswithcode.com/method/transformer) encoder. In order to perform classification, the standard approach of adding an extra learnable “classification token” to the sequence is used.",
"full_name": "Vision Transformer",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Models** are methods that build representations of images for downstream tasks such as classification and object detection. The most popular subcategory are convolutional neural networks. Below you can find a continuously updated list of image models.",
"name": "Image Models",
"parent": null
},
"name": "Vision Transformer",
"source_title": "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale",
"source_url": "https://arxiv.org/abs/2010.11929v2"
}
] |
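The Multi-Head Attention entry above gives the projection equations explicitly. Below is a minimal PyTorch sketch of the module as described there: scaled dot-product attention run in parallel heads, concatenated, then mapped by an output projection. Dimensions and names (`d_model`, `n_heads`) are illustrative assumptions, not the reference implementation linked in the entry.

```python
import math
import torch
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    """Minimal multi-head attention: h parallel scaled dot-product heads,
    concatenated and projected by W_0, as in the formulas above."""

    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        assert d_model % n_heads == 0
        self.d_k = d_model // n_heads
        self.n_heads = n_heads
        # W^Q, W^K, W^V for all heads, fused into single projections.
        self.w_q = nn.Linear(d_model, d_model)
        self.w_k = nn.Linear(d_model, d_model)
        self.w_v = nn.Linear(d_model, d_model)
        self.w_o = nn.Linear(d_model, d_model)  # W_0

    def forward(self, q, k, v):
        B, T, _ = q.shape
        # Split into heads: (B, n_heads, seq_len, d_k)
        def split(x):
            return x.view(B, -1, self.n_heads, self.d_k).transpose(1, 2)
        q, k, v = split(self.w_q(q)), split(self.w_k(k)), split(self.w_v(v))
        # Scaled dot-product attention per head.
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_k)
        attn = scores.softmax(dim=-1)
        out = attn @ v  # (B, n_heads, T, d_k)
        # Concatenate heads and apply the output projection W_0.
        out = out.transpose(1, 2).contiguous().view(B, T, -1)
        return self.w_o(out)

x = torch.randn(2, 10, 512)
print(MultiHeadAttention()(x, x, x).shape)  # torch.Size([2, 10, 512])
```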
https://paperswithcode.com/paper/discovering-multiple-antibiotic-resistance
| null | null | null |
Discovering multiple antibiotic resistance phenotypes using diverse top-k subgroup list discovery
|
Antibiotic resistance is one of the major global threats to human health and occurs when antibiotics lose their ability to combat bacterial infections. In this problem, a clinical decision support system could use phenotypes in order to alert clinicians of the emergence of patterns of antibiotic resistance in patients. Patient phenotyping is the task of finding a set of patient characteristics related to a specific medical problem such as the one described in this work. However, a single explanation of a medical phenomenon might be useless in the eyes of a clinical expert and be discarded. The discovery of multiple patient phenotypes for the same medical phenomenon would be useful in such cases. Therefore, in this work, we define the problem of mining diverse top-k phenotypes and propose the EDSLM algorithm, which is based on the Subgroup Discovery technique, the subgroup list model, and the Minimum Description Length principle. Our proposal provides clinicians with a method with which to obtain multiple and diverse phenotypes of a set of patients. We show a real use case of phenotyping in antimicrobial resistance using the well-known MIMIC-III dataset.
|
The discovery of multiple patient phenotypes for the same medical phenomenon would be useful in such cases.
|
https://doi.org/10.1016/j.artmed.2025.103200
|
https://doi.org/10.1016/j.artmed.2025.103200
|
Artificial Intelligence in Medicine 2025 6
|
[
"Antonio Lopez-Martinez-Carrasco",
"Hugo M. Proença",
"Jose M. Juarez",
"Matthijs van Leeuwen",
"Manuel Campos"
] |
[
"Data Mining",
"Decision Making",
"Diverse Top-k Subgroup List Discovery",
"Patient Phenotyping",
"Subgroup Discovery"
] | 2025-06-26T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Dynamic Sparse Training method where weight mask is updated randomly periodically",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
}
] |
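The SET entry tagged on this record describes periodic random updates of a sparse weight mask. A toy sketch of one prune-and-regrow step is below; the pruning fraction `zeta` and the fresh-weight initialization are illustrative assumptions drawn from the general description, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def set_mask_update(weights: np.ndarray, mask: np.ndarray, zeta: float = 0.3):
    """One SET step: prune the smallest-magnitude active weights,
    then regrow the same number of connections at random positions."""
    active = np.flatnonzero(mask)
    n_prune = int(zeta * active.size)
    # Prune: drop the n_prune active weights with the smallest |w|.
    order = np.argsort(np.abs(weights.ravel()[active]))
    mask.ravel()[active[order[:n_prune]]] = 0
    # Regrow: activate n_prune currently inactive positions at random.
    inactive = np.flatnonzero(mask.ravel() == 0)
    grown = rng.choice(inactive, size=n_prune, replace=False)
    mask.ravel()[grown] = 1
    weights.ravel()[grown] = rng.normal(0, 0.1, size=n_prune)  # fresh init
    return weights, mask

w = rng.normal(size=(64, 64))
m = (rng.random((64, 64)) < 0.1).astype(np.int8)  # start ~10% dense
w, m = set_mask_update(w, m)
print("active connections:", int(m.sum()))  # density stays constant
```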
https://paperswithcode.com/paper/binned-semiparametric-bayesian-networks
|
2506.21997
| null | null |
Binned semiparametric Bayesian networks
|
This paper introduces a new type of probabilistic semiparametric model that takes advantage of data binning to reduce the computational cost of kernel density estimation in nonparametric distributions. Two new conditional probability distributions are developed for the new binned semiparametric Bayesian networks, the sparse binned kernel density estimation and the Fourier kernel density estimation. These two probability distributions address the curse of dimensionality, which typically impacts binned models, by using sparse tensors and restricting the number of parent nodes in conditional probability calculations. To evaluate the proposal, we perform a complexity analysis and conduct several comparative experiments using synthetic data and datasets from the UCI Machine Learning repository. The experiments include different binning rules, parent restrictions, grid sizes, and number of instances to get a holistic view of the model's behavior. As a result, our binned semiparametric Bayesian networks achieve structural learning and log-likelihood estimations with no statistically significant differences compared to the semiparametric Bayesian networks, but at a much higher speed. Thus, the new binned semiparametric Bayesian networks prove to be a reliable and more efficient alternative to their non-binned counterparts.
|
This paper introduces a new type of probabilistic semiparametric model that takes advantage of data binning to reduce the computational cost of kernel density estimation in nonparametric distributions.
|
https://arxiv.org/abs/2506.21997v1
|
https://arxiv.org/pdf/2506.21997v1.pdf
| null |
[
"Rafael Sojo",
"Javier Díaz-Rozo",
"Concha Bielza",
"Pedro Larrañaga"
] |
[
"Density Estimation"
] | 2025-06-27T00:00:00 | null | null | null | null |
[] |
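The binned-semiparametric paper above accelerates kernel density estimation by binning data onto a grid. A minimal sketch of that generic idea (histogram the data, then convolve bin counts with a discretized Gaussian kernel) is below; grid size and bandwidth are illustrative, and this is the plain binned-KDE trick rather than the paper's sparse or Fourier variants.

```python
import numpy as np

def binned_kde(x, grid_min, grid_max, n_bins=512, bandwidth=0.2):
    """Approximate 1-D KDE: bin the data onto a grid, then convolve
    the bin counts with a truncated, discretized Gaussian kernel."""
    grid = np.linspace(grid_min, grid_max, n_bins)
    counts, _ = np.histogram(x, bins=n_bins, range=(grid_min, grid_max))
    step = grid[1] - grid[0]
    # Kernel sampled at the grid spacing, truncated at 4 bandwidths.
    half = int(np.ceil(4 * bandwidth / step))
    t = np.arange(-half, half + 1) * step
    kernel = np.exp(-0.5 * (t / bandwidth) ** 2)
    kernel /= kernel.sum() * step * len(x)  # normalize to a density
    density = np.convolve(counts, kernel, mode="same")
    return grid, density

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 0.5, 5000), rng.normal(1, 1.0, 5000)])
grid, dens = binned_kde(data, -5, 5)
print(f"integral ~ {np.trapz(dens, grid):.3f}")  # close to 1
```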
https://paperswithcode.com/paper/retfiner-a-vision-language-refinement-scheme
|
2506.22149
| null | null |
RetFiner: A Vision-Language Refinement Scheme for Retinal Foundation Models
|
The rise of imaging techniques such as optical coherence tomography (OCT) and advances in deep learning (DL) have enabled clinicians and researchers to streamline retinal disease staging. A popular DL approach is self-supervised learning (SSL), where models learn from vast amounts of unlabeled data, avoiding costly annotation. SSL has allowed the development of foundation models (FMs), large models that can be used for a variety of downstream tasks. However, existing FMs for OCT, trained solely on image data, lack a comprehensive and robust semantic understanding of images, as evidenced by their downstream performance (especially for complex tasks), and thus require supervised fine-tuning (which may be unfeasible) to better adapt to specific applications and populations. To address this, we propose RetFiner, an SSL vision-language refinement scheme that improves the representations of existing FMs and enables their efficient and direct adaptation to specific populations for improved downstream performance. Our method uses a diverse set of training objectives which take advantage of the rich supervisory signal found in textual data. We tested RetFiner on the retinal FMs RETFound, UrFound, and VisionFM, showing significant improvements in linear probing performance on seven highly diverse OCT classification tasks, with an average increase of 5.8, 3.9, and 2.1 percentage points over their baselines, respectively. Our code and model weights are publicly available at https://github.com/ronnief1/RetFiner.
|
To address this, we propose RetFiner, an SSL vision-language refinement scheme that improves the representations of existing FMs and enables their efficient and direct adaptation to specific populations for improved downstream performance.
|
https://arxiv.org/abs/2506.22149v1
|
https://arxiv.org/pdf/2506.22149v1.pdf
| null |
[
"Ronald Fecso",
"José Morano",
"Ursula Schmidt-Erfurth",
"Hrvoje Bogunović"
] |
[
"Self-Supervised Learning"
] | 2025-06-27T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Dynamic Sparse Training method where weight mask is updated randomly periodically",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
}
] |
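RetFiner is evaluated with linear probing: the refined encoder is frozen and only a linear classifier is trained on its features. A generic sketch of that protocol with scikit-learn is below; random vectors stand in for OCT embeddings, and the feature extractor itself is assumed upstream.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-ins for frozen foundation-model embeddings and class labels.
features = rng.normal(size=(1000, 768))
labels = rng.integers(0, 7, size=1000)  # e.g. 7 OCT classes (illustrative)

X_tr, X_te, y_tr, y_te = train_test_split(
    features, labels, test_size=0.2, random_state=0)

# Linear probe: only this classifier is trained; the encoder stays frozen.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_tr, y_tr)
print(f"linear-probe accuracy: {probe.score(X_te, y_te):.3f}")
```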
https://paperswithcode.com/paper/enlvam-enhanced-left-ventricle-linear
|
2506.22063
| null | null |
EnLVAM: Enhanced Left Ventricle Linear Measurements Utilizing Anatomical Motion Mode
|
Linear measurements of the left ventricle (LV) in the Parasternal Long Axis (PLAX) view using B-mode echocardiography are crucial for cardiac assessment. These involve placing 4-6 landmarks along a virtual scanline (SL) perpendicular to the LV axis near the mitral valve tips. Manual placement is time-consuming and error-prone, while existing deep learning methods often misalign landmarks, causing inaccurate measurements. We propose a novel framework that enhances LV measurement accuracy by enforcing straight-line constraints. A landmark detector is trained on Anatomical M-Mode (AMM) images, computed in real time from B-mode videos, then transformed back to B-mode space. This approach addresses misalignment and reduces measurement errors. Experiments show improved accuracy over standard B-mode methods, and the framework generalizes well across network architectures. Our semi-automatic design includes a human-in-the-loop step where the user only places the SL, simplifying interaction while preserving alignment flexibility and clinical relevance.
| null |
https://arxiv.org/abs/2506.22063v1
|
https://arxiv.org/pdf/2506.22063v1.pdf
| null |
[
"Durgesh K. Singh",
"Ahcene Boubekki",
"Qing Cao",
"Svein Arne Aase",
"Robert Jenssen",
"Michael Kampffmeyer"
] |
[] | 2025-06-27T00:00:00 | null | null | null | null |
[] |
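EnLVAM's landmark detector operates on Anatomical M-Mode images computed in real time from B-mode video: intensities are sampled along a fixed scanline in every frame and stacked over time. A hedged sketch of that resampling step is below; the bilinear sampler and the scanline endpoints are generic assumptions, not the authors' pipeline.

```python
import numpy as np

def anatomical_m_mode(video, p0, p1, n_samples=128):
    """Build an AMM image (time x scanline position) by bilinearly
    sampling each frame along the segment p0 -> p1.

    video: (T, H, W) grayscale B-mode frames; p0, p1: (y, x) endpoints.
    """
    T, H, W = video.shape
    ts = np.linspace(0.0, 1.0, n_samples)
    ys = p0[0] + ts * (p1[0] - p0[0])
    xs = p0[1] + ts * (p1[1] - p0[1])
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.clip(y0 + 1, 0, H - 1), np.clip(x0 + 1, 0, W - 1)
    wy, wx = ys - y0, xs - x0
    amm = np.empty((T, n_samples), dtype=video.dtype)
    for t in range(T):  # bilinear interpolation along the scanline
        f = video[t]
        amm[t] = ((1 - wy) * (1 - wx) * f[y0, x0] + (1 - wy) * wx * f[y0, x1]
                  + wy * (1 - wx) * f[y1, x0] + wy * wx * f[y1, x1])
    return amm

video = np.random.rand(60, 256, 256)              # 60 frames
amm = anatomical_m_mode(video, (40, 60), (200, 180))
print(amm.shape)  # (60, 128): one row of scanline intensities per frame
```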
https://paperswithcode.com/paper/cawr-corruption-averse-advantage-weighted
|
2506.15654
| null | null |
CAWR: Corruption-Averse Advantage-Weighted Regression for Robust Policy Optimization
|
Offline reinforcement learning (offline RL) algorithms often require additional constraints or penalty terms to address distribution shift issues, such as adding implicit or explicit policy constraints during policy optimization to reduce the estimation bias of functions. This paper focuses on a limitation of the Advantage-Weighted Regression family (AWRs), i.e., the potential for learning over-conservative policies due to data corruption, specifically the poor explorations in suboptimal offline data. We study it from two perspectives: (1) how poor explorations impact the theoretically optimal policy based on KL divergence, and (2) how such poor explorations affect the approximation of the theoretically optimal policy. We prove that such over-conservatism is mainly caused by the sensitivity of the loss function for policy optimization to poor explorations, and the proportion of poor explorations in offline datasets. To address this concern, we propose Corruption-Averse Advantage-Weighted Regression (CAWR), which incorporates a set of robust loss functions during policy optimization and an advantage-based prioritized experience replay method to filter out poor explorations. Numerical experiments on the D4RL benchmark show that our method can learn superior policies from suboptimal offline data, significantly enhancing the performance of policy optimization.
|
Offline reinforcement learning (offline RL) algorithms often require additional constraints or penalty terms to address distribution shift issues, such as adding implicit or explicit policy constraints during policy optimization to reduce the estimation bias of functions.
|
https://arxiv.org/abs/2506.15654v1
|
https://arxiv.org/pdf/2506.15654v1.pdf
| null |
[
"Ranting Hu"
] |
[
"D4RL",
"Offline RL",
"regression"
] | 2025-06-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Dynamic Sparse Training method where weight mask is updated randomly periodically",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
},
{
"code_snippet_url": "",
"description": "**Prioritized Experience Replay** is a type of [experience replay](https://paperswithcode.com/method/experience-replay) in reinforcement learning where we more frequently replay transitions with high expected learning progress, as measured by the magnitude of their temporal-difference (TD) error. This prioritization can lead to a loss of diversity, which is alleviated with stochastic prioritization, and introduce bias, which can be corrected with importance sampling.\r\n\r\nThe stochastic sampling method interpolates between pure greedy prioritization and uniform random sampling. The probability of being sampled is ensured to be monotonic in a transition's priority, while guaranteeing a non-zero probability even for the lowest-priority transition. Concretely, define the probability of sampling transition $i$ as\r\n\r\n$$P(i) = \\frac{p_i^{\\alpha}}{\\sum_k p_k^{\\alpha}}$$\r\n\r\nwhere $p_i > 0$ is the priority of transition $i$. The exponent $\\alpha$ determines how much prioritization is used, with $\\alpha=0$ corresponding to the uniform case.\r\n\r\nPrioritized replay introduces bias because it changes this distribution in an uncontrolled fashion, and therefore changes the solution that the estimates will converge to. We can correct this bias by using\r\nimportance-sampling (IS) weights:\r\n\r\n$$ w\\_{i} = \\left(\\frac{1}{N}\\cdot\\frac{1}{P\\left(i\\right)}\\right)^{\\beta} $$\r\n\r\nthat fully compensates for the non-uniform probabilities $P\\left(i\\right)$ if $\\beta = 1$. These weights can be folded into the [Q-learning](https://paperswithcode.com/method/q-learning) update by using $w\\_{i}\\delta\\_{i}$ instead of $\\delta\\_{i}$ - weighted IS rather than ordinary IS. For stability reasons, we always normalize weights by $1/\\max\\_{i}w\\_{i}$ so\r\nthat they only scale the update downwards.\r\n\r\nThe two types of prioritization are proportional based, where $p\\_{i} = |\\delta\\_{i}| + \\epsilon$ and rank-based, where $p\\_{i} = \\frac{1}{\\text{rank}\\left(i\\right)}$, the latter where $\\text{rank}\\left(i\\right)$ is the rank of transition $i$ when the replay memory is sorted according to |$\\delta\\_{i}$|, For proportional based, hyperparameters used were $\\alpha = 0.7$, $\\beta\\_{0} = 0.5$. For the rank-based variant, hyperparameters used were $\\alpha = 0.6$, $\\beta\\_{0} = 0.4$.",
"full_name": "Prioritized Experience Replay",
"introduced_year": 2000,
"main_collection": {
"area": "Reinforcement Learning",
"description": "",
"name": "Replay Memory",
"parent": null
},
"name": "Prioritized Experience Replay",
"source_title": "Prioritized Experience Replay",
"source_url": "http://arxiv.org/abs/1511.05952v4"
},
{
"code_snippet_url": null,
"description": "**Experience Replay** is a replay memory technique used in reinforcement learning where we store the agent’s experiences at each time-step, $e\\_{t} = \\left(s\\_{t}, a\\_{t}, r\\_{t}, s\\_{t+1}\\right)$ in a data-set $D = e\\_{1}, \\cdots, e\\_{N}$ , pooled over many episodes into a replay memory. We then usually sample the memory randomly for a minibatch of experience, and use this to learn off-policy, as with Deep Q-Networks. This tackles the problem of autocorrelation leading to unstable training, by making the problem more like a supervised learning problem.\r\n\r\nImage Credit: [Hands-On Reinforcement Learning with Python, Sudharsan Ravichandiran](https://subscription.packtpub.com/book/big_data_and_business_intelligence/9781788836524)",
"full_name": "Experience Replay",
"introduced_year": 1993,
"main_collection": {
"area": "Reinforcement Learning",
"description": "",
"name": "Replay Memory",
"parent": null
},
"name": "Experience Replay",
"source_title": null,
"source_url": null
}
] |
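CAWR pairs robust losses with advantage-based prioritized replay; the Prioritized Experience Replay entry above spells out the sampling probabilities and importance weights. A minimal proportional-prioritization buffer implementing exactly those two formulas is sketched below (the priorities here are generic TD-error-based ones, not CAWR's advantage-based variant).

```python
import numpy as np

class PrioritizedReplay:
    """Proportional prioritized replay: P(i) = p_i^alpha / sum_k p_k^alpha,
    with IS weights w_i = (1 / (N * P(i)))^beta, normalized by their max."""

    def __init__(self, capacity, alpha=0.7, beta=0.5, eps=1e-2):
        self.alpha, self.beta, self.eps = alpha, beta, eps
        self.data, self.prios = [], []
        self.capacity = capacity

    def add(self, transition, td_error):
        if len(self.data) >= self.capacity:
            self.data.pop(0); self.prios.pop(0)
        self.data.append(transition)
        self.prios.append(abs(td_error) + self.eps)  # p_i = |delta_i| + eps

    def sample(self, batch_size, seed=None):
        rng = np.random.default_rng(seed)
        p = np.asarray(self.prios) ** self.alpha
        probs = p / p.sum()
        idx = rng.choice(len(self.data), size=batch_size, p=probs)
        weights = (1.0 / (len(self.data) * probs[idx])) ** self.beta
        weights /= weights.max()  # scale updates downwards only
        return [self.data[i] for i in idx], idx, weights

buf = PrioritizedReplay(capacity=1000)
for i in range(100):
    buf.add((f"s{i}", 0, 0.0, f"s{i+1}"), td_error=np.random.randn())
batch, idx, w = buf.sample(8)
print(len(batch), w.round(2))
```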
https://paperswithcode.com/paper/evolutionary-caching-to-accelerate-your-off
|
2506.15682
| null | null |
Evolutionary Caching to Accelerate Your Off-the-Shelf Diffusion Model
|
Diffusion-based image generation models excel at producing high-quality synthetic content, but suffer from slow and computationally expensive inference. Prior work has attempted to mitigate this by caching and reusing features within diffusion transformers across inference steps. These methods, however, often rely on rigid heuristics that result in limited acceleration or poor generalization across architectures. We propose Evolutionary Caching to Accelerate Diffusion models (ECAD), a genetic algorithm that learns efficient, per-model, caching schedules forming a Pareto frontier, using only a small set of calibration prompts. ECAD requires no modifications to network parameters or reference images. It offers significant inference speedups, enables fine-grained control over the quality-latency trade-off, and adapts seamlessly to different diffusion models. Notably, ECAD's learned schedules can generalize effectively to resolutions and model variants not seen during calibration. We evaluate ECAD on PixArt-alpha, PixArt-Sigma, and FLUX-1.dev using multiple metrics (FID, CLIP, Image Reward) across diverse benchmarks (COCO, MJHQ-30k, PartiPrompts), demonstrating consistent improvements over previous approaches. On PixArt-alpha, ECAD identifies a schedule that outperforms the previous state-of-the-art method by 4.47 COCO FID while increasing inference speedup from 2.35x to 2.58x. Our results establish ECAD as a scalable and generalizable approach for accelerating diffusion inference. Our project website is available at https://aniaggarwal.github.io/ecad and our code is available at https://github.com/aniaggarwal/ecad.
|
Diffusion-based image generation models excel at producing high-quality synthetic content, but suffer from slow and computationally expensive inference.
|
https://arxiv.org/abs/2506.15682v1
|
https://arxiv.org/pdf/2506.15682v1.pdf
| null |
[
"Anirud Aggarwal",
"Abhinav Shrivastava",
"Matthew Gwilliam"
] |
[
"Image Generation"
] | 2025-06-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/OpenAI/CLIP",
"description": "**Contrastive Language-Image Pre-training** (**CLIP**), consisting of a simplified version of ConVIRT trained from scratch, is an efficient method of image representation learning from natural language supervision. , CLIP jointly trains an image encoder and a text encoder to predict the correct pairings of a batch of (image, text) training examples. At test time the learned text encoder synthesizes a zero-shot linear classifier by embedding the names or descriptions of the target dataset’s classes. \r\n\r\nFor pre-training, CLIP is trained to predict which of the $N X N$ possible (image, text) pairings across a batch actually occurred. CLIP learns a multi-modal embedding space by jointly training an image encoder and text encoder to maximize the cosine similarity of the image and text embeddings of the $N$ real pairs in the batch while minimizing the cosine similarity of the embeddings of the $N^2 - N$ incorrect pairings. A symmetric cross entropy loss is optimized over these similarity scores. \r\n\r\nImage credit: [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/pdf/2103.00020.pdf)",
"full_name": "Contrastive Language-Image Pre-training",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Representations",
"parent": null
},
"name": "CLIP",
"source_title": "Learning Transferable Visual Models From Natural Language Supervision",
"source_url": "https://arxiv.org/abs/2103.00020v1"
},
{
"code_snippet_url": null,
"description": "Dynamic Sparse Training method where weight mask is updated randomly periodically",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
},
{
"code_snippet_url": null,
"description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).",
"full_name": "Diffusion",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Generation Models",
"parent": null
},
"name": "Diffusion",
"source_title": "Denoising Diffusion Probabilistic Models",
"source_url": "https://arxiv.org/abs/2006.11239v2"
}
] |
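ECAD scores candidate caching schedules with metrics including CLIP score; the CLIP entry above describes the symmetric contrastive objective over a batch of pairings. A minimal sketch of that loss is below, with random features standing in for the image and text encoders (which are assumed upstream).

```python
import torch
import torch.nn.functional as F

def clip_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric cross-entropy over cosine-similarity logits:
    the i-th image should match the i-th text and vice versa."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature  # (N, N) similarities
    targets = torch.arange(len(logits))              # correct pairings
    loss_i = F.cross_entropy(logits, targets)        # image -> text
    loss_t = F.cross_entropy(logits.t(), targets)    # text -> image
    return (loss_i + loss_t) / 2

img = torch.randn(32, 512)   # stand-in image-encoder outputs
txt = torch.randn(32, 512)   # stand-in text-encoder outputs
print(clip_loss(img, txt).item())
```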
https://paperswithcode.com/paper/actalign-zero-shot-fine-grained-video
|
2506.22967
| null | null |
ActAlign: Zero-Shot Fine-Grained Video Classification via Language-Guided Sequence Alignment
|
We address the task of zero-shot fine-grained video classification, where no video examples or temporal annotations are available for unseen action classes. While contrastive vision-language models such as SigLIP demonstrate strong open-set recognition via mean-pooled image-text similarity, they fail to capture the temporal structure critical for distinguishing fine-grained activities. We introduce ActAlign, a zero-shot framework that formulates video classification as sequence alignment. For each class, a large language model generates an ordered sub-action sequence, which is aligned with video frames using Dynamic Time Warping (DTW) in a shared embedding space. Without any video-text supervision or fine-tuning, ActAlign achieves 30.5% accuracy on the extremely challenging ActionAtlas benchmark, where human accuracy is only 61.6%. ActAlign outperforms billion-parameter video-language models while using approximately 8x fewer parameters. These results demonstrate that structured language priors, combined with classical alignment techniques, offer a scalable and general approach to unlocking the open-set recognition potential of vision-language models for fine-grained video understanding.
|
We introduce ActAlign, a zero-shot framework that formulates video classification as sequence alignment.
|
https://arxiv.org/abs/2506.22967v1
|
https://arxiv.org/pdf/2506.22967v1.pdf
| null |
[
"Amir Aghdam",
"Vincent Tao Hu"
] |
[
"Dynamic Time Warping",
"Large Language Model",
"Open Set Learning",
"text similarity",
"Video Classification",
"Video Understanding"
] | 2025-06-28T00:00:00 | null | null | null | null |
[] |
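ActAlign scores a class by aligning frame embeddings against an LLM-generated sub-action sequence with Dynamic Time Warping. A generic DTW over a cosine-distance cost matrix is sketched below; the random embeddings are stand-ins for SigLIP features, and the step pattern is the standard textbook one rather than the paper's exact variant.

```python
import numpy as np

def dtw_cost(frames, subactions):
    """Dynamic Time Warping: minimal cumulative cosine distance aligning
    T frame embeddings to K ordered sub-action embeddings."""
    f = frames / np.linalg.norm(frames, axis=1, keepdims=True)
    s = subactions / np.linalg.norm(subactions, axis=1, keepdims=True)
    cost = 1.0 - f @ s.T                      # (T, K) cosine distances
    T, K = cost.shape
    D = np.full((T + 1, K + 1), np.inf)
    D[0, 0] = 0.0
    for t in range(1, T + 1):
        for k in range(1, K + 1):
            D[t, k] = cost[t - 1, k - 1] + min(D[t - 1, k],      # dwell on step
                                               D[t, k - 1],      # skip ahead
                                               D[t - 1, k - 1])  # advance both
    return D[T, K]

rng = np.random.default_rng(0)
frames = rng.normal(size=(40, 256))   # 40 video-frame embeddings
script = rng.normal(size=(5, 256))    # 5 ordered sub-action embeddings
print(f"alignment cost: {dtw_cost(frames, script):.3f}")
```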
https://paperswithcode.com/paper/computer-aided-multi-stroke-character
|
2506.23106
| null | null |
Computer-Aided Multi-Stroke Character Simplification by Stroke Removal
|
Multi-stroke characters in scripts such as Chinese and Japanese can be highly complex, posing significant challenges for both native speakers and, especially, non-native learners. If these characters can be simplified without degrading their legibility, it could reduce learning barriers for non-native speakers, facilitate simpler and legible font designs, and contribute to efficient character-based communication systems. In this paper, we propose a framework to systematically simplify multi-stroke characters by selectively removing strokes while preserving their overall legibility. More specifically, we use a highly accurate character recognition model to assess legibility and remove those strokes that minimally impact it. Experimental results on 1,256 character classes with 5, 10, 15, and 20 strokes reveal several key findings, including the observation that even after removing multiple strokes, many characters remain distinguishable. These findings suggest the potential for more formalized simplification strategies.
|
Multi-stroke characters in scripts such as Chinese and Japanese can be highly complex, posing significant challenges for both native speakers and, especially, non-native learners.
|
https://arxiv.org/abs/2506.23106v1
|
https://arxiv.org/pdf/2506.23106v1.pdf
| null |
[
"Ryo Ishiyama",
"Shinnosuke Matsuo",
"Seiichi Uchida"
] |
[] | 2025-06-29T00:00:00 | null | null | null | null |
[] |
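The stroke-simplification paper above removes, at each step, the stroke whose deletion least hurts the recognizer's legibility score. A hedged sketch of that greedy loop is below; `render` (strokes to image) and `legibility` (the recognition model's confidence for the true class) are hypothetical stand-in callables, not the authors' components.

```python
# Greedy stroke removal: repeatedly drop the stroke whose removal least
# reduces legibility, stopping once legibility would fall below a threshold.
# `render(strokes)` and `legibility(image, label)` are assumed helpers.

def simplify(strokes, label, render, legibility, threshold=0.9):
    strokes = list(strokes)
    while len(strokes) > 1:
        best_idx, best_score = None, -1.0
        for i in range(len(strokes)):
            candidate = strokes[:i] + strokes[i + 1:]
            score = legibility(render(candidate), label)
            if score > best_score:
                best_idx, best_score = i, score
        if best_score < threshold:  # the next removal would hurt too much
            break
        strokes.pop(best_idx)
    return strokes
```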
https://paperswithcode.com/paper/attention-to-burstiness-low-rank-bilinear
|
2506.22908
| null | null |
Attention to Burstiness: Low-Rank Bilinear Prompt Tuning
|
Visual Prompt Tuning (VPT) is a parameter-efficient fine-tuning technique that adapts a pre-trained vision Transformer (ViT) by learning a small set of parameters in the input space, known as prompts. In VPT, we uncover ``burstiness'' in the values arising from the interaction of image patch embeddings, and the key and query projectors within Transformer's self-attention module. Furthermore, the values of patch embeddings and the key and query projectors exhibit Laplacian and hyper-Laplacian distribution, respectively. Intuitively, these non-Gaussian distributions pose challenges for learning prompts. To address this, we propose whitening these data, de-correlating them and equalizing their variance so that they become more Gaussian before learning prompts. We derive the whitening matrix over random image patch embeddings and ViT's key and query projectors, and multiply it with the prompt to be learned in a bilinear manner. Surprisingly, this method significantly accelerates prompt tuning and boosts accuracy, e.g., $>$25 accuracy points on the CUB dataset; interestingly, it learns ``bursty prompts''. Extending the bilinear model, which is known to introduce burstiness, we present a compact, low-rank version by learning two smaller matrices whose multiplication yields the final prompts. We call the proposed methods Bilinear Prompt Tuning (BPT). Extensive experiments across multiple benchmark datasets demonstrate that BPT methods not only outperform various VPT methods but also reduce parameter count and computation overhead.
|
Visual Prompt Tuning (VPT) is a parameter-efficient fune-tuning technique that adapts a pre-trained vision Transformer (ViT) by learning a small set of parameters in the input space, known as prompts.
|
https://arxiv.org/abs/2506.22908v1
|
https://arxiv.org/pdf/2506.22908v1.pdf
| null |
[
"Yuzhu Wang",
"Manni Duan",
"Shu Kong"
] |
[
"Visual Prompt Tuning"
] | 2025-06-28T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "",
"description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)",
"full_name": "Absolute Position Encodings",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Position Embeddings",
"parent": null
},
"name": "Absolute Position Encodings",
"source_title": "Attention Is All You Need",
"source_url": "https://arxiv.org/abs/1706.03762v7"
},
{
"code_snippet_url": null,
"description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.",
"full_name": "Byte Pair Encoding",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "BPE",
"source_title": "Neural Machine Translation of Rare Words with Subword Units",
"source_url": "http://arxiv.org/abs/1508.07909v5"
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)",
"full_name": "Label Smoothing",
"introduced_year": 1985,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Label Smoothing",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201",
"description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).",
"full_name": "Transformer",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Transformer",
"source_title": "Attention Is All You Need",
"source_url": "https://arxiv.org/abs/1706.03762v7"
},
{
"code_snippet_url": null,
"description": "Dynamic Sparse Training method where weight mask is updated randomly periodically",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": "https://github.com/google-research/vision_transformer",
"description": "The **Vision Transformer**, or **ViT**, is a model for image classification that employs a [Transformer](https://paperswithcode.com/method/transformer)-like architecture over patches of the image. An image is split into fixed-size patches, each of them are then linearly embedded, position embeddings are added, and the resulting sequence of vectors is fed to a standard [Transformer](https://paperswithcode.com/method/transformer) encoder. In order to perform classification, the standard approach of adding an extra learnable “classification token” to the sequence is used.",
"full_name": "Vision Transformer",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Models** are methods that build representations of images for downstream tasks such as classification and object detection. The most popular subcategory are convolutional neural networks. Below you can find a continuously updated list of image models.",
"name": "Image Models",
"parent": null
},
"name": "Vision Transformer",
"source_title": "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale",
"source_url": "https://arxiv.org/abs/2010.11929v2"
}
] |
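Several method entries above carry explicit formulas; as one example, the sinusoidal Absolute Position Encodings can be generated directly from the two equations given there. A minimal NumPy sketch (sequence length and model width are illustrative):

```python
import numpy as np

def positional_encoding(max_len: int, d_model: int) -> np.ndarray:
    """PE(pos, 2i)   = sin(pos / 10000^(2i/d_model))
       PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model))"""
    pos = np.arange(max_len)[:, None]        # (max_len, 1) positions
    i = np.arange(0, d_model, 2)[None, :]    # even dimension indices
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)             # even dims: sine
    pe[:, 1::2] = np.cos(angles)             # odd dims: cosine
    return pe

pe = positional_encoding(max_len=50, d_model=512)
print(pe.shape)  # (50, 512): same d_model, added to the input embeddings
```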
https://paperswithcode.com/paper/layer-importance-for-mathematical-reasoning
|
2506.22638
| null | null |
Layer Importance for Mathematical Reasoning is Forged in Pre-Training and Invariant after Post-Training
|
Large language models can exhibit improved mathematical reasoning capabilities following post-training with instruction tuning, reinforcement learning, or knowledge distillation. However, it remains unclear whether these improvements are driven by major changes in transformer layers or from minor adjustments that leave the relative layer importance structures of the base model largely unchanged. We investigate this question through systematic layer-wise ablation experiments, examining base, instruction-tuned, knowledge-distilled, and reinforcement learning variants on mathematical reasoning benchmarks. Our findings show that mathematical reasoning gives rise to a specific layer importance structure, and this structure persists across all post-training paradigms. Removal of such layers causes accuracy drops of up to 80%. In contrast, non-mathematical tasks like factual recall exhibit no critical layers. This distinction suggests that mathematical reasoning requires specialized layers that emerge during pre-training, while other non-reasoning tasks do not. From an information-theoretic perspective, we also observe that these critical layers are the same layers where major representational transformation occurs.
| null |
https://arxiv.org/abs/2506.22638v1
|
https://arxiv.org/pdf/2506.22638v1.pdf
| null |
[
"Aadim Nepal",
"Safal Shrestha",
"Anubhav Shrestha",
"Minwu Kim",
"Keith Ross"
] |
[
"Knowledge Distillation",
"Mathematical Reasoning",
"reinforcement-learning",
"Reinforcement Learning"
] | 2025-06-27T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Balanced Selection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Active Learning",
"parent": null
},
"name": "BASE",
"source_title": "Active Learning at the ImageNet Scale",
"source_url": "https://arxiv.org/abs/2111.12880v1"
}
] |
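The layer-importance study above rests on layer-wise ablation: run the model with one transformer layer skipped at a time and measure the damage. A toy sketch of that protocol with a small PyTorch encoder is below; a real experiment would swap in an LLM and a reasoning benchmark instead of an output-drift probe.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
layers = nn.ModuleList(
    [nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
     for _ in range(6)])

def forward_skipping(x, skip_idx=None):
    """Run the layer stack, optionally ablating (skipping) one layer."""
    for i, layer in enumerate(layers):
        if i != skip_idx:
            x = layer(x)
    return x

x = torch.randn(8, 16, 64)  # (batch, seq_len, d_model)
with torch.no_grad():
    base = forward_skipping(x)
    for i in range(len(layers)):
        out = forward_skipping(x, skip_idx=i)
        drift = (out - base).norm() / base.norm()
        print(f"ablate layer {i}: relative output change {drift:.3f}")
```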
https://paperswithcode.com/paper/crisp-sam2-sam2-with-cross-modal-interaction
|
2506.23121
| null | null |
CRISP-SAM2: SAM2 with Cross-Modal Interaction and Semantic Prompting for Multi-Organ Segmentation
|
Multi-organ medical segmentation is a crucial component of medical image processing, essential for doctors to make accurate diagnoses and develop effective treatment plans. Despite significant progress in this field, current multi-organ segmentation models often suffer from inaccurate details, dependence on geometric prompts, and loss of spatial information. Addressing these challenges, we introduce a novel model named CRISP-SAM2 with CRoss-modal Interaction and Semantic Prompting based on SAM2. This model represents a promising approach to multi-organ medical segmentation guided by textual descriptions of organs. Our method begins by converting visual and textual inputs into cross-modal contextualized semantics using a progressive cross-attention interaction mechanism. These semantics are then injected into the image encoder to enhance the detailed understanding of visual information. To eliminate reliance on geometric prompts, we use a semantic prompting strategy, replacing the original prompt encoder to sharpen the perception of challenging targets. In addition, a similarity-sorting self-updating strategy for memory and a mask-refining process are applied to further adapt to medical imaging and enhance localized details. Comparative experiments conducted on seven public datasets indicate that CRISP-SAM2 outperforms existing models. Extensive analysis also demonstrates the effectiveness of our method, thereby confirming its superior performance, especially in addressing the limitations mentioned earlier. Our code is available at: https://github.com/YU-deep/CRISP\_SAM2.git.
|
These semantics are then injected into the image encoder to enhance the detailed understanding of visual information.
|
https://arxiv.org/abs/2506.23121v1
|
https://arxiv.org/pdf/2506.23121v1.pdf
| null |
[
"Xinlei Yu",
"Chanmiao Wang",
"Hui Jin",
"Ahmed Elazab",
"Gangyong Jia",
"Xiang Wan",
"Changqing Zou",
"Ruiquan Ge"
] |
[
"Organ Segmentation"
] | 2025-06-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/visual-semantic-knowledge-conflicts-in
|
2506.22500
| null | null |
Visual-Semantic Knowledge Conflicts in Operating Rooms: Synthetic Data Curation for Surgical Risk Perception in Multimodal Large Language Models
|
Surgical risk identification is critical for patient safety and reducing preventable medical errors. While multimodal large language models (MLLMs) show promise for automated operating room (OR) risk detection, they often exhibit visual-semantic knowledge conflicts (VS-KC), failing to identify visual safety violations despite understanding textual rules. To address this, we introduce a dataset comprising over 34,000 synthetic images generated by diffusion models, depicting operating room scenes containing entities that violate established safety rules. These images were created to alleviate data scarcity and examine MLLMs' vulnerabilities. In addition, the dataset includes 214 human-annotated images that serve as a gold-standard reference for validation. This comprehensive dataset, spanning diverse perspectives, stages, and configurations, is designed to expose and study VS-KC. Fine-tuning on OR-VSKC significantly improves MLLMs' detection of trained conflict entities and generalizes well to new viewpoints for these entities, but performance on untrained entity types remains poor, highlighting learning specificity and the need for comprehensive training. The main contributions of this work include: (1) a data generation methodology tailored for rule-violation scenarios; (2) the release of the OR-VSKC dataset and its associated benchmark as open-source resources; and (3) an empirical analysis of violation-sensitive knowledge consistency in representative MLLMs. The dataset and appendix are available at https://github.com/zgg2577/VS-KC.
|
To address this, we introduce a dataset comprising over 34,000 synthetic images generated by diffusion models, depicting operating room scenes containing entities that violate established safety rules.
|
https://arxiv.org/abs/2506.22500v1
|
https://arxiv.org/pdf/2506.22500v1.pdf
| null |
[
"Weiyi Zhao",
"Xiaoyu Tan",
"Liang Liu",
"Sijia Li",
"Youwei Song",
"Xihe Qiu"
] |
[
"Specificity"
] | 2025-06-25T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).",
"full_name": "Diffusion",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Generation Models",
"parent": null
},
"name": "Diffusion",
"source_title": "Denoising Diffusion Probabilistic Models",
"source_url": "https://arxiv.org/abs/2006.11239v2"
}
] |
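The Diffusion entry above notes that the training objective is a reweighted variational lower bound; in the simplified DDPM form this reduces to an MSE between true and predicted noise at a random timestep. A minimal sketch of one training-loss evaluation is below, with a small MLP standing in for the noise-prediction network (a real model would be a time-conditioned U-Net; the timestep conditioning is omitted here).

```python
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative alpha_bar_t

# Stand-in epsilon-predictor over flat 32-d "images".
eps_model = nn.Sequential(nn.Linear(32, 128), nn.SiLU(), nn.Linear(128, 32))

def ddpm_loss(x0):
    """Simplified DDPM objective: E ||eps - eps_theta(x_t)||^2 with
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps."""
    t = torch.randint(0, T, (x0.size(0),))       # random timestep per sample
    abar = alphas_bar[t].unsqueeze(-1)
    eps = torch.randn_like(x0)
    x_t = abar.sqrt() * x0 + (1 - abar).sqrt() * eps
    return ((eps_model(x_t) - eps) ** 2).mean()  # t-conditioning omitted

x0 = torch.randn(16, 32)  # toy data batch
print(ddpm_loss(x0).item())
```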
https://paperswithcode.com/paper/dare-to-plagiarize-plagiarized-painting
|
2506.23132
| null | null |
Dare to Plagiarize? Plagiarized Painting Recognition and Retrieval
|
Art plagiarism detection plays a crucial role in protecting artists' copyrights and intellectual property, yet it remains a challenging problem in forensic analysis. In this paper, we address the task of recognizing plagiarized paintings and explaining the detected plagiarisms by retrieving visually similar authentic artworks. To support this study, we construct a dataset by collecting painting photos and synthesizing plagiarized versions using generative AI, tailored to specific artists' styles. We first establish a baseline approach using off-the-shelf features from the visual foundation model DINOv2 to retrieve the most similar images in the database and classify plagiarism based on a similarity threshold. Surprisingly, this non-learned method achieves a high recognition accuracy of 97.2\% but suffers from low retrieval precision of 29.0\% average precision (AP). To improve retrieval quality, we finetune DINOv2 with a metric learning loss using positive and negative sample pairs sampled in the database. The finetuned model greatly improves retrieval performance by 12\% AP over the baseline, though it unexpectedly results in a lower recognition accuracy (92.7\%). We conclude with insightful discussions and outline directions for future research.
| null |
https://arxiv.org/abs/2506.23132v1
|
https://arxiv.org/pdf/2506.23132v1.pdf
| null |
[
"Sophie Zhou",
"Shu Kong"
] |
[
"Metric Learning",
"Retrieval"
] | 2025-06-29T00:00:00 | null | null | null | null |
[] |
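The baseline above is retrieval by cosine similarity of off-the-shelf features plus a threshold on the best match. A sketch of that non-learned pipeline over precomputed embeddings is below; the feature extraction itself (e.g. a DINOv2 forward pass) is assumed upstream and replaced with random vectors, and the threshold value is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
gallery = rng.normal(size=(500, 384))   # authentic-artwork embeddings
query = rng.normal(size=(384,))         # embedding of a suspect painting

def retrieve_and_classify(query, gallery, k=5, threshold=0.6):
    """Cosine-similarity retrieval; flag plagiarism when the best match
    exceeds a similarity threshold."""
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sims = g @ q
    top = np.argsort(-sims)[:k]                   # most similar authentics
    return top, sims[top], bool(sims[top[0]] > threshold)

idx, sims, flagged = retrieve_and_classify(query, gallery)
print(idx, sims.round(3), "plagiarized?", flagged)
```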
https://paperswithcode.com/paper/mdpg-multi-domain-diffusion-prior-guidance
|
2506.23701
| null | null |
MDPG: Multi-domain Diffusion Prior Guidance for MRI Reconstruction
|
Magnetic Resonance Imaging (MRI) reconstruction is essential in medical diagnostics. As the latest generative models, diffusion models (DMs) have struggled to produce high-fidelity images due to their stochastic nature in image domains. Latent diffusion models (LDMs) yield both compact and detailed prior knowledge in latent domains, which could effectively guide the model towards more effective learning of the original data distribution. Inspired by this, we propose Multi-domain Diffusion Prior Guidance (MDPG) provided by pre-trained LDMs to enhance data consistency in MRI reconstruction tasks. Specifically, we first construct a Visual-Mamba-based backbone, which enables efficient encoding and reconstruction of under-sampled images. Then pre-trained LDMs are integrated to provide conditional priors in both latent and image domains. A novel Latent Guided Attention (LGA) is proposed for efficient fusion in multi-level latent domains. Simultaneously, to effectively utilize a prior in both the k-space and image domain, under-sampled images are fused with generated full-sampled images by the Dual-domain Fusion Branch (DFB) for self-adaption guidance. Lastly, to further enhance the data consistency, we propose a k-space regularization strategy based on the non-auto-calibration signal (NACS) set. Extensive experiments on two public MRI datasets fully demonstrate the effectiveness of the proposed methodology. The code is available at https://github.com/Zolento/MDPG.
|
Latent diffusion models (LDMs) yield both compact and detailed prior knowledge in latent domains, which could effectively guide the model towards more effective learning of the original data distribution.
|
https://arxiv.org/abs/2506.23701v1
|
https://arxiv.org/pdf/2506.23701v1.pdf
| null |
[
"Lingtong Zhang",
"Mengdie Song",
"Xiaohan Hao",
"Huayu Mai",
"Bensheng Qiu"
] |
[
"Mamba",
"MRI Reconstruction"
] | 2025-06-30T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).",
"full_name": "Diffusion",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Generation Models",
"parent": null
},
"name": "Diffusion",
"source_title": "Denoising Diffusion Probabilistic Models",
"source_url": "https://arxiv.org/abs/2006.11239v2"
}
] |
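MDPG reconstructs under-sampled MRI; the setting can be made concrete with a small sketch of how an under-sampled k-space and its zero-filled reconstruction arise. The Cartesian column mask below is a generic choice for illustration, not the paper's sampling pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((128, 128))                 # stand-in MR slice

# Full k-space, then keep ~25% of phase-encode columns (center preserved).
kspace = np.fft.fftshift(np.fft.fft2(image))
mask = rng.random(128) < 0.25
mask[54:74] = True                             # fully sample low frequencies
kspace_us = kspace * mask[None, :]

# Zero-filled reconstruction: inverse FFT of the masked k-space.
zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace_us)))
err = np.linalg.norm(zero_filled - image) / np.linalg.norm(image)
print(f"sampled columns: {int(mask.sum())}/128, zero-filled NRMSE: {err:.3f}")
```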
https://paperswithcode.com/paper/refine-any-object-in-any-scene
|
2506.23835
| null | null |
Refine Any Object in Any Scene
|
Missing viewpoints of objects are common in scene reconstruction, as camera paths typically prioritize capturing the overall scene structure rather than individual objects. This makes it highly challenging to achieve high-fidelity object-level modeling while maintaining accurate scene-level representation. Addressing this issue is critical for advancing downstream tasks requiring detailed object understanding and appearance modeling. In this paper, we introduce Refine Any object In any ScenE (RAISE), a novel 3D enhancement framework that leverages 3D generative priors to recover fine-grained object geometry and appearance under missing views. Starting from substituting degraded objects with proxies, via a 3D generative model with strong 3D understanding, RAISE progressively refines geometry and texture by aligning each proxy to its degraded counterpart in 7-DOF pose, followed by correcting spatial and appearance inconsistencies via registration-constrained enhancement. This two-stage refinement ensures the high-fidelity geometry and appearance of the original object in unseen views while maintaining consistency in spatial positioning, observed geometry, and appearance. Extensive experiments on challenging benchmarks show that RAISE significantly outperforms state-of-the-art methods in both novel view synthesis and geometry completion tasks. RAISE is made publicly available at https://github.com/PolySummit/RAISE.
|
This two-stage refinement ensures the high-fidelity geometry and appearance of the original object in unseen views while maintaining consistency in spatial positioning, observed geometry, and appearance.
|
https://arxiv.org/abs/2506.23835v1
|
https://arxiv.org/pdf/2506.23835v1.pdf
| null |
[
"Ziwei Chen",
"Ziling Liu",
"Zitong Huang",
"Mingqi Gao",
"Feng Zheng"
] |
[
"Novel View Synthesis",
"Object"
] | 2025-06-30T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/sensing-security-oriented-ofdm-isac-against
|
2506.22824
| null | null |
Sensing Security Oriented OFDM-ISAC Against Multi-Intercept Threats
|
In recent years, security has emerged as a critical aspect of integrated sensing and communication (ISAC) systems. While significant research has focused on secure communications, particularly in ensuring physical layer security, the issue of sensing security has received comparatively less attention. This paper addresses the sensing security problem in ISAC, particularly under the threat of multi-intercept adversaries. We consider a realistic scenario in which the sensing target is an advanced electronic reconnaissance aircraft capable of employing multiple signal interception techniques, such as power detection (PD) and cyclostationary analysis (CA). To evaluate sensing security under such sophisticated threats, we analyze two critical features of the transmitted signal: (i) power distribution and (ii) cyclic spectrum. Further, we introduce a novel ergodic cyclic spectrum metric which leverages the intrinsic mathematical structure of cyclostationary signals to more comprehensively characterize their behavior. Building on this analysis, we formulate a new ISAC design problem that explicitly considers sensing security, and we develop a low-complexity, efficient optimization approach to solve it. Simulation results demonstrate that the proposed metric is both effective and insightful, and that our ISAC design significantly enhances sensing security performance in the presence of multi-intercept threats.
| null |
https://arxiv.org/abs/2506.22824v1
|
https://arxiv.org/pdf/2506.22824v1.pdf
| null |
[
"Lingyun Xu",
"Bowen Wang",
"Huiyong Li",
"Ziyang Cheng"
] |
[
"Integrated sensing and communication",
"ISAC"
] | 2025-06-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/mtadiffusion-mask-text-alignment-diffusion-1
|
2506.23482
| null | null |
MTADiffusion: Mask Text Alignment Diffusion Model for Object Inpainting
|
Advancements in generative models have enabled image inpainting models to generate content within specific regions of an image based on provided prompts and masks. However, existing inpainting methods often suffer from problems such as semantic misalignment, structural distortion, and style inconsistency. In this work, we present MTADiffusion, a Mask-Text Alignment diffusion model designed for object inpainting. To enhance the semantic capabilities of the inpainting model, we introduce MTAPipeline, an automatic solution for annotating masks with detailed descriptions. Based on the MTAPipeline, we construct a new MTADataset comprising 5 million images and 25 million mask-text pairs. Furthermore, we propose a multi-task training strategy that integrates both inpainting and edge prediction tasks to improve structural stability. To promote style consistency, we present a novel inpainting style-consistency loss using a pre-trained VGG network and the Gram matrix. Comprehensive evaluations on BrushBench and EditBench demonstrate that MTADiffusion achieves state-of-the-art performance compared to other methods.
| null |
https://arxiv.org/abs/2506.23482v1
|
https://arxiv.org/pdf/2506.23482v1.pdf
|
CVPR 2025 1
|
[
"Jun Huang",
"Ting Liu",
"Yihang Wu",
"Xiaochao Qu",
"Luoqi Liu",
"Xiaolin Hu"
] |
[
"Image Inpainting"
] | 2025-06-30T00:00:00 |
http://openaccess.thecvf.com//content/CVPR2025/html/Huang_MTADiffusion_Mask_Text_Alignment_Diffusion_Model_for_Object_Inpainting_CVPR_2025_paper.html
|
http://openaccess.thecvf.com//content/CVPR2025/papers/Huang_MTADiffusion_Mask_Text_Alignment_Diffusion_Model_for_Object_Inpainting_CVPR_2025_paper.pdf
|
mtadiffusion-mask-text-alignment-diffusion
| null |
[
{
"code_snippet_url": "",
"description": "Train a convolutional neural network to generate the contents of an arbitrary image region conditioned on its surroundings.",
"full_name": "Inpainting",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Self-Supervised Learning** refers to a category of methods where we learn representations in a self-supervised way (i.e without labels). These methods generally involve a pretext task that is solved to learn a good representation and a loss function to learn with. Below you can find a continuously updating list of self-supervised methods.",
"name": "Self-Supervised Learning",
"parent": null
},
"name": "Inpainting",
"source_title": "Context Encoders: Feature Learning by Inpainting",
"source_url": "http://arxiv.org/abs/1604.07379v2"
},
{
"code_snippet_url": null,
"description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).",
"full_name": "Diffusion",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Generation Models",
"parent": null
},
"name": "Diffusion",
"source_title": "Denoising Diffusion Probabilistic Models",
"source_url": "https://arxiv.org/abs/2006.11239v2"
}
] |
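MTADiffusion's style-consistency loss compares Gram matrices of pre-trained VGG features. A minimal sketch of that loss between two feature maps is below; the VGG feature extraction is assumed upstream, with random tensors standing in for its activations.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    """Channel-channel correlations of a feature map: (B, C, C) Grams."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)  # normalized

def style_loss(feat_a, feat_b):
    """MSE between Gram matrices; small when the two images share style."""
    return F.mse_loss(gram_matrix(feat_a), gram_matrix(feat_b))

# Stand-ins for VGG activations of the inpainted region and its context.
fa = torch.randn(1, 256, 32, 32)
fb = torch.randn(1, 256, 32, 32)
print(style_loss(fa, fb).item())
```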
https://paperswithcode.com/paper/the-trilemma-of-truth-in-large-language
|
2506.23921
| null | null |
The Trilemma of Truth in Large Language Models
|
We often attribute human characteristics to large language models (LLMs) and claim that they "know" certain things. LLMs have an internal probabilistic knowledge that represents information retained during training. How can we assess the veracity of this knowledge? We examine two common methods for probing the veracity of LLMs and discover several assumptions that are flawed. To address these flawed assumptions, we introduce sAwMIL (short for Sparse Aware Multiple-Instance Learning), a probing method that utilizes the internal activations of LLMs to separate statements into true, false, and neither. sAwMIL is based on multiple-instance learning and conformal prediction. We evaluate sAwMIL on 5 validity criteria across 16 open-source LLMs, including both default and chat-based variants, as well as on 3 new datasets. Among the insights we provide are: (1) the veracity signal is often concentrated in the third quarter of an LLM's depth; (2) truth and falsehood signals are not always symmetric; (3) linear probes perform better on chat models than on default models; (4) nonlinear probes may be required to capture veracity signals for some LLMs with reinforcement learning from human feedback or knowledge distillation; and (5) LLMs capture a third type of signal that is distinct from true and false and is neither true nor false. These findings provide a reliable method for verifying what LLMs "know" and how certain they are of their probabilistic internal knowledge.
|
These findings provide a reliable method for verifying what LLMs "know" and how certain they are of their probabilistic internal knowledge.
|
https://arxiv.org/abs/2506.23921v1
|
https://arxiv.org/pdf/2506.23921v1.pdf
| null |
[
"Germans Savcisens",
"Tina Eliassi-Rad"
] |
[
"Attribute",
"Conformal Prediction",
"Knowledge Distillation",
"Multiple Instance Learning",
"Text Classification"
] | 2025-06-30T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "A **Linear Layer** is a projection $\\mathbf{XW + b}$.",
"full_name": "Linear Layer",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Linear Layer",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "A **Support Vector Machine**, or **SVM**, is a non-parametric supervised learning model. For non-linear classification and regression, they utilise the kernel trick to map inputs to high-dimensional feature spaces. SVMs construct a hyper-plane or set of hyper-planes in a high or infinite dimensional space, which can be used for classification, regression or other tasks. Intuitively, a good separation is achieved by the hyper-plane that has the largest distance to the nearest training data points of any class (so-called functional margin), since in general the larger the margin the lower the generalization error of the classifier. The figure to the right shows the decision function for a linearly separable problem, with three samples on the margin boundaries, called “support vectors”. \r\n\r\nSource: [scikit-learn](https://scikit-learn.org/stable/modules/svm.html)",
"full_name": "Support Vector Machine",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.",
"name": "Non-Parametric Classification",
"parent": null
},
"name": "SVM",
"source_title": null,
"source_url": null
}
] |
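Note: as a point of reference for the probing setup above, here is a minimal linear-probe baseline over hidden activations with scikit-learn. It is not sAwMIL itself (which adds multiple-instance learning and conformal prediction); the arrays are random stand-ins for real LLM activations and veracity labels.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Stand-in data: activations of one LLM layer for 300 statements, labeled
# true (0), false (1), or neither (2).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 768))
y = rng.integers(0, 3, size=300)

# A plain one-vs-rest linear probe over the three veracity classes
probe = LinearSVC().fit(X, y)
print(probe.score(X, y))
```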
https://paperswithcode.com/paper/the-curse-of-depth-in-large-language-models
|
2502.05795
| null | null |
The Curse of Depth in Large Language Models
|
In this paper, we introduce the Curse of Depth, a concept that highlights, explains, and addresses the recent observation in modern Large Language Models (LLMs) where nearly half of the layers are less effective than expected. We first confirm the wide existence of this phenomenon across the most popular families of LLMs such as Llama, Mistral, DeepSeek, and Qwen. Our analysis, theoretically and empirically, identifies that the underlying reason for the ineffectiveness of deep layers in LLMs is the widespread usage of Pre-Layer Normalization (Pre-LN). While Pre-LN stabilizes the training of Transformer LLMs, its output variance grows exponentially with the model depth, which undesirably causes the derivative of the deep Transformer blocks to approach an identity matrix, so that these blocks barely contribute to training. To resolve this training pitfall, we propose LayerNorm Scaling, which scales the output variance of the layer normalization inversely by the square root of its depth. This simple modification mitigates the output variance explosion of deeper Transformer layers, improving their contribution. Our experimental results, spanning model sizes from 130M to 1B, demonstrate that LayerNorm Scaling significantly enhances LLM pre-training performance compared to Pre-LN. Moreover, this improvement seamlessly carries over to supervised fine-tuning. All these gains can be attributed to the fact that LayerNorm Scaling enables deeper layers to contribute more effectively during training.
| null |
https://arxiv.org/abs/2502.05795v1
|
https://arxiv.org/pdf/2502.05795v1.pdf
| null |
[
"Wenfang Sun",
"Xinyuan Song",
"Pengxiang Li",
"Lu Yin",
"Yefeng Zheng",
"Shiwei Liu"
] |
[] | 2025-02-09T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "",
"description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)",
"full_name": "Absolute Position Encodings",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Position Embeddings",
"parent": null
},
"name": "Absolute Position Encodings",
"source_title": "Attention Is All You Need",
"source_url": "https://arxiv.org/abs/1706.03762v7"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.",
"full_name": "Byte Pair Encoding",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "BPE",
"source_title": "Neural Machine Translation of Rare Words with Subword Units",
"source_url": "http://arxiv.org/abs/1508.07909v5"
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "",
"description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)",
"full_name": "Label Smoothing",
"introduced_year": 1985,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Label Smoothing",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201",
"description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).",
"full_name": "Transformer",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Transformer",
"source_title": "Attention Is All You Need",
"source_url": "https://arxiv.org/abs/1706.03762v7"
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
}
] |
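Note: the LayerNorm Scaling fix described in the abstract scales the output of layer normalization by the inverse square root of the layer's depth. A minimal PyTorch sketch of that idea (module name and sizes are illustrative):

```python
import math
import torch
import torch.nn as nn

class ScaledLayerNorm(nn.Module):
    """Pre-LN variant that scales the normalized output by 1/sqrt(layer_depth),
    following the LayerNorm Scaling idea described in the abstract."""
    def __init__(self, hidden_size, layer_depth):
        super().__init__()
        self.ln = nn.LayerNorm(hidden_size)
        self.scale = 1.0 / math.sqrt(layer_depth)  # depth is 1-indexed

    def forward(self, x):
        return self.ln(x) * self.scale
```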
https://paperswithcode.com/paper/fadrm-fast-and-accurate-data-residual
|
2506.24125
| null | null |
FADRM: Fast and Accurate Data Residual Matching for Dataset Distillation
|
Residual connection has been extensively studied and widely applied at the model architecture level. However, its potential in the more challenging data-centric approaches remains unexplored. In this work, we introduce the concept of Data Residual Matching for the first time, leveraging data-level skip connections to facilitate data generation and mitigate data information vanishing. This approach maintains a balance between newly acquired knowledge through pixel space optimization and existing core local information identification within raw data modalities, specifically for the dataset distillation task. Furthermore, by incorporating optimization-level refinements, our method significantly improves computational efficiency, achieving superior performance while reducing training time and peak GPU memory usage by 50%. Consequently, the proposed method Fast and Accurate Data Residual Matching for Dataset Distillation (FADRM) establishes a new state-of-the-art, demonstrating substantial improvements over existing methods across multiple dataset benchmarks in both efficiency and effectiveness. For instance, with ResNet-18 as the student model and a 0.8% compression ratio on ImageNet-1K, the method achieves 47.7% test accuracy in single-model dataset distillation and 50.0% in multi-model dataset distillation, surpassing RDED by +5.7% and outperforming state-of-the-art multi-model approaches, EDC and CV-DD, by +1.4% and +4.0%. Code is available at: https://github.com/Jiacheng8/FADRM.
|
For instance, with ResNet-18 as the student model and a 0.8% compression ratio on ImageNet-1K, the method achieves 47.7% test accuracy in single-model dataset distillation and 50.0% in multi-model dataset distillation, surpassing RDED by +5.7% and outperforming state-of-the-art multi-model approaches, EDC and CV-DD, by +1.4% and +4.0%.
|
https://arxiv.org/abs/2506.24125v1
|
https://arxiv.org/pdf/2506.24125v1.pdf
| null |
[
"Jiacheng Cui",
"Xinyue Bi",
"Yaxin Luo",
"Xiaohan Zhao",
"Jiacheng Liu",
"Zhiqiang Shen"
] |
[
"Computational Efficiency",
"Dataset Distillation",
"GPU"
] | 2025-06-30T00:00:00 | null | null | null | null |
[] |
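Note: one plausible reading of the data-level skip connection behind Data Residual Matching is blending the pixel-space-optimized synthetic image with the raw image so core local information is preserved. A hedged sketch (the mixing coefficient `alpha` is an assumption, not a value from the paper):

```python
import torch

def data_residual_update(x_raw, x_synth, alpha=0.5):
    # Data-level skip connection: blend the optimized synthetic image with
    # the raw source image so core local information is not lost.
    # alpha is a hypothetical mixing coefficient, not a value from the paper.
    return alpha * x_synth + (1.0 - alpha) * x_raw
```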
https://paperswithcode.com/paper/context-driven-knowledge-graph-completion
|
2506.23141
| null | null |
Context-Driven Knowledge Graph Completion with Semantic-Aware Relational Message Passing
|
Semantic context surrounding a triplet $(h, r, t)$ is crucial for Knowledge Graph Completion (KGC), providing vital cues for prediction. However, traditional node-based message passing mechanisms, when applied to knowledge graphs, often introduce noise and suffer from information dilution or over-smoothing by indiscriminately aggregating information from all neighboring edges. To address this challenge, we propose a semantic-aware relational message passing scheme. A core innovation of this framework is the introduction of a \textbf{semantic-aware Top-K neighbor selection strategy}. Specifically, this strategy first evaluates the semantic relevance between a central node and its incident edges within a shared latent space, selecting only the Top-K most pertinent ones. Subsequently, information from these selected edges is effectively fused with the central node's own representation using a \textbf{multi-head attention aggregator} to generate a semantically focused node message. In this manner, our model not only leverages the structure and features of edges within the knowledge graph but also more accurately captures and propagates the contextual information most relevant to the specific link prediction task, thereby effectively mitigating interference from irrelevant information. Extensive experiments demonstrate that our method achieves superior performance compared to existing approaches on several established benchmarks.
| null |
https://arxiv.org/abs/2506.23141v1
|
https://arxiv.org/pdf/2506.23141v1.pdf
| null |
[
"Siyuan Li",
"Ruitong Liu",
"Yan Wen",
"Te Sun"
] |
[
"Knowledge Graph Completion",
"Knowledge Graphs",
"Link Prediction",
"Triplet"
] | 2025-06-29T00:00:00 | null | null | null | null |
[] |
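Note: a compact PyTorch sketch of the semantic-aware Top-K neighbor selection and multi-head attention aggregation described above. Dot-product relevance is an assumed similarity; the paper's exact scoring and fusion may differ.

```python
import torch

def topk_edge_message(node, edge_feats, k, attn):
    # node: (D,) central-node embedding; edge_feats: (E, D) incident-edge embeddings
    # attn: e.g. nn.MultiheadAttention(embed_dim=D, num_heads=4, batch_first=True)
    scores = edge_feats @ node                         # semantic relevance per edge
    idx = torch.topk(scores, k=min(k, edge_feats.size(0))).indices
    selected = edge_feats[idx].unsqueeze(0)            # (1, K, D) most pertinent edges
    query = node.view(1, 1, -1)                        # central node as the query
    fused, _ = attn(query, selected, selected)         # multi-head attention fusion
    return fused.squeeze()                             # semantically focused message
```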
https://paperswithcode.com/paper/gaussian-herding-across-pens-an-optimal
|
2506.09534
| null | null |
Gaussian Herding across Pens: An Optimal Transport Perspective on Global Gaussian Reduction for 3DGS
|
3D Gaussian Splatting (3DGS) has emerged as a powerful technique for radiance field rendering, but it typically requires millions of redundant Gaussian primitives, overwhelming memory and rendering budgets. Existing compaction approaches address this by pruning Gaussians based on heuristic importance scores, without global fidelity guarantee. To bridge this gap, we propose a novel optimal transport perspective that casts 3DGS compaction as global Gaussian mixture reduction. Specifically, we first minimize the composite transport divergence over a KD-tree partition to produce a compact geometric representation, and then decouple appearance from geometry by fine-tuning color and opacity attributes with far fewer Gaussian primitives. Experiments on benchmark datasets show that our method (i) yields negligible loss in rendering quality (PSNR, SSIM, LPIPS) compared to vanilla 3DGS with only 10% Gaussians; and (ii) consistently outperforms state-of-the-art 3DGS compaction techniques. Notably, our method is applicable to any stage of vanilla or accelerated 3DGS pipelines, providing an efficient and agnostic pathway to lightweight neural rendering.
| null |
https://arxiv.org/abs/2506.09534v1
|
https://arxiv.org/pdf/2506.09534v1.pdf
| null |
[
"Tao Wang",
"Mengyu Li",
"Geduo Zeng",
"Cheng Meng",
"Qiong Zhang"
] |
[
"3DGS",
"Neural Rendering",
"SSIM"
] | 2025-06-11T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Pruning",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Model Compression",
"parent": null
},
"name": "Pruning",
"source_title": "Pruning Filters for Efficient ConvNets",
"source_url": "http://arxiv.org/abs/1608.08710v3"
}
] |
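Note: as background for the reduction step above, a standard building block of global Gaussian mixture reduction is moment-matched merging, which replaces a group of weighted Gaussians with one Gaussian preserving the group's mean and covariance. A NumPy sketch of that classical merge (not the paper's composite transport divergence minimization):

```python
import numpy as np

def merge_gaussians(weights, means, covs):
    # weights: (N,), means: (N, D), covs: (N, D, D)
    # Moment matching: the merged Gaussian preserves the mixture's
    # first and second moments within the group being merged.
    w = weights / weights.sum()
    mu = (w[:, None] * means).sum(axis=0)
    diff = means - mu
    cov = (w[:, None, None] * (covs + diff[:, :, None] * diff[:, None, :])).sum(axis=0)
    return weights.sum(), mu, cov
```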
https://paperswithcode.com/paper/involvement-drives-complexity-of-language-in
|
2506.22098
| null | null |
Involvement drives complexity of language in online debates
|
Language is a fundamental aspect of human societies, continuously evolving in response to various stimuli, including societal changes and intercultural interactions. Technological advancements have profoundly transformed communication, with social media emerging as a pivotal force that merges entertainment-driven content with complex social dynamics. As these platforms reshape public discourse, analyzing the linguistic features of user-generated content is essential to understanding their broader societal impact. In this paper, we examine the linguistic complexity of content produced by influential users on Twitter across three globally significant and contested topics: COVID-19, COP26, and the Russia-Ukraine war. By combining multiple measures of textual complexity, we assess how language use varies along four key dimensions: account type, political leaning, content reliability, and sentiment. Our analysis reveals significant differences across all four axes, including variations in language complexity between individuals and organizations, between profiles with sided versus moderate political views, and between those associated with higher versus lower reliability scores. Additionally, profiles producing more negative and offensive content tend to use more complex language, with users sharing similar political stances and reliability levels converging toward a common jargon. Our findings offer new insights into the sociolinguistic dynamics of digital platforms and contribute to a deeper understanding of how language reflects ideological and social structures in online spaces.
| null |
https://arxiv.org/abs/2506.22098v1
|
https://arxiv.org/pdf/2506.22098v1.pdf
| null |
[
"Eleonora Amadori",
"Daniele Cirulli",
"Edoardo Di Martino",
"Jacopo Nudo",
"Maria Sahakyan",
"Emanuele Sangiorgio",
"Arnaldo Santoro",
"Simon Zollo",
"Alessandro Galeazzi",
"Niccolò Di Marco"
] |
[] | 2025-06-27T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/improve-underwater-object-detection-through
|
2506.23505
| null | null |
Improve Underwater Object Detection through YOLOv12 Architecture and Physics-informed Augmentation
|
Underwater object detection is crucial for autonomous navigation, environmental monitoring, and marine exploration, but it is severely hampered by light attenuation, turbidity, and occlusion. Current methods balance accuracy and computational efficiency, but they struggle to deploy in real time under low-visibility conditions. Through the integration of physics-informed augmentation techniques with the YOLOv12 architecture, this study advances underwater detection, using Residual ELAN blocks to preserve structural features in turbid waters and Area Attention to maintain large receptive fields for occluded objects while reducing computational complexity. Underwater optical properties are addressed by domain-specific augmentations such as turbulence-adaptive blurring, biologically grounded occlusion simulation, and spectral HSV transformations for color distortion. Extensive tests on four difficult datasets show state-of-the-art performance, with the Brackish dataset registering 98.30% mAP at 142 FPS. YOLOv12 improves occlusion robustness by 18.9%, small-object recall by 22.4%, and detection precision by up to 7.94% compared to previous models. The crucial role of the augmentation strategy is validated by ablation studies. This work offers a precise and effective solution for conservation and underwater robotics applications.
|
The crucial role of augmentation strategy is validated by ablation studies.
|
https://arxiv.org/abs/2506.23505v1
|
https://arxiv.org/pdf/2506.23505v1.pdf
| null |
[
"Tinh Nguyen"
] |
[
"Autonomous Navigation",
"Computational Efficiency",
"object-detection",
"Object Detection"
] | 2025-06-30T00:00:00 | null | null | null | null |
[] |
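Note: a minimal OpenCV sketch of a spectral HSV transformation of the kind listed among the physics-informed augmentations above; the shift and scale values are illustrative assumptions, not the paper's settings.

```python
import cv2
import numpy as np

def spectral_hsv_shift(img_bgr, hue_shift=8, sat_scale=0.8, val_scale=0.9):
    # Hypothetical parameter values; shifts hue toward blue-green and dims
    # saturation/value to mimic underwater color distortion.
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 0] = (hsv[..., 0] + hue_shift) % 180          # OpenCV hue range is [0, 180)
    hsv[..., 1] = np.clip(hsv[..., 1] * sat_scale, 0, 255)
    hsv[..., 2] = np.clip(hsv[..., 2] * val_scale, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```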
https://paperswithcode.com/paper/signbart-new-approach-with-the-skeleton
|
2506.21592
| null | null |
SignBart -- New approach with the skeleton sequence for Isolated Sign language Recognition
|
Sign language recognition is crucial for individuals with hearing impairments to break communication barriers. However, previous approaches have had to choose between efficiency and accuracy: models such as RNNs, LSTMs, and GCNs had problems with vanishing gradients and high computational costs. Despite improving performance, transformer-based methods were not commonly used. This study presents a novel SLR approach that overcomes the challenge of independently extracting meaningful information from the x and y coordinates of skeleton sequences, which traditional models often treat as inseparable. By utilizing a BART encoder-decoder architecture, the model independently encodes the x and y coordinates, while Cross-Attention ensures their interrelation is maintained. With only 749,888 parameters, the model achieves 96.04% accuracy on the LSA-64 dataset, significantly outperforming previous models with over one million parameters. The model also demonstrates excellent performance and generalization across the WLASL and ASL-Citizen datasets. Ablation studies underscore the importance of coordinate projection, normalization, and using multiple skeleton components for boosting model efficacy. This study offers a reliable and effective approach for sign language recognition, with strong potential for enhancing accessibility tools for the deaf and hard of hearing.
|
This study offers a reliable and effective approach for sign language recognition, with strong potential for enhancing accessibility tools for the deaf and hard of hearing.
|
https://arxiv.org/abs/2506.21592v1
|
https://arxiv.org/pdf/2506.21592v1.pdf
| null |
[
"Tinh Nguyen",
"Minh Khue Phan Tran"
] |
[
"Decoder",
"Sign Language Recognition"
] | 2025-06-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "“How do I get a full refund from Expedia?\r\nHow do I get a full refund from Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Quick Help & Exclusive Travel Deals!Have a question about your booking? Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to get live, expert support and unlock exclusive best deal discounts on flights, hotels, and vacation packages. Get clear answers fast and access limited-time travel offers that make your next trip easier, cheaper, and stress-free. Don’t wait—call today and save!\r\n\r\n\r\n“How do I get a full refund from Expedia?\r\nHow do I get a full refund from Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Quick Help & Exclusive Travel Deals!Have a question about your booking? Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to get live, expert support and unlock exclusive best deal discounts on flights, hotels, and vacation packages. Get clear answers fast and access limited-time travel offers that make your next trip easier, cheaper, and stress-free. Don’t wait—call today and save!",
"full_name": "Refunds@Expedia|||How do I get a full refund from Expedia?",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.",
"name": "Activation Functions",
"parent": null
},
"name": "Refunds@Expedia|||How do I get a full refund from Expedia?",
"source_title": "Gaussian Error Linear Units (GELUs)",
"source_url": "https://arxiv.org/abs/1606.08415v5"
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": null,
"description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.",
"full_name": "Byte Pair Encoding",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "BPE",
"source_title": "Neural Machine Translation of Rare Words with Subword Units",
"source_url": "http://arxiv.org/abs/1508.07909v5"
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**BART** is a [denoising autoencoder](https://paperswithcode.com/method/denoising-autoencoder) for pretraining sequence-to-sequence models. It is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard [Transformer](https://paperswithcode.com/method/transformer)-based neural machine translation architecture. It uses a standard seq2seq/NMT architecture with a bidirectional encoder (like [BERT](https://paperswithcode.com/method/bert)) and a left-to-right decoder (like [GPT](https://paperswithcode.com/method/gpt)). This means the encoder's attention mask is fully visible, like BERT, and the decoder's attention mask is causal, like [GPT2](https://paperswithcode.com/method/gpt-2).",
"full_name": "BART",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "BART",
"source_title": "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension",
"source_url": "https://arxiv.org/abs/1910.13461v1"
},
{
"code_snippet_url": null,
"description": "Please enter a description about the method here",
"full_name": "Surrogate Lagrangian Relaxation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Optimization",
"parent": null
},
"name": "SLR",
"source_title": "Enabling Retrain-free Deep Neural Network Pruning using Surrogate Lagrangian Relaxation",
"source_url": "https://arxiv.org/abs/2012.10079v2"
}
] |
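Note: a rough PyTorch sketch of the dual-encoder idea above: encode the x and y coordinate sequences independently and let cross-attention maintain their interrelation. SignBart itself uses a BART encoder-decoder; the module below only illustrates the separation-plus-cross-attention pattern with arbitrary sizes.

```python
import torch
import torch.nn as nn

class CoordCrossAttention(nn.Module):
    # Encodes x and y coordinate sequences independently, then lets the
    # x-stream attend to the y-stream so their interrelation is preserved.
    def __init__(self, d_model=128, nhead=4):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.enc_x = nn.TransformerEncoder(enc_layer, num_layers=2)  # layers are deep-copied
        self.enc_y = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.cross = nn.MultiheadAttention(d_model, nhead, batch_first=True)

    def forward(self, x_seq, y_seq):   # each: (B, T, d_model)
        hx, hy = self.enc_x(x_seq), self.enc_y(y_seq)
        fused, _ = self.cross(hx, hy, hy)
        return fused
```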
https://paperswithcode.com/paper/constructing-non-markovian-decision-process
|
2506.24026
| null | null |
Constructing Non-Markovian Decision Process via History Aggregator
|
In the domain of algorithmic decision-making, non-Markovian dynamics manifest as a significant impediment, especially for paradigms such as Reinforcement Learning (RL), thereby exerting far-reaching consequences on the advancement and effectiveness of the associated systems. Nevertheless, the existing benchmarks are deficient in comprehensively assessing the capacity of decision algorithms to handle non-Markovian dynamics. To address this deficiency, we have devised a generalized methodology grounded in category theory. Notably, we established the category of Markov Decision Processes (MDP) and the category of non-Markovian Decision Processes (NMDP), and proved the equivalence relationship between them. This theoretical foundation provides a novel perspective for understanding and addressing non-Markovian dynamics. We further introduced non-Markovianity into decision-making problem settings via the History Aggregator for State (HAS). With HAS, we can precisely control the state dependency structure of decision-making problems in the time series. Our analysis demonstrates the effectiveness of our method in representing a broad range of non-Markovian dynamics. This approach facilitates a more rigorous and flexible evaluation of decision algorithms by testing them in problem settings where non-Markovian dynamics are explicitly constructed.
|
In the domain of algorithmic decision-making, non-Markovian dynamics manifest as a significant impediment, especially for paradigms such as Reinforcement Learning (RL), thereby exerting far-reaching consequences on the advancement and effectiveness of the associated systems.
|
https://arxiv.org/abs/2506.24026v1
|
https://arxiv.org/pdf/2506.24026v1.pdf
| null |
[
"Yongyi Wang",
"Wenxin Li"
] |
[
"Decision Making",
"Reinforcement Learning (RL)"
] | 2025-06-30T00:00:00 | null | null | null | null |
[] |
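Note: a sketch of a History Aggregator for State as a wrapper that makes the emitted observation depend on a window of past states, so the resulting process is non-Markovian in the current state alone. The environment interface and the mean aggregator are illustrative assumptions, not the paper's construction.

```python
from collections import deque
import numpy as np

class HistoryAggregator:
    # Wraps a step function so the emitted observation depends on a window of
    # past states; the mean aggregator is an illustrative choice.
    def __init__(self, env, horizon=4):
        self.env, self.history = env, deque(maxlen=horizon)

    def step(self, action):
        state, reward, done = self.env.step(action)  # assumed env interface
        self.history.append(np.asarray(state, dtype=np.float32))
        obs = np.mean(self.history, axis=0)          # observation aggregates the history
        return obs, reward, done
```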
https://paperswithcode.com/paper/volumetricsmpl-a-neural-volumetric-body-model
|
2506.23236
| null | null |
VolumetricSMPL: A Neural Volumetric Body Model for Efficient Interactions, Contacts, and Collisions
|
Parametric human body models play a crucial role in computer graphics and vision, enabling applications ranging from human motion analysis to understanding human-environment interactions. Traditionally, these models use surface meshes, which pose challenges in efficiently handling interactions with other geometric entities, such as objects and scenes, typically represented as meshes or point clouds. To address this limitation, recent research has explored volumetric neural implicit body models. However, existing works are either insufficiently robust for complex human articulations or impose high computational and memory costs, limiting their widespread use. To this end, we introduce VolumetricSMPL, a neural volumetric body model that leverages Neural Blend Weights (NBW) to generate compact, yet efficient MLP decoders. Unlike prior approaches that rely on large MLPs, NBW dynamically blends a small set of learned weight matrices using predicted shape- and pose-dependent coefficients, significantly improving computational efficiency while preserving expressiveness. VolumetricSMPL outperforms prior volumetric occupancy model COAP with 10x faster inference, 6x lower GPU memory usage, enhanced accuracy, and a Signed Distance Function (SDF) for efficient and differentiable contact modeling. We demonstrate VolumetricSMPL's strengths across four challenging tasks: (1) reconstructing human-object interactions from in-the-wild images, (2) recovering human meshes in 3D scenes from egocentric views, (3) scene-constrained motion synthesis, and (4) resolving self-intersections. Our results highlight its broad applicability and significant performance and efficiency gains.
|
Parametric human body models play a crucial role in computer graphics and vision, enabling applications ranging from human motion analysis to understanding human-environment interactions.
|
https://arxiv.org/abs/2506.23236v1
|
https://arxiv.org/pdf/2506.23236v1.pdf
| null |
[
"Marko Mihajlovic",
"Siwei Zhang",
"Gen Li",
"Kaifeng Zhao",
"Lea Müller",
"Siyu Tang"
] |
[
"Computational Efficiency",
"GPU",
"Human-Object Interaction Detection",
"Motion Synthesis"
] | 2025-06-29T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Dynamic Sparse Training method where weight mask is updated randomly periodically",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
}
] |
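Note: a minimal PyTorch sketch of Neural Blend Weights as described in the abstract: a small bank of learned weight matrices is blended with predicted shape- and pose-dependent coefficients to form one compact layer. The softmax over coefficients and all sizes are assumptions.

```python
import torch
import torch.nn as nn

class NeuralBlendWeights(nn.Module):
    # Blends a small bank of learned weight matrices with predicted
    # condition-dependent coefficients to form one compact linear layer.
    def __init__(self, num_bases=8, d_in=64, d_out=64, d_cond=16):
        super().__init__()
        self.bank = nn.Parameter(torch.randn(num_bases, d_out, d_in) * 0.02)
        self.coef = nn.Linear(d_cond, num_bases)

    def forward(self, x, cond):                 # x: (B, d_in), cond: (B, d_cond)
        c = torch.softmax(self.coef(cond), -1)  # (B, num_bases) blend coefficients
        W = torch.einsum('bk,koi->boi', c, self.bank)  # per-sample blended weights
        return torch.einsum('boi,bi->bo', W, x)
```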
https://paperswithcode.com/paper/memfof-high-resolution-training-for-memory
|
2506.23151
| null | null |
MEMFOF: High-Resolution Training for Memory-Efficient Multi-Frame Optical Flow Estimation
|
Recent advances in optical flow estimation have prioritized accuracy at the cost of growing GPU memory consumption, particularly for high-resolution (FullHD) inputs. We introduce MEMFOF, a memory-efficient multi-frame optical flow method that identifies a favorable trade-off between multi-frame estimation and GPU memory usage. Notably, MEMFOF requires only 2.09 GB of GPU memory at runtime for 1080p inputs, and 28.5 GB during training, which uniquely positions our method to be trained at native 1080p without the need for cropping or downsampling. We systematically revisit design choices from RAFT-like architectures, integrating reduced correlation volumes and high-resolution training protocols alongside multi-frame estimation, to achieve state-of-the-art performance across multiple benchmarks while substantially reducing memory overhead. Our method outperforms more resource-intensive alternatives in both accuracy and runtime efficiency, validating its robustness for flow estimation at high resolutions. At the time of submission, our method ranks first on the Spring benchmark with a 1-pixel (1px) outlier rate of 3.289, leads Sintel (clean) with an endpoint error (EPE) of 0.963, and achieves the best Fl-all error on KITTI-2015 at 2.94%. The code is available at https://github.com/msu-video-group/memfof.
|
Recent advances in optical flow estimation have prioritized accuracy at the cost of growing GPU memory consumption, particularly for high-resolution (FullHD) inputs.
|
https://arxiv.org/abs/2506.23151v1
|
https://arxiv.org/pdf/2506.23151v1.pdf
| null |
[
"Vladislav Bargatin",
"Egor Chistov",
"Alexander Yakovenko",
"Dmitriy Vatolin"
] |
[
"GPU",
"Optical Flow Estimation"
] | 2025-06-29T00:00:00 | null | null | null | null |
[] |
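Note: for context on the reduced correlation volumes mentioned above, RAFT-like methods build an all-pairs correlation volume between two feature maps, and pooling one map is a common way to shrink it. A generic sketch, not MEMFOF's exact design:

```python
import torch
import torch.nn.functional as F

def correlation_volume(f1, f2, downsample=2):
    # All-pairs correlation volume in the style of RAFT; average-pooling the
    # second feature map first shrinks the volume by downsample**2.
    f2 = F.avg_pool2d(f2, downsample)
    c = f1.shape[1]
    # Result: (B, H, W, H/downsample, W/downsample)
    return torch.einsum('bchw,bcij->bhwij', f1, f2) / c ** 0.5
```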
https://paperswithcode.com/paper/bridging-neural-ode-and-resnet-a-formal-error
|
2506.03227
| null | null |
Bridging Neural ODE and ResNet: A Formal Error Bound for Safety Verification
|
A neural ordinary differential equation (neural ODE) is a machine learning model that is commonly described as a continuous depth generalization of a residual network (ResNet) with a single residual block, or conversely, the ResNet can be seen as the Euler discretization of the neural ODE. These two models are therefore strongly related in a way that the behaviors of either model are considered to be an approximation of the behaviors of the other. In this work, we establish a more formal relationship between these two models by bounding the approximation error between two such related models. The obtained error bound then allows us to use one of the models as a verification proxy for the other, without running the verification tools twice: if the reachable output set expanded by the error bound satisfies a safety property on one of the models, this safety property is then guaranteed to be also satisfied on the other model. This feature is fully reversible, and the initial safety verification can be run indifferently on either of the two models. This novel approach is illustrated on a numerical example of a fixed-point attractor system modeled as a neural ODE.
|
A neural ordinary differential equation (neural ODE) is a machine learning model that is commonly described as a continuous depth generalization of a residual network (ResNet) with a single residual block, or conversely, the ResNet can be seen as the Euler discretization of the neural ODE.
|
https://arxiv.org/abs/2506.03227v1
|
https://arxiv.org/pdf/2506.03227v1.pdf
| null |
[
"Abdelrahman Sayed Sayed",
"Pierre-Jean Meyer",
"Mohamed Ghazel"
] |
[] | 2025-06-03T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Dynamic Sparse Training method where weight mask is updated randomly periodically",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
}
] |
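Note: the ResNet/neural-ODE correspondence used by the paper is concrete: a ResNet residual update is exactly one Euler step of dx/dt = f(x). A tiny sketch of that discretization:

```python
import torch

def euler_rollout(f, x0, steps, h=1.0):
    # Euler discretization of dx/dt = f(x): each step is exactly a ResNet
    # residual update x <- x + h * f(x), which is the correspondence the
    # paper's error bound formalizes.
    x = x0
    for _ in range(steps):
        x = x + h * f(x)
    return x
```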
https://paperswithcode.com/paper/asymmetric-dual-self-distillation-for-3d-self
|
2506.21724
| null | null |
Asymmetric Dual Self-Distillation for 3D Self-Supervised Representation Learning
|
Learning semantically meaningful representations from unstructured 3D point clouds remains a central challenge in computer vision, especially in the absence of large-scale labeled datasets. While masked point modeling (MPM) is widely used in self-supervised 3D learning, its reconstruction-based objective can limit its ability to capture high-level semantics. We propose AsymDSD, an Asymmetric Dual Self-Distillation framework that unifies masked modeling and invariance learning through prediction in the latent space rather than the input space. AsymDSD builds on a joint embedding architecture and introduces several key design choices: an efficient asymmetric setup, disabling attention between masked queries to prevent shape leakage, multi-mask sampling, and a point cloud adaptation of multi-crop. AsymDSD achieves state-of-the-art results on ScanObjectNN (90.53%) and further improves to 93.72% when pretrained on 930k shapes, surpassing prior methods.
|
Learning semantically meaningful representations from unstructured 3D point clouds remains a central challenge in computer vision, especially in the absence of large-scale labeled datasets.
|
https://arxiv.org/abs/2506.21724v1
|
https://arxiv.org/pdf/2506.21724v1.pdf
| null |
[
"Remco F. Leijenaar",
"Hamidreza Kasaei"
] |
[
"3D Point Cloud Classification",
"Representation Learning"
] | 2025-06-26T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/modulated-diffusion-accelerating-generative
|
2506.22463
| null | null |
Modulated Diffusion: Accelerating Generative Modeling with Modulated Quantization
|
Diffusion models have emerged as powerful generative models, but their high computation cost in iterative sampling remains a significant bottleneck. In this work, we present an in-depth and insightful study of state-of-the-art acceleration techniques for diffusion models, including caching and quantization, revealing their limitations in computation error and generation quality. To break these limits, this work introduces Modulated Diffusion (MoDiff), an innovative, rigorous, and principled framework that accelerates generative modeling through modulated quantization and error compensation. MoDiff not only inherits the advantages of existing caching and quantization methods but also serves as a general framework to accelerate all diffusion models. The advantages of MoDiff are supported by solid theoretical insight and analysis. In addition, extensive experiments on CIFAR-10 and LSUN demonstrate that MoDiff significantly reduces activation quantization from 8 bits to 3 bits without performance degradation in post-training quantization (PTQ). Our code implementation is available at https://github.com/WeizhiGao/MoDiff.
|
In this work, we present an in-depth and insightful study of state-of-the-art acceleration techniques for diffusion models, including caching and quantization, revealing their limitations in computation error and generation quality.
|
https://arxiv.org/abs/2506.22463v1
|
https://arxiv.org/pdf/2506.22463v1.pdf
| null |
[
"Weizhi Gao",
"Zhichao Hou",
"Junqi Yin",
"Feiyi Wang",
"Linyu Peng",
"Xiaorui Liu"
] |
[
"Quantization"
] | 2025-06-18T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).",
"full_name": "Diffusion",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Generation Models",
"parent": null
},
"name": "Diffusion",
"source_title": "Denoising Diffusion Probabilistic Models",
"source_url": "https://arxiv.org/abs/2006.11239v2"
}
] |
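Note: one hedged reading of modulated quantization is quantizing the residual between consecutive sampling steps' activations, which has a smaller dynamic range than the raw activation, and compensating on reconstruction. The sketch below is an interpretation, not MoDiff's actual algorithm.

```python
import torch

def quantize(t, bits=3):
    # Uniform symmetric quantizer (illustrative, not the paper's exact PTQ scheme)
    scale = t.abs().max() / (2 ** (bits - 1) - 1) + 1e-8
    return torch.round(t / scale) * scale

def modulated_activation(prev_act, act, bits=3):
    # Quantize the step-to-step residual, which is typically smaller than the
    # raw activation, then reconstruct from the previous step's activation.
    return prev_act + quantize(act - prev_act, bits)
```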
https://paperswithcode.com/paper/score-based-model-for-low-rank-tensor
|
2506.22295
| null | null |
Score-Based Model for Low-Rank Tensor Recovery
|
Low-rank tensor decompositions (TDs) provide an effective framework for multiway data analysis. Traditional TD methods rely on predefined structural assumptions, such as CP or Tucker decompositions. From a probabilistic perspective, these can be viewed as using Dirac delta distributions to model the relationships between shared factors and the low-rank tensor. However, such prior knowledge is rarely available in practical scenarios, particularly regarding the optimal rank structure and contraction rules. The optimization procedures based on fixed contraction rules are complex, and approximations made during these processes often lead to accuracy loss. To address this issue, we propose a score-based model that eliminates the need for predefined structural or distributional assumptions, enabling the learning of compatibility between tensors and shared factors. Specifically, a neural network is designed to learn the energy function, which is optimized via score matching to capture the gradient of the joint log-probability of tensor entries and shared factors. Our method allows for modeling structures and distributions beyond the Dirac delta assumption. Moreover, integrating the block coordinate descent (BCD) algorithm with the proposed smooth regularization enables the model to perform both tensor completion and denoising. Experimental results demonstrate significant performance improvements across various tensor types, including sparse and continuous-time tensors, as well as visual data.
| null |
https://arxiv.org/abs/2506.22295v1
|
https://arxiv.org/pdf/2506.22295v1.pdf
| null |
[
"Zhengyun Cheng",
"Changhao Wang",
"Guanwen Zhang",
"Yi Xu",
"Wei Zhou",
"Xiangyang Ji"
] |
[
"Denoising"
] | 2025-06-27T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "TuckER",
"full_name": "TuckER",
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "\n\ngraph embeddings, can be homogeneous graph or heterogeneous graph",
"name": "Graph Embeddings",
"parent": null
},
"name": "TuckER",
"source_title": "TuckER: Tensor Factorization for Knowledge Graph Completion",
"source_url": "https://arxiv.org/abs/1901.09590v2"
}
] |
https://paperswithcode.com/paper/studying-and-improving-graph-neural-network
|
2506.15709
| null | null |
Studying and Improving Graph Neural Network-based Motif Estimation
|
Graph Neural Networks (GNNs) are a predominant method for graph representation learning. However, beyond subgraph frequency estimation, their application to network motif significance-profile (SP) prediction remains under-explored, with no established benchmarks in the literature. We propose to address this problem, framing SP estimation as a task independent of subgraph frequency estimation. Our approach shifts from frequency counting to direct SP estimation and formulates the problem as multitarget regression. The reformulation is optimised for interpretability, stability and scalability on large graphs. We validate our method using a large synthetic dataset and further test it on real-world graphs. Our experiments reveal that 1-WL limited models struggle to make precise estimations of SPs. However, they can generalise to approximate the graph generation processes of networks by comparing their predicted SP with the ones originating from synthetic generators. This first study on GNN-based motif estimation also hints at how using direct SP estimation can help go past the theoretical limitations that motif estimation faces when performed through subgraph counting.
| null |
https://arxiv.org/abs/2506.15709v1
|
https://arxiv.org/pdf/2506.15709v1.pdf
| null |
[
"Pedro C. Vieira",
"Miguel E. P. Silva",
"Pedro Manuel Pinto Ribeiro"
] |
[
"Graph Generation",
"Graph Neural Network",
"Graph Representation Learning",
"Representation Learning",
"Subgraph Counting"
] | 2025-05-30T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/surgrid-controllable-surgical-simulation-via
|
2502.07945
| null | null |
SurGrID: Controllable Surgical Simulation via Scene Graph to Image Diffusion
|
Surgical simulation offers a promising addition to conventional surgical training. However, available simulation tools lack photorealism and rely on hardcoded behaviour. Denoising Diffusion Models are a promising alternative for high-fidelity image synthesis, but existing state-of-the-art conditioning methods fall short in providing precise control or interactivity over the generated scenes. We introduce SurGrID, a Scene Graph to Image Diffusion Model, allowing for controllable surgical scene synthesis by leveraging Scene Graphs. These graphs encode a surgical scene's components' spatial and semantic information, which are then translated into an intermediate representation using our novel pre-training step that explicitly captures local and global information. Our proposed method improves the fidelity of generated images and their coherence with the graph input over the state-of-the-art. Further, we demonstrate the simulation's realism and controllability in a user assessment study involving clinical experts. Scene Graphs can be effectively used for precise and interactive conditioning of Denoising Diffusion Models for simulating surgical scenes, enabling high fidelity and interactive control over the generated content.
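One plausible reading of the graph-to-conditioning step, as a hedged PyTorch sketch: node classes and box positions supply local information, a learned global token supplies global context, and the resulting tokens condition a denoising UNet via cross-attention. The class names and token scheme here are assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class SceneGraphEncoder(nn.Module):
    # Hypothetical encoder: embeds node classes and bounding-box positions
    # (local info) plus a learned global token (global info), yielding
    # conditioning tokens for a diffusion model.
    def __init__(self, n_classes, dim=256):
        super().__init__()
        self.cls_emb = nn.Embedding(n_classes, dim)
        self.box_mlp = nn.Linear(4, dim)
        self.global_token = nn.Parameter(torch.zeros(1, 1, dim))

    def forward(self, node_classes, node_boxes):
        tokens = self.cls_emb(node_classes) + self.box_mlp(node_boxes)
        g = self.global_token.expand(tokens.size(0), -1, -1)
        return torch.cat([g, tokens], dim=1)  # (B, N+1, dim)

# The tokens would then condition a denoising UNet via cross-attention,
# e.g. unet(noisy_latents, t, encoder_hidden_states=cond_tokens).
```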
| null |
https://arxiv.org/abs/2502.07945v1
|
https://arxiv.org/pdf/2502.07945v1.pdf
| null |
[
"Yannik Frisch",
"Ssharvien Kumar Sivakumar",
"Çağhan Köksal",
"Elsa Böhm",
"Felix Wagner",
"Adrian Gericke",
"Ghazal Ghazaei",
"Anirban Mukhopadhyay"
] |
[
"Denoising",
"Image Generation"
] | 2025-02-11T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).",
"full_name": "Diffusion",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Generation Models",
"parent": null
},
"name": "Diffusion",
"source_title": "Denoising Diffusion Probabilistic Models",
"source_url": "https://arxiv.org/abs/2006.11239v2"
}
] |
https://paperswithcode.com/paper/conquering-the-retina-bringing-visual-in
|
2506.15200
| null | null |
Conquering the Retina: Bringing Visual in-Context Learning to OCT
|
Recent advancements in medical image analysis have led to the development of highly specialized models tailored to specific clinical tasks. These models have demonstrated exceptional performance and remain a crucial research direction. Yet, their applicability is limited to predefined tasks, requiring expertise and extensive resources for development and adaptation. In contrast, generalist models offer a different form of utility: allowing medical practitioners to define tasks on the fly without the need for task-specific model development. In this work, we explore how to train generalist models for the domain of retinal optical coherence tomography using visual in-context learning (VICL), i.e., training models to generalize across tasks based on a few examples provided at inference time. To facilitate rigorous assessment, we propose a broad evaluation protocol tailored to VICL in OCT. We extensively evaluate a state-of-the-art medical VICL approach on multiple retinal OCT datasets, establishing a first baseline to highlight the potential and current limitations of in-context learning for OCT. To foster further research and practical adoption, we openly release our code.
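Visual in-context learning is typically prompted by composing support and query images into a single canvas that the model completes; a minimal NumPy sketch of such a grid prompt follows (the exact layout used by the evaluated VICL approach may differ, and all array shapes are assumed compatible).

```python
import numpy as np

def make_vicl_canvas(support_img, support_mask, query_img):
    # Assemble a 2x2 grid prompt: the model sees one worked example
    # (image, mask) on top and must inpaint the missing mask of the
    # query OCT scan in the bottom-right cell.
    top = np.concatenate([support_img, support_mask], axis=1)
    blank = np.zeros_like(query_img)          # region the model predicts
    bottom = np.concatenate([query_img, blank], axis=1)
    return np.concatenate([top, bottom], axis=0)
```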
|
Recent advancements in medical image analysis have led to the development of highly specialized models tailored to specific clinical tasks.
|
https://arxiv.org/abs/2506.15200v1
|
https://arxiv.org/pdf/2506.15200v1.pdf
| null |
[
"Alessio Negrini",
"Simon Reiß"
] |
[
"In-Context Learning",
"Medical Image Analysis"
] | 2025-06-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/unmix-nerf-spectral-unmixing-meets-neural
|
2506.21884
| null | null |
UnMix-NeRF: Spectral Unmixing Meets Neural Radiance Fields
|
Neural Radiance Field (NeRF)-based segmentation methods focus on object semantics and rely solely on RGB data, lacking intrinsic material properties. This limitation restricts accurate material perception, which is crucial for robotics, augmented reality, simulation, and other applications. We introduce UnMix-NeRF, a framework that integrates spectral unmixing into NeRF, enabling joint hyperspectral novel view synthesis and unsupervised material segmentation. Our method models spectral reflectance via diffuse and specular components, where a learned dictionary of global endmembers represents pure material signatures, and per-point abundances capture their distribution. For material segmentation, we use spectral signature predictions along learned endmembers, allowing unsupervised material clustering. Additionally, UnMix-NeRF enables scene editing by modifying learned endmember dictionaries for flexible material-based appearance manipulation. Extensive experiments validate our approach, demonstrating superior spectral reconstruction and material segmentation to existing methods. Project page: https://www.factral.co/UnMix-NeRF.
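The endmember/abundance decomposition can be read as the standard linear mixing model from spectral unmixing; a hedged PyTorch sketch (the shapes and the softmax abundance parameterisation are our assumptions, not the released code):

```python
import torch
import torch.nn.functional as F

def render_spectrum(abundance_logits, endmembers, specular):
    # Linear mixing model: each point's diffuse spectrum is a convex
    # combination of global endmember signatures; a per-point specular
    # term is added on top.
    a = F.softmax(abundance_logits, dim=-1)   # (N, M), rows sum to 1
    diffuse = a @ endmembers                  # (N, M) @ (M, B) -> (N, B)
    return diffuse + specular, a

endmembers = torch.rand(8, 128, requires_grad=True)  # 8 materials, 128 bands
spectrum, a = render_spectrum(torch.randn(1024, 8), endmembers,
                              torch.zeros(1024, 128))
materials = a.argmax(dim=-1)  # unsupervised material label per point
```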
|
Neural Radiance Field (NeRF)-based segmentation methods focus on object semantics and rely solely on RGB data, lacking intrinsic material properties.
|
https://arxiv.org/abs/2506.21884v1
|
https://arxiv.org/pdf/2506.21884v1.pdf
| null |
[
"Fabian Perez",
"Sara Rojas",
"Carlos Hinojosa",
"Hoover Rueda-Chacón",
"Bernard Ghanem"
] |
[
"Hyperspectral Unmixing",
"Material Segmentation",
"NeRF",
"Novel View Synthesis",
"Segmentation",
"Spectral Reconstruction"
] | 2025-06-27T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
}
] |
https://paperswithcode.com/paper/revisiting-z-transform-laplace-inversion-to
|
2506.23242
| null | null |
Revisiting Z Transform Laplace Inversion: To Correct Flaws in Signal and System Theory
|
This paper revisits the classical formulation of the Z-transform and its relationship to the inverse Laplace transform (L^{-1}), originally developed by Ragazzini in sampled-data theory. It identifies a longstanding mathematical oversight in standard derivations, which typically neglect the contribution from the infinite arc in the complex plane during inverse Laplace evaluation. This omission leads to inconsistencies, especially at discontinuities such as t = 0. By incorporating the full Bromwich contour, including all boundary contributions, we restore internal consistency between L^{-1} and the Z-transform, aligning the corrected L^{-1} with results from Discrete-Time Fourier Transform (DTFT) aliasing theory. Consequently, this necessitates a structural revision of the Z-transform, inverse Laplace transform, and the behavior of the Heaviside step function at discontinuities, providing a more accurate foundation for modeling and analysis of sampled-data systems.
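For context, the DTFT aliasing relation the abstract appeals to, and the midpoint convention at a jump, in their standard textbook form (this is background, not the paper's own derivation):

```latex
% Aliasing relation linking the sampled-data spectrum to the continuous
% transform of the signal, for sampling period T:
X_d\!\left(e^{j\omega T}\right) = \frac{1}{T}\sum_{k=-\infty}^{\infty}
    X\!\left(j\omega + j\,\frac{2\pi k}{T}\right)
% At a jump discontinuity (e.g. t = 0 for the Heaviside step), consistency
% with Fourier inversion requires the midpoint value: u(0) = 1/2.
```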
| null |
https://arxiv.org/abs/2506.23242v1
|
https://arxiv.org/pdf/2506.23242v1.pdf
| null |
[
"Yuxin Yang",
"Hang Zhou",
"Chaojie Li",
"Xin Li",
"Yingyi Yan",
"Mingyang Zheng"
] |
[
"ARC"
] | 2025-06-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/textrm-ode-t-left-textrm-ode-l-right
|
2506.21714
| null | null |
$\textrm{ODE}_t \left(\textrm{ODE}_l \right)$: Shortcutting the Time and Length in Diffusion and Flow Models for Faster Sampling
|
Recently, continuous normalizing flows (CNFs) and diffusion models (DMs) have been studied under a unified theoretical framework. Although such models can generate high-quality data points from a noise distribution, the sampling demands multiple iterations to solve an ordinary differential equation (ODE) with high computational complexity. Most existing methods focus on reducing the number of time steps during the sampling process to improve efficiency. In this work, we explore a complementary direction in which the quality-complexity tradeoff can be dynamically controlled in terms of time steps and in the length of the neural network. We achieve this by rewiring the blocks in the transformer-based architecture to solve an inner discretized ODE w.r.t. its length. Then, we employ time- and length-wise consistency terms during flow matching training, and as a result, the sampling can be performed with an arbitrary number of time steps and transformer blocks. Unlike others, our $\textrm{ODE}_t \left(\textrm{ODE}_l \right)$ approach is solver-agnostic in the time dimension and decreases both latency and memory usage. Compared to the previous state of the art, image generation experiments on CelebA-HQ and ImageNet show a latency reduction of up to $3\times$ in the most efficient sampling mode, and a FID score improvement of up to $3.5$ points for high-quality sampling. We release our code and model weights with fully reproducible experiments.
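A minimal sketch of the resulting sampling flexibility, assuming a hypothetical `model(x, t, n_blocks=...)` API in which network depth can be truncated at inference; the Euler solver, step counts, and dummy model are illustrative only.

```python
import torch

@torch.no_grad()
def sample(model, x, n_steps=8, n_blocks=6):
    # Plain Euler integration of the learned velocity field; both the
    # number of time steps and the number of transformer blocks can be
    # chosen at inference to trade quality for latency.
    ts = torch.linspace(0.0, 1.0, n_steps + 1)
    for i in range(n_steps):
        dt = ts[i + 1] - ts[i]
        x = x + dt * model(x, ts[i], n_blocks=n_blocks)
    return x

# Dummy stand-in illustrating the call signature only:
model = lambda x, t, n_blocks: -x
imgs = sample(model, torch.randn(4, 3, 32, 32), n_steps=4, n_blocks=3)
```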
|
In this work, we explore a complementary direction in which the quality-complexity tradeoff can be dynamically controlled in terms of time steps and in the length of the neural network.
|
https://arxiv.org/abs/2506.21714v1
|
https://arxiv.org/pdf/2506.21714v1.pdf
| null |
[
"Denis Gudovskiy",
"Wenzhao Zheng",
"Tomoyuki Okuno",
"Yohei Nakata",
"Kurt Keutzer"
] |
[
"Image Generation"
] | 2025-06-26T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/ex4sperans/variational-inference-with-normalizing-flows/blob/922b569f851e02fa74700cd0754fe2ef5c1f3180/flow.py#L9",
"description": "**Normalizing Flows** are a method for constructing complex distributions by transforming a\r\nprobability density through a series of invertible mappings. By repeatedly applying the rule for change of variables, the initial density ‘flows’ through the sequence of invertible mappings. At the end of this sequence we obtain a valid probability distribution and hence this type of flow is referred to as a normalizing flow.\r\n\r\nIn the case of finite flows, the basic rule for the transformation of densities considers an invertible, smooth mapping $f : \\mathbb{R}^{d} \\rightarrow \\mathbb{R}^{d}$ with inverse $f^{-1} = g$, i.e. the composition $g \\cdot f\\left(z\\right) = z$. If we use this mapping to transform a random variable $z$ with distribution $q\\left(z\\right)$, the resulting random variable $z' = f\\left(z\\right)$ has a distribution:\r\n\r\n$$ q\\left(\\mathbf{z}'\\right) = q\\left(\\mathbf{z}\\right)\\bigl\\vert{\\text{det}}\\frac{\\delta{f}^{-1}}{\\delta{\\mathbf{z'}}}\\bigr\\vert = q\\left(\\mathbf{z}\\right)\\bigl\\vert{\\text{det}}\\frac{\\delta{f}}{\\delta{\\mathbf{z}}}\\bigr\\vert ^{-1} $$\r\n\f\r\nwhere the last equality can be seen by applying the chain rule (inverse function theorem) and is a property of Jacobians of invertible functions. We can construct arbitrarily complex densities by composing several simple maps and successively applying the above equation. The density $q\\_{K}\\left(\\mathbf{z}\\right)$ obtained by successively transforming a random variable $z\\_{0}$ with distribution $q\\_{0}$ through a chain of $K$ transformations $f\\_{k}$ is:\r\n\r\n$$ z\\_{K} = f\\_{K} \\cdot \\dots \\cdot f\\_{2} \\cdot f\\_{1}\\left(z\\_{0}\\right) $$\r\n\r\n$$ \\ln{q}\\_{K}\\left(z\\_{K}\\right) = \\ln{q}\\_{0}\\left(z\\_{0}\\right) − \\sum^{K}\\_{k=1}\\ln\\vert\\det\\frac{\\delta{f\\_{k}}}{\\delta{\\mathbf{z\\_{k-1}}}}\\vert $$\r\n\f\r\nThe path traversed by the random variables $z\\_{k} = f\\_{k}\\left(z\\_{k-1}\\right)$ with initial distribution $q\\_{0}\\left(z\\_{0}\\right)$ is called the flow and the path formed by the successive distributions $q\\_{k}$ is a normalizing flow.",
"full_name": "Normalizing Flows",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Distribution Approximation** methods aim to approximate a complex distribution. Below you can find a continuously updating list of distribution approximation methods.",
"name": "Distribution Approximation",
"parent": null
},
"name": "Normalizing Flows",
"source_title": "Variational Inference with Normalizing Flows",
"source_url": "http://arxiv.org/abs/1505.05770v6"
},
{
"code_snippet_url": null,
"description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).",
"full_name": "Diffusion",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "",
"name": "Image Generation Models",
"parent": null
},
"name": "Diffusion",
"source_title": "Denoising Diffusion Probabilistic Models",
"source_url": "https://arxiv.org/abs/2006.11239v2"
},
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
}
] |
https://paperswithcode.com/paper/computational-detection-of-intertextual
|
2506.24117
| null | null |
Computational Detection of Intertextual Parallels in Biblical Hebrew: A Benchmark Study Using Transformer-Based Language Models
|
Identifying parallel passages in biblical Hebrew is foundational in biblical scholarship for uncovering intertextual relationships. Traditional methods rely on manual comparison, which is labor-intensive and prone to human error. This study evaluates the potential of pre-trained transformer-based language models, including E5, AlephBERT, MPNet, and LaBSE, for detecting textual parallels in the Hebrew Bible. Focusing on known parallels between the books of Samuel/Kings and Chronicles, I assessed each model's capability to generate word embeddings that delineate parallel from non-parallel passages. Utilizing cosine similarity and Wasserstein Distance measures, I found that E5 and AlephBERT show significant promise, with E5 excelling in parallel detection and AlephBERT demonstrating stronger non-parallel differentiation. These findings indicate that pre-trained models can enhance the efficiency and accuracy of detecting intertextual parallels in ancient texts, suggesting broader applications for ancient language studies.
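A hedged sketch of such an evaluation pipeline using sentence-transformers and SciPy; the E5 checkpoint id, the `passage:` prefix convention, and the placeholder verses and scores are assumptions, not the study's exact setup.

```python
from sentence_transformers import SentenceTransformer, util
from scipy.stats import wasserstein_distance

# Placeholder passages; real inputs would be Hebrew verse texts.
samuel = ["passage: example verse from Samuel"]
chronicles = ["passage: example verse from Chronicles"]

model = SentenceTransformer("intfloat/multilingual-e5-large")
emb_a = model.encode(samuel, convert_to_tensor=True)
emb_b = model.encode(chronicles, convert_to_tensor=True)
cos = util.cos_sim(emb_a, emb_b)  # pairwise cosine similarity matrix

# Separation between similarity distributions of known parallel pairs
# vs. non-parallel pairs (scores below are illustrative only).
d = wasserstein_distance([0.91, 0.88], [0.55, 0.60])
```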
| null |
https://arxiv.org/abs/2506.24117v1
|
https://arxiv.org/pdf/2506.24117v1.pdf
| null |
[
"David M. Smiley"
] |
[
"Word Embeddings"
] | 2025-06-30T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "**MPNet** is a pre-training method for language models that combines masked language modeling (MLM) and permuted language modeling (PLM) in one view. It takes the dependency among the predicted tokens into consideration through permuted language modeling and thus avoids the issue of [BERT](https://paperswithcode.com/method/bert). On the other hand, it takes position information of all tokens as input to make the model see the position information of all the tokens and thus alleviates the position discrepancy of [XLNet](https://paperswithcode.com/method/xlnet).\r\n\r\nThe training objective of MPNet is:\r\n\r\n$$ \\mathbb{E}\\_{z\\in{\\mathcal{Z}\\_{n}}} \\sum^{n}\\_{t=c+1}\\log{P}\\left(x\\_{z\\_{t}}\\mid{x\\_{z\\_{<t}}}, M\\_{z\\_{{>}{c}}}; \\theta\\right) $$\r\n\r\nAs can be seen, MPNet conditions on ${x\\_{z\\_{<t}}}$ (the tokens preceding the current predicted token $x\\_{z\\_{t}}$) rather than only the non-predicted tokens ${x\\_{z\\_{<=c}}}$ in MLM; comparing with PLM, MPNet takes more information (i.e., the mask symbol $[M]$ in position $z\\_{>c}$) as inputs. Although the objective seems simple, it is challenging to implement the model efficiently. For details, see the paper.",
"full_name": "MPNet",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Language Model Pre-Training",
"parent": null
},
"name": "MPNet",
"source_title": "MPNet: Masked and Permuted Pre-training for Language Understanding",
"source_url": "https://arxiv.org/abs/2004.09297v2"
}
] |
https://paperswithcode.com/paper/positioning-ai-tools-to-support-online-harm
|
2506.22941
| null | null |
Positioning AI Tools to Support Online Harm Reduction Practice: Applications and Design Directions
|
Access to accurate and actionable harm reduction information can directly impact the health outcomes of People Who Use Drugs (PWUD), yet existing online channels often fail to meet their diverse and dynamic needs due to limitations in adaptability, accessibility, and the pervasive impact of stigma. Large Language Models (LLMs) present a novel opportunity to enhance information provision, but their application in such a high-stakes domain is under-explored and presents socio-technical challenges. This paper investigates how LLMs can be responsibly designed to support the information needs of PWUD. Through a qualitative workshop involving diverse stakeholder groups (academics, harm reduction practitioners, and an online community moderator), we explored LLM capabilities, identified potential use cases, and delineated core design considerations. Our findings reveal that while LLMs can address some existing information barriers (e.g., by offering responsive, multilingual, and potentially less stigmatising interactions), their effectiveness is contingent upon overcoming challenges related to ethical alignment with harm reduction principles, nuanced contextual understanding, effective communication, and clearly defined operational boundaries. We articulate design pathways emphasising collaborative co-design with experts and PWUD to develop LLM systems that are helpful, safe, and responsibly governed. This work contributes empirically grounded insights and actionable design considerations for the responsible development of LLMs as supportive tools within the harm reduction ecosystem.
| null |
https://arxiv.org/abs/2506.22941v1
|
https://arxiv.org/pdf/2506.22941v1.pdf
| null |
[
"Kaixuan Wang",
"Jason T. Jacques",
"Chenxin Diao"
] |
[] | 2025-06-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/imbalance-prime-sieving-every-prime-gap-is-a
| null | null | null |
Imbalance Prime Sieving: Every Prime Gap Is a Result of a Möbius Imbalance Obstruction
|
We introduce a novel sieve for prime numbers based on detecting topological obstructions in a Möbius-transformed rational metric space. Unlike traditional sieves which rely on divisibility, our method identifies primes as those numbers which contribute new, non-colliding imbalance conjugates. This provides both an exact algorithm for prime enumeration and a new geometric interpretation of prime gaps. This sieve constructs a topological obstruction theory over rational pairs (p, q), from which we observe that every prime gap is a consequence of a collision in this transformed imbalance space. Our empirical results demonstrate that this method precisely filters the prime numbers up to a specified bound, with potential implications for new number-theoretic models and sieving algorithms.
|
We introduce a novel sieve for prime numbers based on detecting topological obstructions in a Möbius-transformed rational metric space.
|
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5334569
|
https://papers.ssrn.com/sol3/Delivery.cfm/5334569.pdf?abstractid=5334569&mirid=1
|
SSRN 2025 7
|
[
"Paul Alexander Bilokon"
] |
[] | 2025-07-01T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/features-based-embedding-or-feature-grounding
|
2506.22442
| null | null |
Features-based embedding or Feature-grounding
|
In everyday reasoning, when we think about a particular object, we associate it with a unique set of expected properties such as weight, size, or more abstract attributes like density or horsepower. These expectations are shaped by our prior knowledge and the conceptual categories we have formed through experience. This paper investigates how such knowledge-based structured thinking can be reproduced in deep learning models using feature-based embeddings. Specifically, it introduces an approach to building feature-grounded embeddings, aiming to align shareable representations of an operable dictionary with interpretable domain-specific conceptual features.
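A minimal sketch of what feature grounding could look like in PyTorch, assuming a linear probe from the embedding space onto known interpretable features; all names and the alignment loss are hypothetical illustrations of the idea.

```python
import torch
import torch.nn as nn

class FeatureGroundedEmbedding(nn.Module):
    # Hypothetical sketch: each dictionary entry gets an embedding, and a
    # linear probe maps it onto interpretable features (weight, size, ...);
    # an alignment loss grounds the embedding in those features.
    def __init__(self, vocab_size, dim, n_features):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.to_features = nn.Linear(dim, n_features)

    def alignment_loss(self, token_ids, known_features):
        pred = self.to_features(self.emb(token_ids))
        return nn.functional.mse_loss(pred, known_features)

model = FeatureGroundedEmbedding(vocab_size=1000, dim=64, n_features=8)
loss = model.alignment_loss(torch.tensor([3, 7]), torch.rand(2, 8))
```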
| null |
https://arxiv.org/abs/2506.22442v1
|
https://arxiv.org/pdf/2506.22442v1.pdf
| null |
[
"Piotr Makarevich"
] |
[] | 2025-06-11T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "In the ALIGN method, visual and language representations are jointly trained from noisy image alt-text data. The image and text encoders are learned via contrastive loss (formulated as normalized softmax) that pushes the embeddings of the matched image-text pair together and pushing those of non-matched image-text pair apart. The model learns to align visual and language representations of the image and text pairs using the contrastive loss. The representations can be used for vision-only or vision-language task transfer. Without any fine-tuning, ALIGN powers zero-shot visual classification and cross-modal search including image-to-text search, text-to image search and even search with joint image+text queries.",
"full_name": "ALIGN",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "Involves models that adapt pre-training to the field of Vision-and-Language (V-L) learning and improve the performance on downstream tasks like visual question answering and visual captioning.\r\n\r\nAccording to [Du et al. (2022)](https://arxiv.org/pdf/2202.10936.pdf), information coming from the different modalities can be encoded in three ways: fusion encoder, dual encoder, and a combination of both. \r\n\r\nReferences:\r\n\r\n- [A Survey of Vision-Language Pre-Trained Models](https://arxiv.org/pdf/2202.10936.pdf)\r\n- [Vision Language models: towards multi-modal deep learning](https://theaisummer.com/vision-language-models/)",
"name": "Vision and Language Pre-Trained Models",
"parent": null
},
"name": "ALIGN",
"source_title": "Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision",
"source_url": "https://arxiv.org/abs/2102.05918v2"
},
{
"code_snippet_url": null,
"description": "Dynamic Sparse Training method where weight mask is updated randomly periodically",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
}
] |
https://paperswithcode.com/paper/leveraging-gait-patterns-as-biomarkers-an
|
2504.03894
| null | null |
Leveraging Gait Patterns as Biomarkers: An attention-guided Deep Multiple Instance Learning Network for Scoliosis Classification
|
Scoliosis is a spinal curvature disorder that is difficult to detect early and can compress the chest cavity, impacting respiratory function and cardiac health. Especially for adolescents, delayed detection and treatment result in worsening compression. Traditional scoliosis detection methods heavily rely on clinical expertise, and X-ray imaging poses radiation risks, limiting large-scale early screening. We propose an Attention-Guided Deep Multi-Instance Learning method (Gait-MIL) to effectively capture discriminative features from gait patterns, inspired by ScoNet-MT's pioneering use of gait patterns for scoliosis detection. We evaluate our method on the first large-scale gait-pattern dataset for scoliosis classification. The results demonstrate that our method improves the performance of gait as a biomarker for scoliosis detection and significantly enhances detection accuracy for the particularly challenging Neutral cases, where subtle indicators are often overlooked. Gait-MIL also performs robustly in imbalanced scenarios, making it a promising tool for large-scale scoliosis screening.
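Attention-guided MIL commonly follows the pooling scheme of Ilse et al. (2018); a minimal sketch in that style, assuming each gait segment embedding is one instance in a bag (the paper's exact wiring may differ):

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    # Attention-based MIL pooling: score each instance, softmax the scores
    # into weights, and classify the attention-weighted bag embedding.
    def __init__(self, dim, n_classes):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, 128), nn.Tanh(),
                                  nn.Linear(128, 1))
        self.cls = nn.Linear(dim, n_classes)

    def forward(self, bag):                        # bag: (n_instances, dim)
        a = torch.softmax(self.attn(bag), dim=0)   # weight per instance
        z = (a * bag).sum(dim=0)                   # weighted bag embedding
        return self.cls(z), a.squeeze(-1)          # logits, attention weights

model = AttentionMIL(dim=256, n_classes=3)
logits, weights = model(torch.randn(32, 256))  # 32 gait-segment instances
```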
| null |
https://arxiv.org/abs/2504.03894v1
|
https://arxiv.org/pdf/2504.03894v1.pdf
| null |
[
"Haiqing Li",
"Yuzhi Guo",
"Feng Jiang",
"Qifeng Zhou",
"Hehuan Ma",
"Junzhou Huang"
] |
[
"Multiple Instance Learning"
] | 2025-04-04T00:00:00 | null | null | null | null |
[] |