paper_url stringlengths 35–81 | arxiv_id stringlengths 6–35 ⌀ | nips_id float64 | openreview_id stringlengths 9–93 ⌀ | title stringlengths 1–1.02k ⌀ | abstract stringlengths 0–56.5k ⌀ | short_abstract stringlengths 0–1.95k ⌀ | url_abs stringlengths 16–996 | url_pdf stringlengths 16–996 ⌀ | proceeding stringlengths 7–1.03k ⌀ | authors listlengths 0–3.31k | tasks listlengths 0–147 | date timestamp[ns] 1951-09-01 00:00:00 to 2222-12-22 00:00:00 ⌀ | conference_url_abs stringlengths 16–199 ⌀ | conference_url_pdf stringlengths 21–200 ⌀ | conference stringlengths 2–47 ⌀ | reproduces_paper stringclasses 22 values | methods listlengths 0–7.5k |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://paperswithcode.com/paper/a-survey-on-mllm-based-visually-rich-document
|
2507.09861
| null | null |
A Survey on MLLM-based Visually Rich Document Understanding: Methods, Challenges, and Emerging Trends
|
Visually-Rich Document Understanding (VRDU) has emerged as a critical field, driven by the need to automatically process documents containing complex visual, textual, and layout information. Recently, Multimodal Large Language Models (MLLMs) have shown remarkable potential in this domain, leveraging both Optical Character Recognition (OCR)-dependent and OCR-free frameworks to extract and interpret information in document images. This survey reviews recent advancements in MLLM-based VRDU, highlighting three core components: (1) methods for encoding and fusing textual, visual, and layout features; (2) training paradigms, including pretraining strategies, instruction-response tuning, and the trainability of different model modules; and (3) datasets utilized for pretraining, instruction-tuning, and supervised fine-tuning. Finally, we discuss the challenges and opportunities in this evolving field and propose future directions to advance the efficiency, generalizability, and robustness of VRDU systems.
| null |
https://arxiv.org/abs/2507.09861v1
|
https://arxiv.org/pdf/2507.09861v1.pdf
| null |
[
"Yihao Ding",
"Siwen Luo",
"Yue Dai",
"Yanbei Jiang",
"Zechuan Li",
"Geoffrey Martin",
"Yifan Peng"
] |
[
"document understanding",
"Optical Character Recognition",
"Optical Character Recognition (OCR)"
] | 2025-07-14T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/from-physics-to-foundation-models-a-review-of
|
2507.09081
| null | null |
From Physics to Foundation Models: A Review of AI-Driven Quantitative Remote Sensing Inversion
|
Quantitative remote sensing inversion aims to estimate continuous surface variables, such as biomass, vegetation indices, and evapotranspiration, from satellite observations, supporting applications in ecosystem monitoring, carbon accounting, and land management. With the evolution of remote sensing systems and artificial intelligence, traditional physics-based paradigms are giving way to data-driven and foundation model (FM)-based approaches. This paper systematically reviews the methodological evolution of inversion techniques, from physical models (e.g., PROSPECT, SCOPE, DART) to machine learning methods (e.g., deep learning, multimodal fusion), and further to foundation models (e.g., SatMAE, GFM, mmEarth). We compare the modeling assumptions, application scenarios, and limitations of each paradigm, with emphasis on recent FM advances in self-supervised pretraining, multi-modal integration, and cross-task adaptation. We also highlight persistent challenges in physical interpretability, domain generalization, limited supervision, and uncertainty quantification. Finally, we envision the development of next-generation foundation models for remote sensing inversion, emphasizing unified modeling capacity, cross-domain generalization, and physical interpretability.
| null |
https://arxiv.org/abs/2507.09081v1
|
https://arxiv.org/pdf/2507.09081v1.pdf
| null |
[
"Zhenyu Yu",
"Mohd Yamani Idna Idris",
"Hua Wang",
"Pei Wang",
"Junyi Chen",
"Kun Wang"
] |
[
"Domain Generalization",
"Uncertainty Quantification"
] | 2025-07-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/review-of-feed-forward-3d-reconstruction-from
|
2507.08448
| null | null |
Review of Feed-forward 3D Reconstruction: From DUSt3R to VGGT
|
3D reconstruction, which aims to recover the dense three-dimensional structure of a scene, is a cornerstone technology for numerous applications, including augmented/virtual reality, autonomous driving, and robotics. While traditional pipelines like Structure from Motion (SfM) and Multi-View Stereo (MVS) achieve high precision through iterative optimization, they are limited by complex workflows, high computational cost, and poor robustness in challenging scenarios like texture-less regions. Recently, deep learning has catalyzed a paradigm shift in 3D reconstruction. A new family of models, exemplified by DUSt3R, has pioneered a feed-forward approach. These models employ a unified deep network to jointly infer camera poses and dense geometry directly from an unconstrained set of images in a single forward pass. This survey provides a systematic review of this emerging domain. We begin by dissecting the technical framework of these feed-forward models, including their Transformer-based correspondence modeling, joint pose and geometry regression mechanisms, and strategies for scaling from two-view to multi-view scenarios. To highlight the disruptive nature of this new paradigm, we contrast it with both traditional pipelines and earlier learning-based methods like MVSNet. Furthermore, we provide an overview of relevant datasets and evaluation metrics. Finally, we discuss the technology's broad application prospects and identify key future challenges and opportunities, such as model accuracy and scalability, and handling dynamic scenes.
| null |
https://arxiv.org/abs/2507.08448v1
|
https://arxiv.org/pdf/2507.08448v1.pdf
| null |
[
"Wei Zhang",
"Yihang Wu",
"Songhua Li",
"Wenjie Ma",
"Xin Ma",
"Qiang Li",
"Qi Wang"
] |
[
"3D Reconstruction",
"Autonomous Driving"
] | 2025-07-11T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Dynamic Sparse Training method where weight mask is updated randomly periodically",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
}
] |
https://paperswithcode.com/paper/what-demands-attention-in-urban-street-scenes
|
2507.06513
| null | null |
What Demands Attention in Urban Street Scenes? From Scene Understanding towards Road Safety: A Survey of Vision-driven Datasets and Studies
|
Advances in vision-based sensors and computer vision algorithms have significantly improved the analysis and understanding of traffic scenarios. To facilitate the use of these improvements for road safety, this survey systematically categorizes the critical elements that demand attention in traffic scenarios and comprehensively analyzes available vision-driven tasks and datasets. Compared to existing surveys that focus on isolated domains, our taxonomy categorizes attention-worthy traffic entities into two main groups, anomalies and normal but critical entities, integrating ten categories and twenty subclasses. It establishes connections between inherently related fields and provides a unified analytical framework. Our survey highlights the analysis of 35 vision-driven tasks and comprehensive examinations and visualizations of 73 available datasets based on the proposed taxonomy. The cross-domain investigation covers the pros and cons of each benchmark with the aim of providing information on standards unification and resource optimization. Our article concludes with a systematic discussion of the existing weaknesses, underlining the potential effects and promising solutions from various perspectives. The integrated taxonomy, comprehensive analysis, and recapitulatory tables serve as valuable contributions to this rapidly evolving field by providing researchers with a holistic overview, guiding strategic resource selection, and highlighting critical research gaps.
| null |
https://arxiv.org/abs/2507.06513v2
|
https://arxiv.org/pdf/2507.06513v2.pdf
| null |
[
"Yaoqi Huang",
"Julie Stephany Berrio",
"Mao Shan",
"Stewart Worrall"
] |
[
"Scene Understanding",
"Survey"
] | 2025-07-09T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
}
] |
https://paperswithcode.com/paper/unsupervised-methods-for-video-quality
|
2507.08375
| null | null |
Unsupervised Methods for Video Quality Improvement: A Survey of Restoration and Enhancement Techniques
|
Video restoration and enhancement are critical not only for improving visual quality, but also as essential pre-processing steps to boost the performance of a wide range of downstream computer vision tasks. This survey presents a comprehensive review of video restoration and enhancement techniques with a particular focus on unsupervised approaches. We begin by outlining the most common video degradations and their underlying causes, followed by a review of early conventional and deep learning-based methods, highlighting their strengths and limitations. We then present an in-depth overview of unsupervised methods, categorised by their fundamental approaches, including domain translation, self-supervision signal design, and blind-spot or noise-based methods. We also provide a categorization of loss functions employed in unsupervised video restoration and enhancement, and discuss the role of paired synthetic datasets in enabling objective evaluation. Finally, we identify key challenges and outline promising directions for future research in this field.
| null |
https://arxiv.org/abs/2507.08375v1
|
https://arxiv.org/pdf/2507.08375v1.pdf
| null |
[
"Alexandra Malyugina",
"Yini Li",
"Joanne Lin",
"Nantheera Anantrasirichai"
] |
[
"Video Restoration"
] | 2025-07-11T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
}
] |
https://paperswithcode.com/paper/a-survey-on-interpretability-in-visual
|
2507.11099
| null | null |
A Survey on Interpretability in Visual Recognition
|
In recent years, visual recognition methods have advanced significantly, finding applications across diverse fields. While researchers seek to understand the mechanisms behind the success of these models, there is also a growing impetus to deploy them in critical areas like autonomous driving and medical diagnostics to better diagnose failures, which promotes the development of interpretability research. This paper systematically reviews existing research on the interpretability of visual recognition models and proposes a taxonomy of methods from a human-centered perspective. The proposed taxonomy categorizes interpretable recognition methods based on Intent, Object, Presentation, and Methodology, thereby establishing a systematic and coherent set of grouping criteria for these XAI methods. Additionally, we summarize the requirements for evaluation metrics and explore new opportunities enabled by recent technologies, such as large multimodal models. We aim to organize existing research in this domain and inspire future investigations into the interpretability of visual recognition models.
| null |
https://arxiv.org/abs/2507.11099v1
|
https://arxiv.org/pdf/2507.11099v1.pdf
| null |
[
"Qiyang Wan",
"Chengzhi Gao",
"Ruiping Wang",
"Xilin Chen"
] |
[
"Autonomous Driving",
"Survey"
] | 2025-07-15T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Dynamic Sparse Training method where weight mask is updated randomly periodically",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
}
] |
https://paperswithcode.com/paper/making-language-model-a-hierarchical
|
2507.12930
| null | null |
Making Language Model a Hierarchical Classifier and Generator
|
Decoder-only language models, such as GPT and LLaMA, generally decode on the last layer. Motivated by humans' hierarchical thinking capability, we propose that a hierarchical decoder architecture could be built with different layers decoding texts simultaneously. Due to limited time and computational resources, we choose to adapt a pretrained language model into this form of hierarchical decoder. Language heads of the last layer are copied to different selected intermediate layers, and fine-tuned with different task inputs. Through thorough experiments, we validate that these selected intermediate layers can be adapted to produce meaningful and reasonable content, and that this hierarchical-decoder paradigm obtains state-of-the-art performance on multiple tasks such as hierarchical text classification, classification-guided generation, and hierarchical text generation. This study suggests the possibility of a generalized hierarchical reasoner, pretrained from scratch.
|
Language heads of the last layer are copied to different selected intermediate layers, and fine-tuned with different task inputs.
|
https://arxiv.org/abs/2507.12930v1
|
https://arxiv.org/pdf/2507.12930v1.pdf
| null |
[
"Yihong Wang",
"Zhonglin Jiang",
"Ningyuan Xi",
"Yue Zhao",
"Qingqing Gu",
"Xiyuan Chen",
"Hao Wu",
"Sheng Xu",
"Hange Zhou",
"Yong Chen",
"Luo Ji"
] |
[
"Decoder",
"Language Modeling",
"Language Modelling",
"model",
"text-classification",
"Text Classification",
"Text Generation"
] | 2025-07-17T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "",
"description": "**LLaMA** is a collection of foundation language models ranging from 7B to 65B parameters. It is based on the transformer architecture with various improvements that were subsequently proposed. The main difference with the original architecture are listed below.\r\n\r\n- RMSNorm normalizing function is used to improve the training stability, by normalizing the input of each transformer sub-layer, instead of normalizing the output.\r\n- The ReLU non-linearity is replaced by the SwiGLU activation function to improve performance.\r\n- Absolute positional embeddings are removed and instead rotary positional embeddings (RoPE) are added at each layer of the network.",
"full_name": "LLaMA",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Language Models** are models for predicting the next word or character in a document. Below you can find a continuously updating list of language models.\r\n\r\n",
"name": "Language Models",
"parent": null
},
"name": "LLaMA",
"source_title": "LLaMA: Open and Efficient Foundation Language Models",
"source_url": "https://arxiv.org/abs/2302.13971v1"
},
{
"code_snippet_url": "",
"description": "The **Gaussian Error Linear Unit**, or **GELU**, is an activation function defined as $x\\Phi(x)$, where $\\Phi(x)$ is the standard Gaussian cumulative distribution function. Unlike the ReLU, which gates inputs by their sign, the GELU weights inputs by their value; it is used in models such as BERT and GPT.",
"full_name": "Gaussian Error Linear Unit",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Activation Functions",
"parent": null
},
"name": "GELU",
"source_title": "Gaussian Error Linear Units (GELUs)",
"source_url": "https://arxiv.org/abs/1606.08415v5"
},
{
"code_snippet_url": "https://github.com/huggingface/transformers/blob/4dc65591b5c61d75c3ef3a2a883bf1433e08fc45/src/transformers/modeling_tf_bert.py#L271",
"description": "**Attention Dropout** is a type of [dropout](https://paperswithcode.com/method/dropout) used in attention-based architectures, where elements are randomly dropped out of the [softmax](https://paperswithcode.com/method/softmax) in the attention equation. For example, for scaled-dot product attention, we would drop elements from the first term:\r\n\r\n$$ {\\text{Attention}}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^{T}}{\\sqrt{d_k}}\\right)V $$",
"full_name": "Attention Dropout",
"introduced_year": 2018,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Attention Dropout",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Cosine Annealing** is a type of learning rate schedule that has the effect of starting with a large learning rate that is relatively rapidly decreased to a minimum value before being increased rapidly again. The resetting of the learning rate acts like a simulated restart of the learning process and the re-use of good weights as the starting point of the restart is referred to as a \"warm restart\" in contrast to a \"cold restart\" where a new set of small random numbers may be used as a starting point.\r\n\r\n$$\\eta\\_{t} = \\eta\\_{min}^{i} + \\frac{1}{2}\\left(\\eta\\_{max}^{i}-\\eta\\_{min}^{i}\\right)\\left(1+\\cos\\left(\\frac{T\\_{cur}}{T\\_{i}}\\pi\\right)\\right)\r\n$$\r\n\r\nWhere where $\\eta\\_{min}^{i}$ and $ \\eta\\_{max}^{i}$ are ranges for the learning rate, and $T\\_{cur}$ account for how many epochs have been performed since the last restart.\r\n\r\nText Source: [Jason Brownlee](https://machinelearningmastery.com/snapshot-ensemble-deep-learning-neural-network/)\r\n\r\nImage Source: [Gao Huang](https://www.researchgate.net/figure/Training-loss-of-100-layer-DenseNet-on-CIFAR10-using-standard-learning-rate-blue-and-M_fig2_315765130)",
"full_name": "Cosine Annealing",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Learning Rate Schedules** refer to schedules for the learning rate during the training of neural networks. Below you can find a continuously updating list of learning rate schedules.",
"name": "Learning Rate Schedules",
"parent": null
},
"name": "Cosine Annealing",
"source_title": "SGDR: Stochastic Gradient Descent with Warm Restarts",
"source_url": "http://arxiv.org/abs/1608.03983v5"
},
{
"code_snippet_url": null,
"description": "**Linear Warmup With Cosine Annealing** is a learning rate schedule where we increase the learning rate linearly for $n$ updates and then anneal according to a cosine schedule afterwards.",
"full_name": "Linear Warmup With Cosine Annealing",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Learning Rate Schedules** refer to schedules for the learning rate during the training of neural networks. Below you can find a continuously updating list of learning rate schedules.",
"name": "Learning Rate Schedules",
"parent": null
},
"name": "Linear Warmup With Cosine Annealing",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/fastai/fastai/blob/43001e17ba469308e9688dfe99a891018bcf7ad4/courses/dl2/imdb_scripts/finetune_lm.py#L132",
"description": "**Discriminative Fine-Tuning** is a fine-tuning strategy that is used for [ULMFiT](https://paperswithcode.com/method/ulmfit) type models. Instead of using the same learning rate for all layers of the model, discriminative fine-tuning allows us to tune each layer with different learning rates. For context, the regular stochastic gradient descent ([SGD](https://paperswithcode.com/method/sgd)) update of a model’s parameters $\\theta$ at time step $t$ looks like the following (Ruder, 2016):\r\n\r\n$$ \\theta\\_{t} = \\theta\\_{t-1} − \\eta\\cdot\\nabla\\_{\\theta}J\\left(\\theta\\right)$$\r\n\r\nwhere $\\eta$ is the learning rate and $\\nabla\\_{\\theta}J\\left(\\theta\\right)$ is the gradient with regard to the model’s objective function. For discriminative fine-tuning, we split the parameters $\\theta$ into {$\\theta\\_{1}, \\ldots, \\theta\\_{L}$} where $\\theta\\_{l}$ contains the parameters of the model at the $l$-th layer and $L$ is the number of layers of the model. Similarly, we obtain {$\\eta\\_{1}, \\ldots, \\eta\\_{L}$} where $\\theta\\_{l}$ where $\\eta\\_{l}$ is the learning rate of the $l$-th layer. The SGD update with discriminative finetuning is then:\r\n\r\n$$ \\theta\\_{t}^{l} = \\theta\\_{t-1}^{l} - \\eta^{l}\\cdot\\nabla\\_{\\theta^{l}}J\\left(\\theta\\right) $$\r\n\r\nThe authors find that empirically it worked well to first choose the learning rate $\\eta^{L}$ of the last layer by fine-tuning only the last layer and using $\\eta^{l-1}=\\eta^{l}/2.6$ as the learning rate for lower layers.",
"full_name": "Discriminative Fine-Tuning",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Fine-Tuning** methods in deep learning take existing trained networks and 'fine-tune' them to a new task so that information contained in the weights can be repurposed. Below you can find a continuously updating list of fine-tuning methods.",
"name": "Fine-Tuning",
"parent": null
},
"name": "Discriminative Fine-Tuning",
"source_title": "Universal Language Model Fine-tuning for Text Classification",
"source_url": "http://arxiv.org/abs/1801.06146v5"
},
{
"code_snippet_url": null,
"description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.",
"full_name": "Byte Pair Encoding",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "BPE",
"source_title": "Neural Machine Translation of Rare Words with Subword Units",
"source_url": "http://arxiv.org/abs/1508.07909v5"
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**GPT** is a [Transformer](https://paperswithcode.com/method/transformer)-based architecture and training procedure for natural language processing tasks. Training follows a two-stage procedure. First, a language modeling objective is used on\r\nthe unlabeled data to learn the initial parameters of a neural network model. Subsequently, these parameters are adapted to a target task using the corresponding supervised objective.",
"full_name": "GPT",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "GPT",
"source_title": "Improving Language Understanding by Generative Pre-Training",
"source_url": "https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf"
}
] |
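To make the formula in the Softmax method entry above concrete, here is a minimal NumPy sketch (illustrative only, not part of the dataset):

```python
import numpy as np

def softmax(logits):
    # Shift by the max for numerical stability; softmax is invariant
    # to adding a constant to every logit.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))
print(p.sum())   # 1.0 up to floating point; largest logit gets largest probability
```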
https://paperswithcode.com/paper/real-time-graph-based-point-cloud-networks-on
|
2507.05099
| null | null |
Real-Time Graph-based Point Cloud Networks on FPGAs via Stall-Free Deep Pipelining
|
Graph-based Point Cloud Networks (PCNs) are powerful tools for processing sparse sensor data with irregular geometries, as found in high-energy physics detectors. However, deploying models in such environments remains challenging due to stringent real-time requirements for both latency and throughput. In this work, we present a deeply pipelined dataflow architecture for executing graph-based PCNs on FPGAs. Our method supports efficient processing of dynamic, sparse point clouds while meeting hard real-time constraints. We introduce specialized processing elements for core graph operations, such as GraVNet convolution and condensation point clustering, and demonstrate our design on the AMD Versal VCK190. Compared to a GPU baseline, our FPGA implementation achieves up to 5.25x speedup in throughput while maintaining latencies below 10 μs, satisfying the demands of real-time trigger systems in particle physics experiments. An open-source reference implementation is provided.
| null |
https://arxiv.org/abs/2507.05099v1
|
https://arxiv.org/pdf/2507.05099v1.pdf
| null |
[
"Marc Neu",
"Isabel Haide",
"Timo Justinger",
"Till Rädler",
"Valdrin Dajaku",
"Torben Ferber",
"Jürgen Becker"
] |
[
"GPU"
] | 2025-07-07T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)",
"full_name": "Convolution",
"introduced_year": 1980,
"main_collection": {
"area": "Computer Vision",
"description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.",
"name": "Convolutions",
"parent": "Image Feature Extractors"
},
"name": "Convolution",
"source_title": null,
"source_url": null
}
] |
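The Convolution method entry above describes a kernel sliding over the input, multiplying element-wise and summing. A minimal "valid"-mode sketch (cross-correlation, as most deep-learning libraries implement it; illustrative only):

```python
import numpy as np

def conv2d_valid(image, kernel):
    # Slide the kernel over the image, multiply element-wise with each
    # patch, and sum the result into the output (no padding, stride 1).
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(16.0).reshape(4, 4)
edge = np.array([[1.0, -1.0]])   # simple horizontal-difference kernel
out = conv2d_valid(img, edge)
print(out.shape)                 # (4, 3)
```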
https://paperswithcode.com/paper/da4ml-distributed-arithmetic-for-real-time
|
2507.04535
| null | null |
da4ml: Distributed Arithmetic for Real-time Neural Networks on FPGAs
|
Neural networks with a latency requirement on the order of microseconds, like the ones used at the CERN Large Hadron Collider, are typically deployed on FPGAs fully unrolled and pipelined. A bottleneck for the deployment of such neural networks is area utilization, which is directly related to the required constant matrix-vector multiplication (CMVM) operations. In this work, we propose an efficient algorithm for implementing CMVM operations with distributed arithmetic (DA) on FPGAs that simultaneously optimizes for area consumption and latency. The algorithm achieves resource reduction similar to state-of-the-art algorithms while being significantly faster to compute. The proposed algorithm is open-sourced and integrated into the \texttt{hls4ml} library, a free and open-source library for running real-time neural network inference on FPGAs. We show that the proposed algorithm can reduce on-chip resources by up to a third for realistic, highly quantized neural networks while simultaneously reducing latency, enabling the implementation of previously infeasible networks.
| null |
https://arxiv.org/abs/2507.04535v1
|
https://arxiv.org/pdf/2507.04535v1.pdf
| null |
[
"Chang Sun",
"Zhiqiang Que",
"Vladimir Loncar",
"Wayne Luk",
"Maria Spiropulu"
] |
[] | 2025-07-06T00:00:00 | null | null | null | null |
[] |
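The da4ml abstract above centers on constant matrix-vector multiplication (CMVM) with distributed arithmetic. As a software illustration of the multiplier-free idea only (the paper's algorithm additionally optimizes shared subexpressions, area, and latency; all names below are hypothetical):

```python
def const_mul_shift_add(x, w):
    # Multiply integer x by a known non-negative integer constant w
    # using only shifts and adds -- no hardware multiplier needed.
    acc, bit = 0, 0
    while w:
        if w & 1:
            acc += x << bit
        w >>= 1
        bit += 1
    return acc

def cmvm(W, x):
    # Constant matrix-vector product assembled purely from shift-and-add.
    return [sum(const_mul_shift_add(xj, wij) for wij, xj in zip(row, x))
            for row in W]

W = [[3, 5], [6, 1]]
x = [2, 4]
y = cmvm(W, x)
print(y)   # [26, 16], matching the ordinary matrix-vector product
```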
https://paperswithcode.com/paper/discovering-the-underlying-analytic-structure
|
2507.00225
| null | null |
Discovering the underlying analytic structure within Standard Model constants using artificial intelligence
|
This paper presents a search for underlying analytic structures among the fundamental parameters of the Standard Model (SM) using symbolic regression and genetic programming. We identify the simplest analytic relationships connecting pairs of these constants and report several notable observations based on about a thousand expressions with relative precision better than 1%. These results may serve as valuable inputs for model builders and artificial intelligence methods aimed at uncovering hidden patterns among the SM constants, or potentially used as building blocks for a deeper underlying law that connects all parameters of the SM through a small set of fundamental constants.
| null |
https://arxiv.org/abs/2507.00225v1
|
https://arxiv.org/pdf/2507.00225v1.pdf
| null |
[
"S. V. Chekanov",
"H. Kjellerstrand"
] |
[
"Symbolic Regression"
] | 2025-06-30T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "Dynamic Sparse Training method where weight mask is updated randomly periodically",
"full_name": "Sparse Evolutionary Training",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Sparsity",
"parent": null
},
"name": "SET",
"source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science",
"source_url": "http://arxiv.org/abs/1707.04780v2"
}
] |
https://paperswithcode.com/paper/transforming-calabi-yau-constructions
|
2507.03732
| null | null |
Transforming Calabi-Yau Constructions: Generating New Calabi-Yau Manifolds with Transformers
|
Fine, regular, and star triangulations (FRSTs) of four-dimensional reflexive polytopes give rise to toric varieties, within which generic anticanonical hypersurfaces yield smooth Calabi-Yau threefolds. We employ transformers -- deep learning models originally developed for language modeling -- to generate FRSTs across a range of polytope sizes. Our models exhibit efficient and unbiased sampling, and can self-improve through retraining on their own output. These results lay the foundation for AICY: a community-driven platform that combines self-improving machine learning models with a continuously expanding FRST database to explore and catalog the Calabi-Yau landscape.
| null |
https://arxiv.org/abs/2507.03732v1
|
https://arxiv.org/pdf/2507.03732v1.pdf
| null |
[
"Jacky H. T. Yip",
"Charles Arnal",
"Francois Charton",
"Gary Shiu"
] |
[
"Language Modeling",
"Language Modelling"
] | 2025-07-04T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/neural-simulation-based-inference-of-the-1
|
2507.02032
| null | null |
Neural simulation-based inference of the Higgs trilinear self-coupling via off-shell Higgs production
|
One of the forthcoming major challenges in particle physics is the experimental determination of the Higgs trilinear self-coupling. While efforts have largely focused on on-shell double- and single-Higgs production in proton-proton collisions, off-shell Higgs production has also been proposed as a valuable complementary probe. In this article, we design a hybrid neural simulation-based inference (NSBI) approach to construct a likelihood of the Higgs signal incorporating modifications from the Standard Model effective field theory (SMEFT), relevant background processes, and quantum interference effects. It leverages the training efficiency of matrix-element-enhanced techniques, which are vital for robust SMEFT applications, while also incorporating the practical advantages of classification-based methods for effective background estimates. We demonstrate that our NSBI approach achieves sensitivity close to the theoretical optimum and provide expected constraints for the high-luminosity upgrade of the Large Hadron Collider. While we primarily concentrate on the Higgs trilinear self-coupling, we also consider constraints on other SMEFT operators that affect off-shell Higgs production.
| null |
https://arxiv.org/abs/2507.02032v1
|
https://arxiv.org/pdf/2507.02032v1.pdf
| null |
[
"Aishik Ghosh",
"Maximilian Griese",
"Ulrich Haisch",
"Tae Hyoun Park"
] |
[] | 2025-07-02T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/learning-pole-structures-of-hadronic-states
|
2507.07668
| null | null |
Learning Pole Structures of Hadronic States using Predictive Uncertainty Estimation
|
Matching theoretical predictions to experimental data remains a central challenge in hadron spectroscopy. In particular, the identification of new hadronic states is difficult, as exotic signals near threshold can arise from a variety of physical mechanisms. A key diagnostic in this context is the pole structure of the scattering amplitude, but different configurations can produce similar signatures. The mapping between pole configurations and line shapes is especially ambiguous near the mass threshold, where analytic control is limited. In this work, we introduce an uncertainty-aware machine learning approach for classifying pole structures in $S$-matrix elements. Our method is based on an ensemble of classifier chains that provide both epistemic and aleatoric uncertainty estimates. We apply a rejection criterion based on predictive uncertainty, achieving a validation accuracy of nearly $95\%$ while discarding only a small fraction of high-uncertainty predictions. Trained on synthetic data with known pole structures, the model generalizes to previously unseen experimental data, including enhancements associated with the $P_{c\bar{c}}(4312)^+$ state observed by LHCb. In this, we infer a four-pole structure, representing the presence of a genuine compact pentaquark in the presence of a higher channel virtual state pole with non-vanishing width. While evaluated on this particular state, our framework is broadly applicable to other candidate hadronic states and offers a scalable tool for pole structure inference in scattering amplitudes.
| null |
https://arxiv.org/abs/2507.07668v2
|
https://arxiv.org/pdf/2507.07668v2.pdf
| null |
[
"Felix Frohnert",
"Denny Lane B. Sombillo",
"Evert van Nieuwenburg",
"Patrick Emonts"
] |
[
"Diagnostic"
] | 2025-07-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/the-neural-networks-with-tensor-weights-and
|
2507.05303
| null | null |
The Neural Networks with Tensor Weights and the Corresponding Fermionic Quantum Field Theory
|
In this paper, we establish a theoretical connection between complex-valued neural networks (CVNNs) and fermionic quantum field theory (QFT), bridging a fundamental gap in the emerging framework of neural network quantum field theory (NN-QFT). While prior NN-QFT works have linked real-valued architectures to bosonic fields, we demonstrate that CVNNs equipped with tensor-valued weights intrinsically generate fermionic quantum fields. By promoting hidden-to-output weights to Clifford algebra-valued tensors, we induce anticommutation relations essential for fermionic statistics. Through analytical study of the generating functional, we obtain the exact quantum state in the infinite-width limit, revealing that the parameters between the input layer and the last hidden layer correspond to the eigenvalues of the quantum system, and the tensor weighting parameters in the hidden-to-output layer map to dynamical fermionic fields. The continuum limit reproduces free fermion correlators, with diagrammatic expansions confirming anticommutation. The work provides the first explicit mapping from neural architectures to fermionic QFT at the level of correlation functions and generating functional. It extends NN-QFT beyond bosonic theories and opens avenues for encoding fermionic symmetries into machine learning models, with potential applications in quantum simulation and lattice field theory.
| null |
https://arxiv.org/abs/2507.05303v1
|
https://arxiv.org/pdf/2507.05303v1.pdf
| null |
[
"Guojun Huang",
"Kai Zhou"
] |
[] | 2025-07-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/pgt-i-scaling-spatiotemporal-gnns-with-memory
|
2507.11683
| null | null |
PGT-I: Scaling Spatiotemporal GNNs with Memory-Efficient Distributed Training
|
Spatiotemporal graph neural networks (ST-GNNs) are powerful tools for modeling spatial and temporal data dependencies. However, their applications have been limited primarily to small-scale datasets because of memory constraints. While distributed training offers a solution, current frameworks lack support for spatiotemporal models and overlook the properties of spatiotemporal data. Informed by a scaling study on a large-scale workload, we present PyTorch Geometric Temporal Index (PGT-I), an extension to PyTorch Geometric Temporal that integrates distributed data parallel training and two novel strategies: index-batching and distributed-index-batching. Our index techniques exploit spatiotemporal structure to construct snapshots dynamically at runtime, significantly reducing memory overhead, while distributed-index-batching extends this approach by enabling scalable processing across multiple GPUs. Our techniques enable the first-ever training of an ST-GNN on the entire PeMS dataset without graph partitioning, reducing peak memory usage by up to 89\% and achieving up to a 13.1x speedup over standard DDP with 128 GPUs.
|
Spatiotemporal graph neural networks (ST-GNNs) are powerful tools for modeling spatial and temporal data dependencies.
|
https://arxiv.org/abs/2507.11683v1
|
https://arxiv.org/pdf/2507.11683v1.pdf
| null |
[
"Seth Ockerman",
"Amal Gueroudji",
"Tanwi Mallick",
"Yixuan He",
"Line Pouchard",
"Robert Ross",
"Shivaram Venkataraman"
] |
[
"graph partitioning"
] | 2025-07-15T00:00:00 | null | null | null | null |
[] |
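The PGT-I abstract above describes index-batching: constructing spatiotemporal snapshots dynamically at runtime instead of materializing them all in memory. A rough single-process sketch of that idea (names and shapes are assumptions; PGT-I's actual implementation is distributed and integrated with PyTorch Geometric Temporal):

```python
import numpy as np

def index_batches(data, window, batch_size):
    # Keep one shared (T, N, F) array and represent each training example
    # only by its start index; slice windows out lazily per batch.
    T = data.shape[0]
    starts = np.arange(T - window)          # one index per sliding window
    for i in range(0, len(starts), batch_size):
        idx = starts[i:i + batch_size]
        yield np.stack([data[s:s + window] for s in idx])

data = np.zeros((100, 8, 4))                # T=100 steps, N=8 nodes, F=4 features
batches = list(index_batches(data, window=12, batch_size=16))
print(len(batches), batches[0].shape)       # 6 batches; first is (16, 12, 8, 4)
```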
https://paperswithcode.com/paper/rohoi-robustness-benchmark-for-human-object
|
2507.09111
| null | null |
RoHOI: Robustness Benchmark for Human-Object Interaction Detection
|
Human-Object Interaction (HOI) detection is crucial for robot-human assistance, enabling context-aware support. However, models trained on clean datasets degrade in real-world conditions due to unforeseen corruptions, leading to inaccurate prediction. To address this, we introduce the first robustness benchmark for HOI detection, evaluating model resilience under diverse challenges. Despite advances, current models struggle with environmental variability, occlusion, and noise. Our benchmark, RoHOI, includes 20 corruption types based on HICO-DET and V-COCO datasets and a new robustness-focused metric. We systematically analyze existing models in the related field, revealing significant performance drops under corruptions. To improve robustness, we propose a Semantic-Aware Masking-based Progressive Learning (SAMPL) strategy to guide the model to be optimized based on holistic and partial cues, dynamically adjusting the model's optimization to enhance robust feature learning. Extensive experiments show our approach outperforms state-of-the-art methods, setting a new standard for robust HOI detection. Benchmarks, datasets, and code will be made publicly available at https://github.com/Kratos-Wen/RoHOI.
| null |
https://arxiv.org/abs/2507.09111v1
|
https://arxiv.org/pdf/2507.09111v1.pdf
| null |
[
"Di Wen",
"Kunyu Peng",
"Kailun Yang",
"Yufan Chen",
"Ruiping Liu",
"Junwei Zheng",
"Alina Roitberg",
"Rainer Stiefelhagen"
] |
[
"Human-Object Interaction Detection",
"Object"
] | 2025-07-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/mitigating-plasticity-loss-in-continual
|
2506.00592
| null | null |
Mitigating Plasticity Loss in Continual Reinforcement Learning by Reducing Churn
|
Plasticity, or the ability of an agent to adapt to new tasks, environments, or distributions, is crucial for continual learning. In this paper, we study the loss of plasticity in deep continual RL from the lens of churn: network output variability for out-of-batch data induced by mini-batch training. We demonstrate that (1) the loss of plasticity is accompanied by the exacerbation of churn due to the gradual rank decrease of the Neural Tangent Kernel (NTK) matrix; (2) reducing churn helps prevent rank collapse and adjusts the step size of regular RL gradients adaptively. Moreover, we introduce Continual Churn Approximated Reduction (C-CHAIN) and demonstrate it improves learning performance and outperforms baselines in a diverse range of continual learning environments on OpenAI Gym Control, ProcGen, DeepMind Control Suite, and MinAtar benchmarks.
| null |
https://arxiv.org/abs/2506.00592v1
|
https://arxiv.org/pdf/2506.00592v1.pdf
| null |
[
"Hongyao Tang",
"Johan Obando-Ceron",
"Pablo Samuel Castro",
"Aaron Courville",
"Glen Berseth"
] |
[
"Continual Learning",
"OpenAI Gym"
] | 2025-05-31T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/addressing-imbalanced-domain-incremental
|
2507.07100
| null | null |
Addressing Imbalanced Domain-Incremental Learning through Dual-Balance Collaborative Experts
|
Domain-Incremental Learning (DIL) focuses on continual learning in non-stationary environments, requiring models to adjust to evolving domains while preserving historical knowledge. DIL faces two critical challenges in the context of imbalanced data: intra-domain class imbalance and cross-domain class distribution shifts. These challenges significantly hinder model performance, as intra-domain imbalance leads to underfitting of few-shot classes, while cross-domain shifts require maintaining well-learned many-shot classes and transferring knowledge to improve few-shot class performance in old domains. To overcome these challenges, we introduce the Dual-Balance Collaborative Experts (DCE) framework. DCE employs a frequency-aware expert group, where each expert is guided by specialized loss functions to learn features for specific frequency groups, effectively addressing intra-domain class imbalance. Subsequently, a dynamic expert selector is learned by synthesizing pseudo-features through balanced Gaussian sampling from historical class statistics. This mechanism navigates the trade-off between preserving many-shot knowledge of previous domains and leveraging new data to improve few-shot class performance in earlier tasks. Extensive experimental results on four benchmark datasets demonstrate DCE's state-of-the-art performance.
| null |
https://arxiv.org/abs/2507.07100v1
|
https://arxiv.org/pdf/2507.07100v1.pdf
| null |
[
"Lan Li",
"Da-Wei Zhou",
"Han-Jia Ye",
"De-Chuan Zhan"
] |
[
"Continual Learning",
"Incremental Learning"
] | 2025-07-09T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/l3a-label-augmented-analytic-adaptation-for
|
2506.00816
| null | null |
L3A: Label-Augmented Analytic Adaptation for Multi-Label Class Incremental Learning
|
Class-incremental learning (CIL) enables models to learn new classes continually without forgetting previously acquired knowledge. Multi-label CIL (MLCIL) extends CIL to a real-world scenario where each sample may belong to multiple classes, introducing several challenges: label absence, which leads to incomplete historical information due to missing labels, and class imbalance, which results in the model bias toward majority classes. To address these challenges, we propose Label-Augmented Analytic Adaptation (L3A), an exemplar-free approach without storing past samples. L3A integrates two key modules. The pseudo-label (PL) module implements label augmentation by generating pseudo-labels for current phase samples, addressing the label absence problem. The weighted analytic classifier (WAC) derives a closed-form solution for neural networks. It introduces sample-specific weights to adaptively balance the class contribution and mitigate class imbalance. Experiments on MS-COCO and PASCAL VOC datasets demonstrate that L3A outperforms existing methods in MLCIL tasks. Our code is available at https://github.com/scut-zx/L3A.
| null |
https://arxiv.org/abs/2506.00816v1
|
https://arxiv.org/pdf/2506.00816v1.pdf
| null |
[
"Xiang Zhang",
"Run He",
"Jiao Chen",
"Di Fang",
"Ming Li",
"Ziqian Zeng",
"Cen Chen",
"Huiping Zhuang"
] |
[
"class-incremental learning",
"Class Incremental Learning",
"Exemplar-Free",
"Incremental Learning",
"Missing Labels",
"Pseudo Label"
] | 2025-06-01T00:00:00 | null | null | null | null |
[] |
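The L3A abstract above says the weighted analytic classifier (WAC) derives a closed-form solution with sample-specific weights. A generic weighted ridge-regression closed form, W = (XᵀSX + λI)⁻¹XᵀSY, is a plausible sketch of that component; the paper's exact weighting scheme and formulation may differ:

```python
import numpy as np

def weighted_analytic_classifier(X, Y, s, lam=1e-2):
    # Closed-form weighted ridge solution; S = diag(s) carries the
    # per-sample weights used to rebalance class contributions.
    S = np.diag(s)
    d = X.shape[1]
    return np.linalg.solve(X.T @ S @ X + lam * np.eye(d), X.T @ S @ Y)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Y = np.eye(3)[rng.integers(0, 3, size=100)]   # one-hot targets, 3 classes
s = np.ones(100)                              # uniform weights as a baseline
W = weighted_analytic_classifier(X, Y, s)
print(W.shape)                                # (5, 3)
```

With uniform weights this reduces to ordinary ridge regression, which is a quick sanity check on the formula.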
https://paperswithcode.com/paper/idpa-instance-decoupled-prompt-attention-for
|
2506.00406
| null | null |
iDPA: Instance Decoupled Prompt Attention for Incremental Medical Object Detection
|
Existing prompt-based approaches have demonstrated impressive performance in continual learning, leveraging pre-trained large-scale models for classification tasks; however, the tight coupling between foreground-background information and the coupled attention between prompts and image-text tokens present significant challenges in incremental medical object detection tasks, due to the conceptual gap between medical and natural domains. To overcome these challenges, we introduce the iDPA framework, which comprises two main components: 1) Instance-level Prompt Generation (IPG), which decouples fine-grained instance-level knowledge from images and generates prompts that focus on dense predictions, and 2) Decoupled Prompt Attention (DPA), which decouples the original prompt attention, enabling a more direct and efficient transfer of prompt information while reducing memory usage and mitigating catastrophic forgetting. We collect 13 clinical, cross-modal, multi-organ, and multi-category datasets, referred to as \dataset, and experiments demonstrate that iDPA outperforms existing SOTA methods, with FAP improvements of 5.44\%, 4.83\%, 12.88\%, and 4.59\% in full data, 1-shot, 10-shot, and 50-shot settings, respectively.
| null |
https://arxiv.org/abs/2506.00406v1
|
https://arxiv.org/pdf/2506.00406v1.pdf
| null |
[
"Huahui Yi",
"Wei Xu",
"Ziyuan Qin",
"Xi Chen",
"Xiaohu Wu",
"Kang Li",
"Qicheng Lao"
] |
[
"Continual Learning",
"Medical Object Detection",
"object-detection",
"Object Detection"
] | 2025-05-31T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
}
] |
https://paperswithcode.com/paper/freeaudio-training-free-timing-planning-for
|
2507.08557
| null | null |
FreeAudio: Training-Free Timing Planning for Controllable Long-Form Text-to-Audio Generation
|
Text-to-audio (T2A) generation has achieved promising results with the recent advances in generative models. However, because of the limited quality and quantity of temporally-aligned audio-text pairs, existing T2A methods struggle to handle the complex text prompts that contain precise timing control, e.g., "owl hooted at 2.4s-5.2s". Recent works have explored data augmentation techniques or introduced timing conditions as model inputs to enable timing-conditioned 10-second T2A generation, while their synthesis quality is still limited. In this work, we propose a novel training-free timing-controlled T2A framework, FreeAudio, making the first attempt to enable timing-controlled long-form T2A generation, e.g., "owl hooted at 2.4s-5.2s and crickets chirping at 0s-24s". Specifically, we first employ an LLM to plan non-overlapping time windows and recaption each with a refined natural language description, based on the input text and timing prompts. Then we introduce: 1) Decoupling and Aggregating Attention Control for precise timing control; 2) Contextual Latent Composition for local smoothness and Reference Guidance for global consistency. Extensive experiments show that: 1) FreeAudio achieves state-of-the-art timing-conditioned T2A synthesis quality among training-free methods and is comparable to leading training-based methods; 2) FreeAudio demonstrates comparable long-form generation quality with training-based Stable Audio and paves the way for timing-controlled long-form T2A synthesis. Demo samples are available at: https://freeaudio.github.io/FreeAudio/
| null |
https://arxiv.org/abs/2507.08557v1
|
https://arxiv.org/pdf/2507.08557v1.pdf
| null |
[
"YuXuan Jiang",
"Zehua Chen",
"Zeqian Ju",
"Chang Li",
"Weibei Dou",
"Jun Zhu"
] |
[
"Audio Generation",
"Data Augmentation",
"Form"
] | 2025-07-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/midi-valle-improving-expressive-piano
|
2507.08530
| null | null |
MIDI-VALLE: Improving Expressive Piano Performance Synthesis Through Neural Codec Language Modelling
|
Generating expressive audio performances from music scores requires models to capture both instrument acoustics and human interpretation. Traditional music performance synthesis pipelines follow a two-stage approach, first generating expressive performance MIDI from a score, then synthesising the MIDI into audio. However, the synthesis models often struggle to generalise across diverse MIDI sources, musical styles, and recording environments. To address these challenges, we propose MIDI-VALLE, a neural codec language model adapted from the VALLE framework, which was originally designed for zero-shot personalised text-to-speech (TTS) synthesis. For performance MIDI-to-audio synthesis, we improve the architecture to condition on a reference audio performance and its corresponding MIDI. Unlike previous TTS-based systems that rely on piano rolls, MIDI-VALLE encodes both MIDI and audio as discrete tokens, facilitating a more consistent and robust modelling of piano performances. Furthermore, the model's generalisation ability is enhanced by training on an extensive and diverse piano performance dataset. Evaluation results show that MIDI-VALLE significantly outperforms a state-of-the-art baseline, achieving over 75% lower Frechet Audio Distance on the ATEPP and Maestro datasets. In the listening test, MIDI-VALLE received 202 votes compared to 58 for the baseline, demonstrating improved synthesis quality and generalisation across diverse performance MIDI inputs.
| null |
https://arxiv.org/abs/2507.08530v1
|
https://arxiv.org/pdf/2507.08530v1.pdf
| null |
[
"Jingjing Tang",
"Xin Wang",
"Zhe Zhang",
"Junichi Yamagishi",
"Geraint Wiggins",
"George Fazekas"
] |
[
"Audio Synthesis",
"Language Modelling",
"text-to-speech",
"Text to Speech"
] | 2025-07-11T00:00:00 | null | null | null | null |
[] |
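The MIDI-VALLE abstract above reports results in Fréchet Audio Distance, i.e. the Fréchet distance between Gaussians fitted to embedding sets of real and generated audio. An illustrative sketch, restricted to diagonal covariances for simplicity (the real metric uses full covariance matrices and a matrix square root):

```python
import numpy as np

def frechet_distance_diag(x, y):
    # Fréchet distance between N(mu1, diag v1) and N(mu2, diag v2)
    # fitted to two embedding sets: ||mu1-mu2||^2 + sum(v1+v2-2*sqrt(v1*v2)).
    mu1, mu2 = x.mean(0), y.mean(0)
    v1, v2 = x.var(0), y.var(0)
    return float(np.sum((mu1 - mu2) ** 2) + np.sum(v1 + v2 - 2 * np.sqrt(v1 * v2)))

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(2000, 8))
b = rng.normal(0.0, 1.0, size=(2000, 8))   # same distribution -> small distance
c = rng.normal(3.0, 1.0, size=(2000, 8))   # shifted distribution -> larger
d_same = frechet_distance_diag(a, b)
d_far = frechet_distance_diag(a, c)
print(d_same < d_far)   # True
```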
https://paperswithcode.com/paper/complex-non-backtracking-matrix-for-directed
|
2507.12503
| null | null |
Complex non-backtracking matrix for directed graphs
|
Graph representation matrices are essential tools in graph data analysis. Recently, Hermitian adjacency matrices have been proposed to investigate directed graph structures. Previous studies have demonstrated that these matrices can extract valuable information for clustering. In this paper, we propose the complex non-backtracking matrix that integrates the properties of the Hermitian adjacency matrix and the non-backtracking matrix. The proposed matrix has properties similar to those of the non-backtracking matrix of undirected graphs. We reveal relationships between the complex non-backtracking matrix and the Hermitian adjacency matrix. We also provide intriguing insights that this matrix representation holds cluster information, particularly for sparse directed graphs.
| null |
https://arxiv.org/abs/2507.12503v1
|
https://arxiv.org/pdf/2507.12503v1.pdf
| null |
[
"Keishi Sando",
"Hideitsu Hino"
] |
[] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
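The abstract above builds on the Hermitian adjacency matrix of a directed graph. Under one common convention (conventions vary across papers): a mutual edge contributes 1 in both positions, while a one-way edge u→v contributes i at (u, v) and −i at (v, u), so the matrix is Hermitian and has real eigenvalues. A small illustrative sketch:

```python
import numpy as np

def hermitian_adjacency(n, edges):
    # Mutual edges get 1 in both entries; a one-way edge u->v gets
    # i at (u, v) and -i at (v, u), making H Hermitian by construction.
    H = np.zeros((n, n), dtype=complex)
    es = set(edges)
    for u, v in es:
        if (v, u) in es:
            H[u, v] = H[v, u] = 1.0
        else:
            H[u, v] = 1j
            H[v, u] = -1j
    return H

H = hermitian_adjacency(3, [(0, 1), (1, 2), (2, 1)])
print(np.allclose(H, H.conj().T))   # True: Hermitian, hence real spectrum
```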
https://paperswithcode.com/paper/zclassifier-temperature-tuning-and-manifold
|
2507.10638
| null | null |
ZClassifier: Temperature Tuning and Manifold Approximation via KL Divergence on Logit Space
|
We introduce a novel classification framework, ZClassifier, that replaces conventional deterministic logits with diagonal Gaussian-distributed logits. Our method simultaneously addresses temperature scaling and manifold approximation by minimizing the Kullback-Leibler (KL) divergence between the predicted Gaussian distributions and a unit isotropic Gaussian. This unifies uncertainty calibration and latent control in a principled probabilistic manner, enabling a natural interpretation of class confidence and geometric consistency. Experiments on CIFAR-10 show that ZClassifier improves over softmax classifiers in robustness, calibration, and latent separation.
|
We introduce a novel classification framework, ZClassifier, that replaces conventional deterministic logits with diagonal Gaussian-distributed logits.
|
https://arxiv.org/abs/2507.10638v2
|
https://arxiv.org/pdf/2507.10638v2.pdf
| null |
[
"Shim Soon Yong"
] |
[
"Out of Distribution (OOD) Detection"
] | 2025-07-14T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
}
] |
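The ZClassifier abstract above minimizes the KL divergence between predicted diagonal Gaussian logits and a unit isotropic Gaussian; that divergence has the well-known closed form ½Σ(σ² + μ² − 1 − ln σ²). A sketch of just that regularizer (the full classifier and its training loop are not reproduced here):

```python
import numpy as np

def kl_diag_gaussian_to_standard(mu, log_var):
    # KL( N(mu, diag(exp(log_var))) || N(0, I) )
    # = 0.5 * sum( sigma^2 + mu^2 - 1 - log sigma^2 )
    var = np.exp(log_var)
    return 0.5 * np.sum(var + mu ** 2 - 1.0 - log_var)

# Standard-normal logits incur zero penalty; shifted means are penalized.
k0 = kl_diag_gaussian_to_standard(np.zeros(10), np.zeros(10))
k1 = kl_diag_gaussian_to_standard(np.ones(10), np.zeros(10))
print(k0, k1)   # 0.0 and 5.0
```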
https://paperswithcode.com/paper/watch-listen-understand-mislead-tri-modal
|
2507.11968
| null | null |
Watch, Listen, Understand, Mislead: Tri-modal Adversarial Attacks on Short Videos for Content Appropriateness Evaluation
|
Multimodal Large Language Models (MLLMs) are increasingly used for content moderation, yet their robustness in short-form video contexts remains underexplored. Current safety evaluations often rely on unimodal attacks, failing to address combined attack vulnerabilities. In this paper, we introduce a comprehensive framework for evaluating the tri-modal safety of MLLMs. First, we present the Short-Video Multimodal Adversarial (SVMA) dataset, comprising diverse short-form videos with human-guided synthetic adversarial attacks. Second, we propose ChimeraBreak, a novel tri-modal attack strategy that simultaneously challenges visual, auditory, and semantic reasoning pathways. Extensive experiments on state-of-the-art MLLMs reveal significant vulnerabilities with high Attack Success Rates (ASR). Our findings uncover distinct failure modes, showing model biases toward misclassifying benign or policy-violating content. We assess results using LLM-as-a-judge, demonstrating attack reasoning efficacy. Our dataset and findings provide crucial insights for developing more robust and safe MLLMs.
| null |
https://arxiv.org/abs/2507.11968v1
|
https://arxiv.org/pdf/2507.11968v1.pdf
| null |
[
"Sahid Hossain Mustakim",
"S M Jishanul Islam",
"Ummay Maria Muna",
"Montasir Chowdhury",
"Mohammed Jawwadul Islam",
"Sadia Ahmmed",
"Tashfia Sikder",
"Syed Tasdid Azam Dhrubo",
"Swakkhar Shatabda"
] |
[] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/dapfam-a-domain-aware-patent-retrieval
|
2506.22141
| null | null |
DAPFAM: A Domain-Aware Patent Retrieval Dataset Aggregated at the Family Level
|
In the landscape of publicly available patent retrieval datasets, the need for explicit in-domain and out-of-domain labeling, multi-jurisdiction coverage, balanced query domain representation, and manageable sizes that support sub-document-level experiments on moderate computational resources is often overlooked. To address these gaps, we propose DAPFAM, a new open-access domain-aware patent retrieval dataset constructed at the simple-family level. The dataset contains 1,247 domain-balanced full-text query families and 45,336 full-text target families. The dataset is enriched by clear relevance judgments (forward/backward citations as positive links, random negatives), as well as explicit in-domain or out-of-domain relationships via a novel labelling scheme based on International Patent Classification (IPC) codes, resulting in 49,869 evaluation pairs. The dataset is multi-jurisdictional, requires little to no preprocessing for retrieval evaluation, and remains of a size manageable for entities with limited resources, allowing for sub-document-level retrieval experiments without excessive computational costs. We describe our three-step data-curation pipeline, present comprehensive dataset statistics, and provide baseline experiments using lexical and neural retrieval methods. Our baseline experiments highlight significant challenges in cross-domain patent retrieval. The dataset will be publicly available (for now the access link is this repository: https://osf.io/vbyzd/?view_only=1a40242e0d1941a58aa854af3e50cf6b).
| null |
https://arxiv.org/abs/2506.22141v1
|
https://arxiv.org/pdf/2506.22141v1.pdf
| null |
[
"Iliass Ayaou",
"Denis Cavallucci",
"Hicham Chibane"
] |
[
"Patent classification",
"Retrieval"
] | 2025-06-27T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-fuzzy-approach-to-project-success-measuring
|
2507.12653
| null | null |
A Fuzzy Approach to Project Success: Measuring What Matters
|
This paper introduces a novel approach to project success evaluation by integrating fuzzy logic into an existing construct. Traditional Likert-scale measures often overlook the context-dependent and multifaceted nature of project success. The proposed hierarchical Type-1 Mamdani fuzzy system prioritizes sustained positive impact for end-users, reducing emphasis on secondary outcomes like stakeholder satisfaction and internal project success. This dynamic approach may provide a more accurate measure of project success and could be adaptable to complex evaluations. Future research will focus on empirical testing and broader applications of fuzzy logic in social science.
|
This paper introduces a novel approach to project success evaluation by integrating fuzzy logic into an existing construct.
|
https://arxiv.org/abs/2507.12653v1
|
https://arxiv.org/pdf/2507.12653v1.pdf
| null |
[
"João Granja-Correia",
"Remedios Hernández-Linares",
"Luca Ferranti",
"Arménio Rego"
] |
[] | 2025-07-16T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
}
] |
https://paperswithcode.com/paper/from-roots-to-rewards-dynamic-tree-reasoning
|
2507.13142
| null | null |
From Roots to Rewards: Dynamic Tree Reasoning with RL
|
Modern language models address complex questions through chain-of-thought (CoT) reasoning (Wei et al., 2023) and retrieval augmentation (Lewis et al., 2021), yet struggle with error propagation and knowledge integration. Tree-structured reasoning methods, particularly the Probabilistic Tree-of-Thought (ProbTree) (Cao et al., 2023) framework, mitigate these issues by decomposing questions into hierarchical structures and selecting answers through confidence-weighted aggregation of parametric and retrieved knowledge (Yao et al., 2023). However, ProbTree's static implementation introduces two key limitations: (1) the reasoning tree is fixed during the initial construction phase, preventing dynamic adaptation to intermediate results, and (2) each node requires exhaustive evaluation of all possible solution strategies, creating computational inefficiency. We present a dynamic reinforcement learning (Sutton and Barto, 2018) framework that transforms tree-based reasoning into an adaptive process. Our approach incrementally constructs the reasoning tree based on real-time confidence estimates, while learning optimal policies for action selection (decomposition, retrieval, or aggregation). This maintains ProbTree's probabilistic rigor while improving both solution quality and computational efficiency through selective expansion and focused resource allocation. The work establishes a new paradigm for tree-structured reasoning that balances the reliability of probabilistic frameworks with the flexibility required for real-world question answering systems.
|
Modern language models address complex questions through chain-of-thought (CoT) reasoning (Wei et al., 2023) and retrieval augmentation (Lewis et al., 2021), yet struggle with error propagation and knowledge integration.
|
https://arxiv.org/abs/2507.13142v1
|
https://arxiv.org/pdf/2507.13142v1.pdf
| null |
[
"Ahmed Bahloul",
"Simon Malberg"
] |
[
"Computational Efficiency",
"Question Answering",
"Retrieval"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/investigating-forecasting-models-for-pandemic
|
2507.12966
| null | null |
Investigating Forecasting Models for Pandemic Infections Using Heterogeneous Data Sources: A 2-year Study with COVID-19
|
Emerging in December 2019, the COVID-19 pandemic caused widespread health, economic, and social disruptions. Rapid global transmission overwhelmed healthcare systems, resulting in high infection rates, hospitalisations, and fatalities. To minimise the spread, governments implemented several non-pharmaceutical interventions like lockdowns and travel restrictions. While effective in controlling transmission, these measures also posed significant economic and societal challenges. Although the WHO declared COVID-19 no longer a global health emergency in May 2023, its impact persists, shaping public health strategies. The vast amount of data collected during the pandemic offers valuable insights into disease dynamics, transmission, and intervention effectiveness. Leveraging these insights can improve forecasting models, enhancing preparedness and response to future outbreaks while mitigating their social and economic impact. This paper presents a large-scale case study on COVID-19 forecasting in Cyprus, utilising a two-year dataset that integrates epidemiological data, vaccination records, policy measures, and weather conditions. We analyse infection trends, assess forecasting performance, and examine the influence of external factors on disease dynamics. The insights gained contribute to improved pandemic preparedness and response strategies.
| null |
https://arxiv.org/abs/2507.12966v1
|
https://arxiv.org/pdf/2507.12966v1.pdf
| null |
[
"Zacharias Komodromos",
"Kleanthis Malialis",
"Panayiotis Kolios"
] |
[] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/dasvit-differentiable-architecture-search-for
|
2507.13079
| null | null |
DASViT: Differentiable Architecture Search for Vision Transformer
|
Designing effective neural networks is a cornerstone of deep learning, and Neural Architecture Search (NAS) has emerged as a powerful tool for automating this process. Among the existing NAS approaches, Differentiable Architecture Search (DARTS) has gained prominence for its efficiency and ease of use, inspiring numerous advancements. Since the rise of Vision Transformers (ViT), researchers have applied NAS to explore ViT architectures, often focusing on macro-level search spaces and relying on discrete methods like evolutionary algorithms. While these methods ensure reliability, they face challenges in discovering innovative architectural designs, demand extensive computational resources, and are time-intensive. To address these limitations, we introduce Differentiable Architecture Search for Vision Transformer (DASViT), which bridges the gap in differentiable search for ViTs and uncovers novel designs. Experiments show that DASViT delivers architectures that break traditional Transformer encoder designs, outperform ViT-B/16 on multiple datasets, and achieve superior efficiency with fewer parameters and FLOPs.
| null |
https://arxiv.org/abs/2507.13079v1
|
https://arxiv.org/pdf/2507.13079v1.pdf
| null |
[
"Pengjin Wu",
"Ferrante Neri",
"ZhenHua Feng"
] |
[
"Evolutionary Algorithms",
"Neural Architecture Search"
] | 2025-07-17T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "https://github.com/google-research/vision_transformer",
"description": "The **Vision Transformer**, or **ViT**, is a model for image classification that employs a [Transformer](https://paperswithcode.com/method/transformer)-like architecture over patches of the image. An image is split into fixed-size patches, each of them are then linearly embedded, position embeddings are added, and the resulting sequence of vectors is fed to a standard [Transformer](https://paperswithcode.com/method/transformer) encoder. In order to perform classification, the standard approach of adding an extra learnable “classification token” to the sequence is used.",
"full_name": "Vision Transformer",
"introduced_year": 2000,
"main_collection": {
"area": "Computer Vision",
"description": "**Image Models** are methods that build representations of images for downstream tasks such as classification and object detection. The most popular subcategory are convolutional neural networks. Below you can find a continuously updated list of image models.",
"name": "Image Models",
"parent": null
},
"name": "Vision Transformer",
"source_title": "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale",
"source_url": "https://arxiv.org/abs/2010.11929v2"
},
{
"code_snippet_url": "",
"description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)",
"full_name": "Label Smoothing",
"introduced_year": 1985,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Label Smoothing",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.",
"full_name": "Byte Pair Encoding",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "BPE",
"source_title": "Neural Machine Translation of Rare Words with Subword Units",
"source_url": "http://arxiv.org/abs/1508.07909v5"
},
{
"code_snippet_url": "",
"description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)",
"full_name": "Absolute Position Encodings",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Position Embeddings",
"parent": null
},
"name": "Absolute Position Encodings",
"source_title": "Attention Is All You Need",
"source_url": "https://arxiv.org/abs/1706.03762v7"
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201",
"description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).",
"full_name": "Transformer",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Transformer",
"source_title": "Attention Is All You Need",
"source_url": "https://arxiv.org/abs/1706.03762v7"
}
] |
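The Layer Normalization entry above gives per-layer statistics $\mu^{l}$ and $\sigma^{l}$ over the $H$ hidden units; a minimal pure-Python sketch of that computation (illustrative only, not taken from any paper's code) is:

```python
import math

def layer_norm(activations, eps=1e-5):
    """Normalize one layer's activations to zero mean and unit variance."""
    h = len(activations)
    mu = sum(activations) / h                       # mean over hidden units
    var = sum((a - mu) ** 2 for a in activations) / h  # biased variance
    return [(a - mu) / math.sqrt(var + eps) for a in activations]
```

Unlike batch normalization, the statistics here depend only on the single example's hidden vector, so the operation works identically at batch size 1.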
https://paperswithcode.com/paper/supercm-revisiting-clustering-for-semi
|
2506.23824
| null | null |
SuperCM: Revisiting Clustering for Semi-Supervised Learning
|
The development of semi-supervised learning (SSL) has in recent years largely focused on the development of new consistency regularization or entropy minimization approaches, often resulting in models with complex training strategies to obtain the desired results. In this work, we instead propose a novel approach that explicitly incorporates the underlying clustering assumption in SSL through extending a recently proposed differentiable clustering module. Leveraging annotated data to guide the cluster centroids results in a simple end-to-end trainable deep SSL approach. We demonstrate that the proposed model improves the performance over the supervised-only baseline and show that our framework can be used in conjunction with other SSL methods to further boost their performance.
| null |
https://arxiv.org/abs/2506.23824v1
|
https://arxiv.org/pdf/2506.23824v1.pdf
| null |
[
"Durgesh Singh",
"Ahcene Boubekki",
"Robert Jenssen",
"Michael C. Kampffmeyer"
] |
[
"Clustering"
] | 2025-06-30T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/describe-anything-model-for-visual-question
|
2507.12441
| null | null |
Describe Anything Model for Visual Question Answering on Text-rich Images
|
Recent progress has been made in region-aware vision-language modeling, particularly with the emergence of the Describe Anything Model (DAM). DAM is capable of generating detailed descriptions of any specific image areas or objects without the need for additional localized image-text alignment supervision. We hypothesize that such region-level descriptive capability is beneficial for the task of Visual Question Answering (VQA), especially in challenging scenarios involving images with dense text. In such settings, the fine-grained extraction of textual information is crucial to producing correct answers. Motivated by this, we introduce DAM-QA, a framework with a tailored evaluation protocol, developed to investigate and harness the region-aware capabilities from DAM for the text-rich VQA problem that requires reasoning over text-based information within images. DAM-QA incorporates a mechanism that aggregates answers from multiple regional views of image content, enabling more effective identification of evidence that may be tied to text-related elements. Experiments on six VQA benchmarks show that our approach consistently outperforms the baseline DAM, with a notable 7+ point gain on DocVQA. DAM-QA also achieves the best overall performance among region-aware models with fewer parameters, significantly narrowing the gap with strong generalist VLMs. These results highlight the potential of DAM-like models for text-rich and broader VQA tasks when paired with efficient usage and integration strategies. Our code is publicly available at https://github.com/Linvyl/DAM-QA.git.
|
Recent progress has been made in region-aware vision-language modeling, particularly with the emergence of the Describe Anything Model (DAM).
|
https://arxiv.org/abs/2507.12441v1
|
https://arxiv.org/pdf/2507.12441v1.pdf
| null |
[
"Yen-Linh Vu",
"Dinh-Thang Duong",
"Truong-Binh Duong",
"Anh-Khoi Nguyen",
"Thanh-Huy Nguyen",
"Le Thien Phuc Nguyen",
"Jianhua Xing",
"Xingjian Li",
"Tianyang Wang",
"Ulas Bagci",
"Min Xu"
] |
[
"Descriptive",
"Language Modeling",
"Language Modelling",
"Question Answering",
"Visual Question Answering",
"Visual Question Answering (VQA)"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/sim-net-a-multimodal-fusion-network-using
|
2506.18683
| null | null |
SIM-Net: A Multimodal Fusion Network Using Inferred 3D Object Shape Point Clouds from RGB Images for 2D Classification
|
We introduce the Shape-Image Multimodal Network (SIM-Net), a novel 2D image classification architecture that integrates 3D point cloud representations inferred directly from RGB images. Our key contribution lies in a pixel-to-point transformation that converts 2D object masks into 3D point clouds, enabling the fusion of texture-based and geometric features for enhanced classification performance. SIM-Net is particularly well-suited for the classification of digitized herbarium specimens (a task made challenging by heterogeneous backgrounds), non-plant elements, and occlusions that compromise conventional image-based models. To address these issues, SIM-Net employs a segmentation-based preprocessing step to extract object masks prior to 3D point cloud generation. The architecture comprises a CNN encoder for 2D image features and a PointNet-based encoder for geometric features, which are fused into a unified latent space. Experimental evaluations on herbarium datasets demonstrate that SIM-Net consistently outperforms ResNet101, achieving gains of up to 9.9% in accuracy and 12.3% in F-score. It also surpasses several transformer-based state-of-the-art architectures, highlighting the benefits of incorporating 3D structural reasoning into 2D image classification tasks.
| null |
https://arxiv.org/abs/2506.18683v1
|
https://arxiv.org/pdf/2506.18683v1.pdf
| null |
[
"Youcef Sklab",
"Hanane Ariouat",
"Eric Chenin",
"Edi Prifti",
"Jean-Daniel Zucker"
] |
[
"Classification",
"image-classification",
"Image Classification",
"Point Cloud Generation"
] | 2025-06-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/task-specific-audio-coding-for-machines
|
2507.12701
| null | null |
Task-Specific Audio Coding for Machines: Machine-Learned Latent Features Are Codes for That Machine
|
Neural audio codecs, leveraging quantization algorithms, have significantly impacted various speech/audio tasks. While high-fidelity reconstruction is paramount for human perception, audio coding for machines (ACoM) prioritizes efficient compression and downstream task performance, disregarding perceptual nuances. This work introduces an efficient ACoM method that can compress and quantize any chosen intermediate feature representation of an already trained speech/audio downstream model. Our approach employs task-specific loss guidance alongside residual vector quantization (RVQ) losses, providing ultra-low bitrates (i.e., less than 200 bps) with a minimal loss of the downstream model performance. The resulting tokenizer is adaptable to various bitrates and model sizes for flexible deployment. Evaluated on automatic speech recognition and audio classification, our method demonstrates its efficacy and potential for broader task and architectural applicability through appropriate regularization.
| null |
https://arxiv.org/abs/2507.12701v1
|
https://arxiv.org/pdf/2507.12701v1.pdf
| null |
[
"Anastasia Kuznetsova",
"Inseon Jang",
"Wootaek Lim",
"Minje Kim"
] |
[
"Audio Classification",
"Automatic Speech Recognition",
"Quantization",
"speech-recognition",
"Speech Recognition"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/wisva-generative-ai-for-5g-network
|
2506.22456
| null | null |
WISVA: Generative AI for 5G Network Optimization in Smart Warehouses
|
The next decade will usher in a profound transformation of wireless communication, driven by the ever-increasing demand for data-intensive applications and the rapid adoption of emerging technologies. To fully unlock the potential of 5G and beyond, substantial advancements are required in signal processing techniques, innovative network architectures, and efficient spectrum utilization strategies. These advancements facilitate seamless integration of emerging technologies, driving industrial digital transformation and connectivity. This paper introduces a novel Variational Autoencoder (VAE)-based framework, Wireless Infrastructure for Smart Warehouses using VAE (WISVA), designed for accurate indoor radio propagation modeling in automated Industry 4.0 environments such as warehouses and factory floors operating within 5G wireless bands. The research delves into the meticulous creation of training data tensors, capturing complex electromagnetic (EM) wave behaviors influenced by diverse obstacles, and outlines the architecture and training methodology of the proposed VAE model. The model's robustness and adaptability are showcased through its ability to predict signal-to-interference-plus-noise ratio (SINR) heatmaps across various scenarios, including denoising tasks, validation datasets, extrapolation to unseen configurations, and previously unencountered warehouse layouts. Compelling reconstruction error heatmaps are presented, highlighting the superior accuracy of WISVA compared to traditional autoencoder models. The paper also analyzes the model's performance in handling complex smart warehouse environments, demonstrating its potential as a key enabler for optimizing wireless infrastructure in Industry 4.0.
| null |
https://arxiv.org/abs/2506.22456v1
|
https://arxiv.org/pdf/2506.22456v1.pdf
| null |
[
"Rahul Gulia",
"Amlan Ganguly",
"Andres Kwasinski",
"Michael E. Kuhl",
"Ehsan Rashedi",
"Clark Hochgraf"
] |
[
"Denoising"
] | 2025-06-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/semcse-semantic-contrastive-sentence
|
2507.13105
| null | null |
SemCSE: Semantic Contrastive Sentence Embeddings Using LLM-Generated Summaries For Scientific Abstracts
|
We introduce SemCSE, an unsupervised method for learning semantic embeddings of scientific texts. Building on recent advances in contrastive learning for text embeddings, our approach leverages LLM-generated summaries of scientific abstracts to train a model that positions semantically related summaries closer together in the embedding space. This resulting objective ensures that the model captures the true semantic content of a text, in contrast to traditional citation-based approaches that do not necessarily reflect semantic similarity. To validate this, we propose a novel benchmark designed to assess a model's ability to understand and encode the semantic content of scientific texts, demonstrating that our method enforces a stronger semantic separation within the embedding space. Additionally, we evaluate SemCSE on the comprehensive SciRepEval benchmark for scientific text embeddings, where it achieves state-of-the-art performance among models of its size, thus highlighting the benefits of a semantically focused training approach.
| null |
https://arxiv.org/abs/2507.13105v1
|
https://arxiv.org/pdf/2507.13105v1.pdf
| null |
[
"Marc Brinner",
"Sina Zarriess"
] |
[
"Contrastive Learning",
"Semantic Similarity",
"Semantic Textual Similarity",
"Sentence",
"Sentence Embeddings"
] | 2025-07-17T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": null,
"introduced_year": 2000,
"main_collection": {
"area": "Graphs",
"description": "",
"name": "Graph Representation Learning",
"parent": null
},
"name": "Contrastive Learning",
"source_title": null,
"source_url": null
}
] |
https://paperswithcode.com/paper/magic-evaluating-multimodal-cognition-toward
|
2507.07297
| null | null |
MagiC: Evaluating Multimodal Cognition Toward Grounded Visual Reasoning
|
Recent advances in large vision-language models have led to impressive performance in visual question answering and multimodal reasoning. However, it remains unclear whether these models genuinely perform grounded visual reasoning or rely on superficial patterns and dataset biases. In this work, we introduce MagiC, a comprehensive benchmark designed to evaluate grounded multimodal cognition, assessing not only answer accuracy but also the quality of step-by-step reasoning and its alignment with relevant visual evidence. Our benchmark includes approximately 5,500 weakly supervised QA examples generated from strong model outputs and 900 human-curated examples with fine-grained annotations, including answers, rationales, and bounding box groundings. We evaluate 15 vision-language models ranging from 7B to 70B parameters across four dimensions: final answer correctness, reasoning validity, grounding fidelity, and self-correction ability. MagiC further includes diagnostic settings to probe model robustness under adversarial visual cues and assess their capacity for introspective error correction. We introduce new metrics such as MagiScore and StepSense, and provide comprehensive analyses that reveal key limitations and opportunities in current approaches to grounded visual reasoning.
| null |
https://arxiv.org/abs/2507.07297v1
|
https://arxiv.org/pdf/2507.07297v1.pdf
| null |
[
"Chengfei Wu",
"Ronald Seoh",
"Bingxuan Li",
"Liqiang Zhang",
"Fengrong Han",
"Dan Goldwasser"
] |
[
"Diagnostic",
"Multimodal Reasoning",
"Question Answering",
"Visual Question Answering",
"Visual Reasoning"
] | 2025-07-09T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/instructflip-exploring-unified-vision
|
2507.12060
| null | null |
InstructFLIP: Exploring Unified Vision-Language Model for Face Anti-spoofing
|
Face anti-spoofing (FAS) aims to construct a robust system that can withstand diverse attacks. While recent efforts have concentrated mainly on cross-domain generalization, two significant challenges persist: limited semantic understanding of attack types and training redundancy across domains. We address the first by integrating vision-language models (VLMs) to enhance the perception of visual input. For the second challenge, we employ a meta-domain strategy to learn a unified model that generalizes well across multiple domains. Our proposed InstructFLIP is a novel instruction-tuned framework that leverages VLMs to enhance generalization via textual guidance trained solely on a single domain. At its core, InstructFLIP explicitly decouples instructions into content and style components, where content-based instructions focus on the essential semantics of spoofing, and style-based instructions consider variations related to the environment and camera characteristics. Extensive experiments demonstrate the effectiveness of InstructFLIP by outperforming SOTA models in accuracy and substantially reducing training redundancy across diverse domains in FAS. Project website is available at https://kunkunlin1221.github.io/InstructFLIP.
|
Extensive experiments demonstrate the effectiveness of InstructFLIP by outperforming SOTA models in accuracy and substantially reducing training redundancy across diverse domains in FAS.
|
https://arxiv.org/abs/2507.12060v1
|
https://arxiv.org/pdf/2507.12060v1.pdf
| null |
[
"Kun-Hsiang Lin",
"Yu-Wen Tseng",
"Kang-Yang Huang",
"Jhih-Ciang Wu",
"Wen-Huang Cheng"
] |
[
"Domain Generalization",
"Face Anti-Spoofing",
"Language Modeling",
"Language Modelling"
] | 2025-07-16T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": null,
"description": "",
"full_name": "Focus",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Focus",
"source_title": "Focus Your Attention (with Adaptive IIR Filters)",
"source_url": "https://arxiv.org/abs/2305.14952v2"
}
] |
https://paperswithcode.com/paper/best-practices-for-large-scale-pixel-wise
|
2507.12590
| null | null |
Best Practices for Large-Scale, Pixel-Wise Crop Mapping and Transfer Learning Workflows
|
Crop mapping involves identifying and classifying crop types using spatial data, primarily derived from remote sensing imagery. This study presents the first comprehensive review of large-scale, pixel-wise crop mapping workflows, encompassing both conventional supervised methods and emerging transfer learning approaches. To identify the optimal supervised crop mapping workflows, we conducted systematic experiments, comparing six widely adopted satellite image-based preprocessing methods, alongside eleven supervised pixel-wise classification models. Additionally, we assessed the synergistic impact of varied training sample sizes and variable combinations. Moreover, we identified optimal transfer learning techniques for different magnitudes of domain shift. The evaluation of best methods was conducted across five diverse agricultural sites. Landsat 8 served as the primary satellite data source. Labels come from CDL trusted pixels and field surveys. Our findings reveal three key insights. First, fine-scale interval preprocessing paired with Transformer models consistently delivered optimal performance for both supervised and transferable workflows. RF offered rapid training and competitive performance in conventional supervised learning and direct transfer to similar domains. Second, transfer learning techniques enhanced workflow adaptability, with UDA being effective for homogeneous crop classes while fine-tuning remains robust across diverse scenarios. Finally, workflow choice depends heavily on the availability of labeled samples. With a sufficient sample size, supervised training typically delivers more accurate and generalizable results. Below a certain threshold, transfer learning that matches the level of domain shift is a viable alternative to achieve crop mapping. Repository: Best-Practices-for-Large-Scale-Pixel-Wise-Crop-Mapping-and-Transfer-Learning-Workflows
|
This study presents the first comprehensive review of large-scale, pixel-wise crop mapping workflows, encompassing both conventional supervised methods and emerging transfer learning approaches.
|
https://arxiv.org/abs/2507.12590v1
|
https://arxiv.org/pdf/2507.12590v1.pdf
| null |
[
"Judy Long",
"Tao Liu",
"Sean Alexander Woznicki",
"Miljana Marković",
"Oskar Marko",
"Molly Sears"
] |
[
"Transfer Learning"
] | 2025-07-16T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "",
"description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)",
"full_name": "Label Smoothing",
"introduced_year": 1985,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Label Smoothing",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.",
"full_name": "Byte Pair Encoding",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "BPE",
"source_title": "Neural Machine Translation of Rare Words with Subword Units",
"source_url": "http://arxiv.org/abs/1508.07909v5"
},
{
"code_snippet_url": "",
"description": "**Absolute Position Encodings** are a type of position embeddings for [Transformer](https://paperswithcode.com/method/transformer)-based models where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\cdot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)",
"full_name": "Absolute Position Encodings",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Position Embeddings",
"parent": null
},
"name": "Absolute Position Encodings",
"source_title": "Attention Is All You Need",
"source_url": "https://arxiv.org/abs/1706.03762v7"
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}w_{k}}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201",
"description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).",
"full_name": "Transformer",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Transformer",
"source_title": "Attention Is All You Need",
"source_url": "https://arxiv.org/abs/1706.03762v7"
}
] |
https://paperswithcode.com/paper/dvfl-net-a-lightweight-distilled-video-focal
|
2507.12426
| null | null |
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition
|
The landscape of video recognition has evolved significantly, shifting from traditional Convolutional Neural Networks (CNNs) to Transformer-based architectures for improved accuracy. While 3D CNNs have been effective at capturing spatiotemporal dynamics, recent Transformer models leverage self-attention to model long-range spatial and temporal dependencies. Despite achieving state-of-the-art performance on major benchmarks, Transformers remain computationally expensive, particularly with dense video data. To address this, we propose a lightweight Video Focal Modulation Network, DVFL-Net, which distills spatiotemporal knowledge from a large pre-trained teacher into a compact nano student model, enabling efficient on-device deployment. DVFL-Net utilizes knowledge distillation and spatial-temporal feature modulation to significantly reduce computation while preserving high recognition performance. We employ forward Kullback-Leibler (KL) divergence alongside spatio-temporal focal modulation to effectively transfer both local and global context from the Video-FocalNet Base (teacher) to the proposed VFL-Net (student). We evaluate DVFL-Net on UCF50, UCF101, HMDB51, SSV2, and Kinetics-400, benchmarking it against recent state-of-the-art methods in Human Action Recognition (HAR). Additionally, we conduct a detailed ablation study analyzing the impact of forward KL divergence. The results confirm the superiority of DVFL-Net in achieving an optimal balance between performance and efficiency, demonstrating lower memory usage, reduced GFLOPs, and strong accuracy, making it a practical solution for real-time HAR applications.
|
We employ forward Kullback-Leibler (KL) divergence alongside spatio-temporal focal modulation to effectively transfer both local and global context from the Video-FocalNet Base (teacher) to the proposed VFL-Net (student).
|
https://arxiv.org/abs/2507.12426v1
|
https://arxiv.org/pdf/2507.12426v1.pdf
| null |
[
"Hayat Ullah",
"Muhammad Ali Shafique",
"Abbas Khan",
"Arslan Munir"
] |
[
"Benchmarking",
"Knowledge Distillation",
"Spatio-temporal Action Recognition",
"Temporal Action Localization",
"Video Focal Modulation",
"Video Recognition"
] | 2025-07-16T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275",
"description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.",
"full_name": "Dropout",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Dropout",
"source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"source_url": "http://jmlr.org/papers/v15/srivastava14a.html"
},
{
"code_snippet_url": "https://research.google/blog/auto-generated-summaries-in-google-docs/",
"description": "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.\r\nSource: [Distilling the Knowledge in a Neural Network](https://arxiv.org/abs/1503.02531)",
"full_name": "Knowledge Distillation",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Knowledge Distillation",
"parent": null
},
"name": "Knowledge Distillation",
"source_title": "Distilling the Knowledge in a Neural Network",
"source_url": "http://arxiv.org/abs/1503.02531v1"
},
{
"code_snippet_url": "",
"description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)",
"full_name": "Label Smoothing",
"introduced_year": 1985,
"main_collection": {
"area": "General",
"description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.",
"name": "Regularization",
"parent": null
},
"name": "Label Smoothing",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.",
"full_name": "Byte Pair Encoding",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "",
"name": "Subword Segmentation",
"parent": null
},
"name": "BPE",
"source_title": "Neural Machine Translation of Rare Words with Subword Units",
"source_url": "http://arxiv.org/abs/1508.07909v5"
},
{
"code_snippet_url": "",
"description": "**Absolute Position Encodings** are a type of position embeddings for [Transformer](https://paperswithcode.com/method/transformer)-based models where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\cdot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)",
"full_name": "Absolute Position Encodings",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Position Embeddings",
"parent": null
},
"name": "Absolute Position Encodings",
"source_title": "Attention Is All You Need",
"source_url": "https://arxiv.org/abs/1706.03762v7"
},
{
"code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8",
"description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.",
"full_name": "Layer Normalization",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.",
"name": "Normalization",
"parent": null
},
"name": "Layer Normalization",
"source_title": "Layer Normalization",
"source_url": "http://arxiv.org/abs/1607.06450v1"
},
{
"code_snippet_url": null,
"description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville",
"full_name": "Dense Connections",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.",
"name": "Feedforward Networks",
"parent": null
},
"name": "Dense Connections",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}w_{k}}} $$",
"full_name": "Softmax",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.",
"name": "Output Functions",
"parent": null
},
"name": "Softmax",
"source_title": null,
"source_url": null
},
{
"code_snippet_url": null,
"description": "",
"full_name": "Balanced Selection",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Active Learning",
"parent": null
},
"name": "BASE",
"source_title": "Active Learning at the ImageNet Scale",
"source_url": "https://arxiv.org/abs/2111.12880v1"
},
{
"code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201",
"description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).",
"full_name": "Transformer",
"introduced_year": 2000,
"main_collection": {
"area": "Natural Language Processing",
"description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.",
"name": "Transformers",
"parent": "Language Models"
},
"name": "Transformer",
"source_title": "Attention Is All You Need",
"source_url": "https://arxiv.org/abs/1706.03762v7"
}
] |
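The softmax formula in the method entries above can be illustrated with a few lines of self-contained Python (a minimal sketch, independent of any framework):

```python
import math

def softmax(logits):
    # Subtracting the max before exponentiating is a standard trick for
    # numerical stability; it does not change the resulting probabilities.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])
print(probs)  # probabilities sum to 1; the largest logit gets the largest share
```

Here each `logits[j]` plays the role of $x^{T}w_{j}$ in the formula above.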
https://paperswithcode.com/paper/iterative-distillation-for-reward-guided-fine
|
2507.00445
| null | null |
Iterative Distillation for Reward-Guided Fine-Tuning of Diffusion Models in Biomolecular Design
|
We address the problem of fine-tuning diffusion models for reward-guided generation in biomolecular design. While diffusion models have proven highly effective in modeling complex, high-dimensional data distributions, real-world applications often demand more than high-fidelity generation, requiring optimization with respect to potentially non-differentiable reward functions such as physics-based simulation or rewards based on scientific knowledge. Although RL methods have been explored to fine-tune diffusion models for such objectives, they often suffer from instability, low sample efficiency, and mode collapse due to their on-policy nature. In this work, we propose an iterative distillation-based fine-tuning framework that enables diffusion models to optimize for arbitrary reward functions. Our method casts the problem as policy distillation: it collects off-policy data during the roll-in phase, simulates reward-based soft-optimal policies during roll-out, and updates the model by minimizing the KL divergence between the simulated soft-optimal policy and the current model policy. Our off-policy formulation, combined with KL divergence minimization, enhances training stability and sample efficiency compared to existing RL-based methods. Empirical results demonstrate the effectiveness and superior reward optimization of our approach across diverse tasks in protein, small molecule, and regulatory DNA design.
| null |
https://arxiv.org/abs/2507.00445v1
|
https://arxiv.org/pdf/2507.00445v1.pdf
| null |
[
"Xingyu Su",
"Xiner Li",
"Masatoshi Uehara",
"Sunwoo Kim",
"Yulai Zhao",
"Gabriele Scalia",
"Ehsan Hajiramezanali",
"Tommaso Biancalani",
"Degui Zhi",
"Shuiwang Ji"
] |
[] | 2025-07-01T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/domain-adaptive-small-language-models-for
|
2507.10880
| null | null |
Domain-Adaptive Small Language Models for Structured Tax Code Prediction
|
Every day, multinational firms process thousands of transactions, each of which must adhere to tax regulations that vary by jurisdiction and are often nuanced. The determination of product and service tax codes, such as HSN or SAC, is a major use case in tax compliance. An accurate determination of such codes is imperative to avoid any tax penalties. This paper proposes a domain-adaptive small language model (SLM) with an encoder-decoder architecture for the enhanced prediction of product and service tax codes. In this approach, we address the problem of predicting hierarchical tax code sequences using unstructured product and services data. We employ an SLM based upon an encoder-decoder architecture as this enables sequential generation of tax codes to capture the hierarchical dependencies present within the tax codes. Our experiments demonstrate that encoder-decoder SLMs can be successfully applied to the sequential prediction of structured tax codes, a domain that remains comparatively unexplored in current NLP research. In this paper, we demonstrate the superior performance of the domain-adaptive encoder-decoder SLMs over flat classifiers when applied to the Harmonized System of Nomenclature (HSN), and achieve superior results compared to decoder-only and encoder-only architectures for structured sequence generation tasks. This approach can also be scaled to other government-mandated tax commodity codes, such as United Nations Standard Products and Services Codes (UNSPSC), or Brazil's Nomenclatura Comum do Mercosul (NCM).
| null |
https://arxiv.org/abs/2507.10880v1
|
https://arxiv.org/pdf/2507.10880v1.pdf
| null |
[
"Souvik Nath",
"Sumit Wadhwa",
"Luiz Perez"
] |
[
"Decoder",
"Small Language Model"
] | 2025-07-15T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/adamuon-adaptive-muon-optimizer
|
2507.11005
| null | null |
AdaMuon: Adaptive Muon Optimizer
|
We propose AdaMuon, an adaptive learning-rate framework built upon the recently validated Muon optimizer, which has demonstrated substantial efficiency gains over AdamW in large-scale model training. AdaMuon augments Muon with two mutually dependent modules: (1) a per-parameter second-moment modulation that captures orthogonal gradient updates to ensure update-level adaptivity, and (2) a RMS-aligned rescaling that regulates the overall update magnitude by aligning it with the intrinsic structure of the parameter space. Empirical results on multiple model scales and learning-rate regimes confirm that AdaMuon consistently outperforms the original Muon, delivering higher acceleration in convergence while maintaining training stability. Our method introduces no additional tuning burden and can be seamlessly integrated into existing Muon training pipelines.
|
We propose AdaMuon, an adaptive learning-rate framework built upon the recently validated Muon optimizer, which has demonstrated substantial efficiency gains over AdamW in large-scale model training.
|
https://arxiv.org/abs/2507.11005v1
|
https://arxiv.org/pdf/2507.11005v1.pdf
| null |
[
"Chongjie Si",
"Debing Zhang",
"Wei Shen"
] |
[] | 2025-07-15T00:00:00 | null | null | null | null |
[] |
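The AdaMuon abstract above describes two add-ons to Muon: a per-parameter second-moment modulation of the orthogonalized update and an RMS-aligned rescaling. A hedged sketch of what such an update could look like (the orthogonalization step follows the Newton-Schulz iteration commonly associated with Muon; the paper's exact formulas and constants may differ, and all names here are illustrative):

```python
import numpy as np

def newton_schulz_orthogonalize(G, steps=50):
    # Approximate the orthogonal polar factor of G, the core of Muon.
    # Frobenius normalization keeps singular values in (0, 1], so the
    # cubic iteration drives each of them toward 1.
    X = G / (np.linalg.norm(G) + 1e-8)
    for _ in range(steps):
        X = 1.5 * X - 0.5 * X @ X.T @ X
    return X

def adamuon_style_update(G, v, lr=1e-3, beta2=0.999, eps=1e-8):
    # Hypothetical AdaMuon-style step: orthogonalize the gradient, track a
    # second moment of the orthogonal update, then rescale to unit RMS.
    O = newton_schulz_orthogonalize(G)
    v = beta2 * v + (1 - beta2) * O**2          # per-parameter second moment
    U = O / (np.sqrt(v) + eps)
    U *= np.sqrt(U.size) / (np.linalg.norm(U) + eps)  # RMS-aligned rescaling
    return -lr * U, v

rng = np.random.default_rng(0)
G = rng.normal(size=(32, 32))                   # a gradient for one weight matrix
step, v = adamuon_style_update(G, np.zeros_like(G))
print(step.shape)  # (32, 32)
```

This is a sketch under stated assumptions, not a reproduction of the paper's method.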
https://paperswithcode.com/paper/learning-what-matters-probabilistic-task
|
2507.12612
| null | null |
Learning What Matters: Probabilistic Task Selection via Mutual Information for Model Finetuning
|
The performance of finetuned large language models (LLMs) hinges critically on the composition of the training mixture. However, selecting an optimal blend of task datasets remains a largely manual, heuristic-driven process, with practitioners often relying on uniform or size-based sampling strategies. We introduce TASKPGM, a principled and scalable framework for mixture optimization that selects continuous task proportions by minimizing an energy function over a Markov Random Field (MRF). Task relationships are modeled using behavioral divergences such as Jensen-Shannon Divergence and Pointwise Mutual Information computed from the predictive distributions of single-task finetuned models. Our method yields a closed-form solution under simplex constraints and provably balances representativeness and diversity among tasks. We provide theoretical guarantees, including weak submodularity for budgeted variants, and demonstrate consistent empirical improvements on Llama 2 and Mistral across evaluation suites such as MMLU and BIGBench. Beyond performance, TASKPGM offers interpretable insights into task influence and mixture composition, making it a powerful tool for efficient and robust LLM finetuning.
| null |
https://arxiv.org/abs/2507.12612v1
|
https://arxiv.org/pdf/2507.12612v1.pdf
| null |
[
"Prateek Chanda",
"Saral Sureka",
"Parth Pratim Chatterjee",
"KrishnaTeja Killamsetty",
"Nikhil Shivakumar Nayak",
"Ganesh Ramakrishnan"
] |
[
"Diversity",
"MMLU"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/kodezi-chronos-a-debugging-first-language
|
2507.12482
| null | null |
Kodezi Chronos: A Debugging-First Language Model for Repository-Scale, Memory-Driven Code Understanding
|
Large Language Models (LLMs) have advanced code generation and software automation, but are fundamentally constrained by limited inference-time context and lack of explicit code structure reasoning. We introduce Kodezi Chronos, a next-generation architecture for autonomous code understanding, debugging, and maintenance, designed to operate across ultra-long contexts comprising entire codebases, histories, and documentation, all without fixed window limits. Kodezi Chronos leverages a multi-level embedding memory engine, combining vector and graph-based indexing with continuous code-aware retrieval. This enables efficient and accurate reasoning over millions of lines of code, supporting repository-scale comprehension, multi-file refactoring, and real-time self-healing actions. Our evaluation introduces a novel Multi Random Retrieval benchmark, specifically tailored to the software engineering domain. Unlike classical retrieval benchmarks, this method requires the model to resolve arbitrarily distant and obfuscated associations across code artifacts, simulating realistic tasks such as variable tracing, dependency migration, and semantic bug localization. Chronos outperforms prior LLMs and code models, demonstrating a 23% improvement in real-world bug detection and reducing debugging cycles by up to 40% compared to traditional sequence-based approaches. By natively interfacing with IDEs and CI/CD workflows, Chronos enables seamless, autonomous software maintenance, elevating code reliability and productivity while reducing manual effort. These results mark a critical advance toward self-sustaining, continuously optimized software ecosystems.
|
Large Language Models (LLMs) have advanced code generation and software automation, but are fundamentally constrained by limited inference-time context and lack of explicit code structure reasoning.
|
https://arxiv.org/abs/2507.12482v1
|
https://arxiv.org/pdf/2507.12482v1.pdf
| null |
[
"Ishraq Khan",
"Assad Chowdary",
"Sharoz Haseeb",
"Urvish Patel"
] |
[
"Code Generation",
"Language Modeling",
"Language Modelling",
"Retrieval"
] | 2025-07-14T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-lightweight-u-net-model-for-accurate-skin
| null | null | null |
A Lightweight U-Net Model for Accurate Skin Lesion Segmentation
|
In this paper, a new lightweight U-Net deep learning-based neural network designed for the segmentation of skin lesions is proposed. Segmentation of skin lesions is the most critical step in computer-aided dermatology diagnosis for the early detection of melanoma and other diseases. However, we address the difficulty of precisely delineating lesion margins while keeping the computational cost low. We demonstrate the state-of-the-art performance of DeepSkinSeg on most metrics for dermoscopic images from the PH2 and Human Against Machine (HAM10000) datasets. On the PH2 dataset, the DeepSkinSeg model achieved an Intersection over Union (IoU) of 91.49, a Dice coefficient of 95.56, a precision of 97.97, a sensitivity of 96.84, and an accuracy of 96.71. On the HAM10000 dataset, it generalized to an IoU of 92.97, a Dice coefficient of 96.36, a precision of 97.64, a sensitivity of 95.10, and an accuracy of 94.59. Because the model itself is lightweight, DeepSkinSeg offers very efficient inference, making it well suited to real-time dermatological analysis. This work further advances computer-aided diagnosis for skin lesion segmentation and classification, supporting promising clinical applications.
| null |
https://ijcsm.researchcommons.org/ijcsm/vol6/iss2/1/
|
https://ijcsm.researchcommons.org/ijcsm/vol6/iss2/1/
|
Iraqi Journal for Computer Science and Mathematics 2025 4
|
[
"Fallah H. Najjar",
"Karrar A. Kadhim",
"Farhan Mohamed",
"Mohd Shafry Mohd Rahim",
"Asniyani Nur Haidar Abdullah"
] |
[
"Lesion Classification",
"Lesion Segmentation",
"Sensitivity",
"Skin Lesion Classification",
"Skin Lesion Segmentation"
] | 2025-04-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/dynamic-parameter-memory-temporary-lora
|
2507.09076
| null | null |
Dynamic Parameter Memory: Temporary LoRA-Enhanced LLM for Long-Sequence Emotion Recognition in Conversation
|
Recent research has focused on applying speech large language models (SLLMs) to improve speech emotion recognition (SER). However, the inherently high frame rate in the speech modality severely limits the signal processing and understanding capabilities of SLLMs. For example, an SLLM with a 4K context window can only process 80 seconds of audio at a 50Hz feature sampling rate before reaching its capacity limit. Input token compression methods used in SLLMs overlook the continuity and inertia of emotions across multiple conversation turns. This paper proposes a Dynamic Parameter Memory (DPM) mechanism with contextual semantics and sentence-level emotion encoding, enabling processing of unlimited-length audio with limited context windows in SLLMs. Specifically, DPM progressively encodes sentence-level information and emotions into a temporary LoRA module during inference to effectively "memorize" the contextual information. We trained an emotion SLLM as a backbone and incorporated our DPM into inference for emotion recognition in conversation (ERC). Experimental results on the IEMOCAP dataset show that DPM significantly improves the emotion recognition capabilities of SLLMs when processing long audio sequences, achieving state-of-the-art performance.
|
Recent research has focused on applying speech large language models (SLLMs) to improve speech emotion recognition (SER).
|
https://arxiv.org/abs/2507.09076v1
|
https://arxiv.org/pdf/2507.09076v1.pdf
| null |
[
"Jialong Mai",
"Xiaofen Xing",
"Yawei Li",
"Zhipeng Li",
"Jingyuan Xing",
"Xiangmin Xu"
] |
[
"4k",
"Emotion Recognition",
"Emotion Recognition in Conversation",
"Large Language Model",
"Sentence",
"Speech Emotion Recognition"
] | 2025-07-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/tri-learn-graph-fusion-network-for-attributed
|
2507.13620
| null | null |
Tri-Learn Graph Fusion Network for Attributed Graph Clustering
|
In recent years, models based on Graph Convolutional Networks (GCN) have made significant strides in the field of graph data analysis. However, challenges such as over-smoothing and over-compression remain when handling large-scale and complex graph datasets, leading to a decline in clustering quality. Although the Graph Transformer architecture has mitigated some of these issues, its performance is still limited when processing heterogeneous graph data. To address these challenges, this study proposes a novel deep clustering framework comprising GCN, Autoencoder (AE), and Graph Transformer, termed the Tri-Learn Graph Fusion Network (Tri-GFN). This framework enhances the differentiation and consistency of global and local information through a unique tri-learning mechanism and feature fusion enhancement strategy. The framework integrates GCN, AE, and Graph Transformer modules. These components are meticulously fused by a triple-channel enhancement module, which maximizes the use of both node attributes and topological structures, ensuring robust clustering representation. The tri-learning mechanism allows mutual learning among these modules, while the feature fusion strategy enables the model to capture complex relationships, yielding highly discriminative representations for graph clustering. It surpasses many state-of-the-art methods, achieving an accuracy improvement of approximately 0.87% on the ACM dataset, 14.14% on the Reuters dataset, and 7.58% on the USPS dataset. Due to its outstanding performance on the Reuters dataset, Tri-GFN can be applied to automatic news classification, topic retrieval, and related fields.
|
To address these challenges, this study proposes a novel deep clustering framework comprising GCN, Autoencoder (AE), and Graph Transformer, termed the Tri-Learn Graph Fusion Network (Tri-GFN).
|
https://arxiv.org/abs/2507.13620v1
|
https://arxiv.org/pdf/2507.13620v1.pdf
| null |
[
"Binxiong Li",
"Yuefei Wang",
"Xu Xiang",
"Xue Li",
"Binyu Zhao",
"Heyang Gao",
"Qinyu Zhao",
"Xi Yu"
] |
[
"Clustering",
"Deep Clustering",
"Graph Clustering",
"News Classification"
] | 2025-07-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/dense-longitudinal-progress-note-generation
|
2507.14079
| null | null |
DENSE: Longitudinal Progress Note Generation with Temporal Modeling of Heterogeneous Clinical Notes Across Hospital Visits
|
Progress notes are among the most clinically meaningful artifacts in an Electronic Health Record (EHR), offering temporally grounded insights into a patient's evolving condition, treatments, and care decisions. Despite their importance, they are severely underrepresented in large-scale EHR datasets. For instance, in the widely used Medical Information Mart for Intensive Care III (MIMIC-III) dataset, only about $8.56\%$ of hospital visits include progress notes, leaving gaps in longitudinal patient narratives. In contrast, the dataset contains a diverse array of other note types, each capturing different aspects of care. We present DENSE (Documenting Evolving Progress Notes from Scattered Evidence), a system designed to align with clinical documentation workflows by simulating how physicians reference past encounters while drafting progress notes. The system introduces a fine-grained note categorization and a temporal alignment mechanism that organizes heterogeneous notes across visits into structured, chronological inputs. At its core, DENSE leverages a clinically informed retrieval strategy to identify temporally and semantically relevant content from both current and prior visits. This retrieved evidence is used to prompt a large language model (LLM) to generate clinically coherent and temporally aware progress notes. We evaluate DENSE on a curated cohort of patients with multiple visits and complete progress note documentation. The generated notes demonstrate strong longitudinal fidelity, achieving a temporal alignment ratio of $1.089$, surpassing the continuity observed in original notes. By restoring narrative coherence across fragmented documentation, our system supports improved downstream tasks such as summarization, predictive modeling, and clinical decision support, offering a scalable solution for LLM-driven note synthesis in real-world healthcare settings.
| null |
https://arxiv.org/abs/2507.14079v1
|
https://arxiv.org/pdf/2507.14079v1.pdf
| null |
[
"Garapati Keerthana",
"Manik Gupta"
] |
[
"Large Language Model"
] | 2025-07-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/automatic-classification-and-segmentation-of
|
2507.14010
| null | null |
Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations
|
Tunnel lining crack is a crucial indicator of tunnels' safety status. Aiming to classify and segment tunnel cracks with enhanced accuracy and efficiency, this study proposes a two-step deep learning-based method. An automatic tunnel image classification model is developed using the DenseNet-169 in the first step. The proposed crack segmentation model in the second step is based on the DeepLabV3+, whose internal logic is evaluated via a score-weighted visual explanation technique. The proposed method combines tunnel image classification and segmentation, so that the images containing cracks selected in the first step are segmented in the second step to improve detection accuracy and efficiency. The superior performance of the two-step method is validated by experiments. The results show that the accuracy and frames per second (FPS) of the tunnel crack classification model are 92.23% and 39.80, respectively, which are higher than other convolutional neural network (CNN) based and Transformer based models. Also, the intersection over union (IoU) and F1 score of the tunnel crack segmentation model are 57.01% and 67.44%, respectively, outperforming other state-of-the-art models. Moreover, the visual explanations provided in this study are conducive to understanding the "black box" of deep learning-based models. The developed two-stage deep learning-based method integrating visual explanations provides a basis for fast and accurate quantitative assessment of tunnel health status.
| null |
https://arxiv.org/abs/2507.14010v1
|
https://arxiv.org/pdf/2507.14010v1.pdf
| null |
[
"Yong Feng",
"XiaoLei Zhang",
"Shijin Feng",
"Yong Zhao",
"Yihan Chen"
] |
[
"Crack Segmentation",
"Deep Learning",
"image-classification",
"Image Classification"
] | 2025-07-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/leveraging-the-spatial-hierarchy-coarse-to
|
2507.13366
| null | null |
Leveraging the Spatial Hierarchy: Coarse-to-fine Trajectory Generation via Cascaded Hybrid Diffusion
|
Urban mobility data has significant connections with economic growth and plays an essential role in various smart-city applications. However, due to privacy concerns and substantial data collection costs, fine-grained human mobility trajectories are difficult to become publicly available on a large scale. A promising solution to address this issue is trajectory synthesizing. However, existing works often ignore the inherent structural complexity of trajectories, unable to handle complicated high-dimensional distributions and generate realistic fine-grained trajectories. In this paper, we propose Cardiff, a coarse-to-fine Cascaded hybrid diffusion-based trajectory synthesizing framework for fine-grained and privacy-preserving mobility generation. By leveraging the hierarchical nature of urban mobility, Cardiff decomposes the generation process into two distinct levels, i.e., discrete road segment-level and continuous fine-grained GPS-level: (i) In the segment-level, to reduce computational costs and redundancy in raw trajectories, we first encode the discrete road segments into low-dimensional latent embeddings and design a diffusion transformer-based latent denoising network for segment-level trajectory synthesis. (ii) Taking the first stage of generation as conditions, we then design a fine-grained GPS-level conditional denoising network with a noise augmentation mechanism to achieve robust and high-fidelity generation. Additionally, the Cardiff framework not only progressively generates high-fidelity trajectories through cascaded denoising but also flexibly enables a tunable balance between privacy preservation and utility. Experimental results on three large real-world trajectory datasets demonstrate that our method outperforms state-of-the-art baselines in various metrics.
| null |
https://arxiv.org/abs/2507.13366v1
|
https://arxiv.org/pdf/2507.13366v1.pdf
| null |
[
"Baoshen Guo",
"Zhiqing Hong",
"Junyi Li",
"Shenhao Wang",
"Jinhua Zhao"
] |
[
"Denoising",
"Privacy Preserving"
] | 2025-07-08T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/graph-structured-data-analysis-of-component
|
2507.13721
| null | null |
Graph-Structured Data Analysis of Component Failure in Autonomous Cargo Ships Based on Feature Fusion
|
To address the challenges posed by cascading reactions caused by component failures in autonomous cargo ships (ACS) and the uncertainties in emergency decision-making, this paper proposes a novel hybrid feature fusion framework for constructing a graph-structured dataset of failure modes. By employing an improved cuckoo search algorithm (HN-CSA), the literature retrieval efficiency is significantly enhanced, achieving improvements of 7.1% and 3.4% compared to the NSGA-II and CSA search algorithms, respectively. A hierarchical feature fusion framework is constructed, using Word2Vec encoding to encode subsystem/component features, BERT-KPCA to process failure modes/reasons, and Sentence-BERT to quantify the semantic association between failure impact and emergency decision-making. The dataset covers 12 systems, 1,262 failure modes, and 6,150 propagation paths. Validation results show that the GATE-GNN model achieves a classification accuracy of 0.735, comparable to existing benchmarks. Additionally, a silhouette coefficient of 0.641 indicates that the features are highly distinguishable. In the label prediction results, the Shore-based Meteorological Service System achieved an F1 score of 0.93, demonstrating high prediction accuracy. This paper not only provides a solid foundation for failure analysis in autonomous cargo ships but also offers reliable support for fault diagnosis, risk assessment, and intelligent decision-making systems. The link to the dataset is https://github.com/wojiufukele/Graph-Structured-about-CSA.
|
To address the challenges posed by cascading reactions caused by component failures in autonomous cargo ships (ACS) and the uncertainties in emergency decision-making, this paper proposes a novel hybrid feature fusion framework for constructing a graph-structured dataset of failure modes.
|
https://arxiv.org/abs/2507.13721v1
|
https://arxiv.org/pdf/2507.13721v1.pdf
| null |
[
"Zizhao Zhang",
"Tianxiang Zhao",
"Yu Sun",
"Liping Sun",
"Jichuan Kang"
] |
[
"Decision Making",
"Fault Diagnosis"
] | 2025-07-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/csd-var-content-style-decomposition-in-visual
|
2507.13984
| null | null |
CSD-VAR: Content-Style Decomposition in Visual Autoregressive Models
|
Disentangling content and style from a single image, known as content-style decomposition (CSD), enables recontextualization of extracted content and stylization of extracted styles, offering greater creative flexibility in visual synthesis. While recent personalization methods have explored the decomposition of explicit content style, they remain tailored for diffusion models. Meanwhile, Visual Autoregressive Modeling (VAR) has emerged as a promising alternative with a next-scale prediction paradigm, achieving performance comparable to that of diffusion models. In this paper, we explore VAR as a generative framework for CSD, leveraging its scale-wise generation process for improved disentanglement. To this end, we propose CSD-VAR, a novel method that introduces three key innovations: (1) a scale-aware alternating optimization strategy that aligns content and style representation with their respective scales to enhance separation, (2) an SVD-based rectification method to mitigate content leakage into style representations, and (3) an Augmented Key-Value (K-V) memory enhancing content identity preservation. To benchmark this task, we introduce CSD-100, a dataset specifically designed for content-style decomposition, featuring diverse subjects rendered in various artistic styles. Experiments demonstrate that CSD-VAR outperforms prior approaches, achieving superior content preservation and stylization fidelity.
| null |
https://arxiv.org/abs/2507.13984v1
|
https://arxiv.org/pdf/2507.13984v1.pdf
| null |
[
"Quang-Binh Nguyen",
"Minh Luu",
"Quang Nguyen",
"Anh Tran",
"Khoi Nguyen"
] |
[
"Disentanglement"
] | 2025-07-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/tool-to-tool-matching-analysis-based
|
2507.10564
| null | null |
Tool-to-Tool Matching Analysis Based Difference Score Computation Methods for Semiconductor Manufacturing
|
We consider the problem of tool-to-tool matching (TTTM), also called chamber matching, in the context of semiconductor manufacturing equipment. Traditional TTTM approaches utilize static configuration data or depend on a golden reference, which are difficult to obtain in a commercial manufacturing line. Further, existing methods do not extend very well to a heterogeneous setting, where equipment are of different make-and-model, sourced from different equipment vendors. We propose novel TTTM analysis pipelines to overcome these issues. We hypothesize that mismatched equipment would have higher variance and/or a higher number of modes in the data. Our best univariate method achieves a correlation coefficient >0.95 and >0.5 with the variance and number of modes, respectively, showing that the proposed methods are effective. Also, the best multivariate method achieves a correlation coefficient >0.75 with the top-performing univariate methods, showing its effectiveness. Finally, we analyze the sensitivity of the multivariate algorithms to the algorithm hyper-parameters.
| null |
https://arxiv.org/abs/2507.10564v1
|
https://arxiv.org/pdf/2507.10564v1.pdf
| null |
[
"Sameera Bharadwaja H.",
"Siddhrath Jandial",
"Shashank S. Agashe",
"Rajesh Kumar Reddy Moore",
"Youngkwan Kim"
] |
[] | 2025-07-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/toward-a-public-dataset-of-wide-field
| null | null | null |
Toward a Public Dataset of Wide-Field Astronomical Images Captured with Smartphones
|
Smartphone photography has improved considerably over the years, particularly in low-light conditions. Thanks to better sensors, advanced noise reduction and night modes, smartphones can now capture detailed wide-field astronomical images, from the Moon to deep sky objects. These innovations have transformed them into pocket telescopes, making astrophotography more accessible than ever. In this paper, we present AstroSmartphoneDataset, a dataset of wide-field astronomical images collected since 2021 with various smartphones, and we show how we have used these images to train Deep Learning models for trails detection.
| null |
http://dx.doi.org/10.5220/0013642300003967
|
https://www.scitepress.org/Papers/2025/136423/136423.pdf
|
14th International Conference on Data Science, Technology and Applications (DATA) 2025 10
|
[
"Olivier Parisot",
"Diogo Ramalho Fernandes"
] |
[] | 2025-10-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/mgvq-could-vq-vae-beat-vae-a-generalizable
| null | null | null |
MGVQ: Could VQ-VAE Beat VAE? A Generalizable Tokenizer with Multi-group Quantization
|
Vector Quantized Variational Autoencoders (VQ-VAEs) are fundamental models that compress continuous visual data into discrete tokens. Existing methods have tried to improve the quantization strategy for better reconstruction quality; however, there still exists a large gap between VQ-VAEs and VAEs. To narrow this gap, we propose MGVQ, a novel method to augment the representation capability of discrete codebooks, facilitating easier optimization for codebooks and minimizing information loss, thereby enhancing reconstruction quality. Specifically, we propose to retain the latent dimension to preserve encoded features and incorporate a set of sub-codebooks for quantization. Furthermore, we construct comprehensive zero-shot benchmarks featuring resolutions of 512p and 2k to evaluate the reconstruction performance of existing methods rigorously. MGVQ achieves the state-of-the-art performance on both ImageNet and 8 zero-shot benchmarks across all VQ-VAEs. Notably, compared with SD-VAE, MGVQ outperforms it on ImageNet significantly, with an rFID of 0.49 vs. 0.91, and achieves superior PSNR on all zero-shot benchmarks. These results highlight the superiority of MGVQ in reconstruction and pave the way for preserving fidelity in HD image processing tasks. Code will be publicly available at https://github.com/MKJia/MGVQ
|
MGVQ achieves the state-of-the-art performance on both ImageNet and 8 zero-shot benchmarks across all VQ-VAEs.
|
https://arxiv.org/abs/2507.07997
|
https://arxiv.org/pdf/2507.07997
| null |
[
"Mingkai Jia",
"Wei Yin",
"Xiaotao Hu",
"Jiaxin Guo",
"Xiaoyang Guo",
"Qian Zhang",
"Xiao-Xiao Long",
"Ping Tan"
] |
[
"2k",
"Image Generation",
"Image Reconstruction",
"Quantization"
] | 2025-07-14T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/grounding-methods-for-neural-symbolic-ai
|
2507.08216
| null | null |
Grounding Methods for Neural-Symbolic AI
|
A large class of Neural-Symbolic (NeSy) methods employs a machine learner to process the input entities, while relying on a reasoner based on First-Order Logic to represent and process more complex relationships among the entities. A fundamental role for these methods is played by the process of logic grounding, which determines the relevant substitutions for the logic rules using a (sub)set of entities. Some NeSy methods use an exhaustive derivation of all possible substitutions, preserving the full expressive power of the logic knowledge. This leads to a combinatorial explosion in the number of ground formulas to consider and, therefore, strongly limits their scalability. Other methods rely on heuristic-based selective derivations, which are generally more computationally efficient, but lack a justification and provide no guarantees of preserving the information provided to and returned by the reasoner. Taking inspiration from multi-hop symbolic reasoning, this paper proposes a parametrized family of grounding methods generalizing classic Backward Chaining. Different selections within this family allow us to obtain commonly employed grounding methods as special cases, and to control the trade-off between expressiveness and scalability of the reasoner. The experimental results show that the selection of the grounding criterion is often as important as the NeSy method itself.
| null |
https://arxiv.org/abs/2507.08216v1
|
https://arxiv.org/pdf/2507.08216v1.pdf
| null |
[
"Rodrigo Castellano Ontiveros",
"Francesco Giannini",
"Marco Gori",
"Giuseppe Marra",
"Michelangelo Diligenti"
] |
[] | 2025-07-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/scaling-attention-to-very-long-sequences-in
|
2507.08637
| null | null |
Scaling Attention to Very Long Sequences in Linear Time with Wavelet-Enhanced Random Spectral Attention (WERSA)
|
Transformer models are computationally costly on long sequences since regular attention has quadratic $O(n^2)$ time complexity. We introduce Wavelet-Enhanced Random Spectral Attention (WERSA), a novel mechanism of linear $O(n)$ time complexity that is pivotal to enable successful long-sequence processing without the performance trade-off. WERSA merges content-adaptive random spectral features together with multi-resolution Haar wavelets and learnable parameters to selectively attend to informative scales of data while preserving linear efficiency. Large-scale comparisons \textbf{on single GPU} and across various benchmarks (vision, NLP, hierarchical reasoning) and various attention mechanisms (like Multiheaded Attention, Flash-Attention-2, FNet, Linformer, Performer, Waveformer), reveal uniform advantages of WERSA. It achieves best accuracy in all tests. On ArXiv classification, WERSA improves accuracy over vanilla attention by 1.2\% (86.2\% vs 85.0\%) while cutting training time by 81\% (296s vs 1554s) and FLOPS by 73.4\% (26.2G vs 98.4G). Significantly, WERSA excels where vanilla and FlashAttention-2 fail: on ArXiv-128k's extremely lengthy sequences, it achieves best accuracy (79.1\%) and AUC (0.979) among viable methods, operating on data that gives Out-Of-Memory errors to quadratic methods while being \textbf{twice as fast} as Waveformer, its next-best competitor. By significantly reducing computational loads without compromising accuracy, WERSA makes possible more practical, more affordable, long-context models, in particular on low-resource hardware, for more sustainable and more scalable AI development.
|
Transformer models are computationally costly on long sequences since regular attention has quadratic $O(n^2)$ time complexity.
|
https://arxiv.org/abs/2507.08637v1
|
https://arxiv.org/pdf/2507.08637v1.pdf
| null |
[
"Vincenzo Dentamaro"
] |
[
"GPU"
] | 2025-07-11T00:00:00 | null | null | null | null |
[
{
"code_snippet_url": "",
"description": "# WERSA: Wavelet-Enhanced Random Spectral Attention\r\n\r\nThis repository provides the official implementation of WERSA, a novel attention mechanism with linear O(n) time complexity, designed to scale Transformer models to very long sequences without a performance trade-off.\r\n\r\nOur paper, \"Scaling Attention to Very Long Sequences in Linear Time with Wavelet-Enhanced Random Spectral Attention (WERSA)\", is available on [arXiv:2507.08637](https://arxiv.org/abs/2507.08637).\r\n\r\n## 🔬 The Science Behind WERSA\r\n\r\nStandard attention mechanisms have a quadratic (O(n²)) complexity that makes processing long sequences impractical. WERSA solves this by combining several powerful principles to achieve linear (O(n)) efficiency while maintaining high performance.\r\n\r\n- **Multi-Resolution Analysis**: Uses Haar wavelet transforms to decompose the input into multiple scales, capturing both local details and global context.\r\n- **Adaptive Filtering**: An MLP generates input-dependent filters and learnable scale_weights modulate each wavelet level, allowing the model to dynamically prioritize the most informative frequency components.\r\n- **Linear Complexity via Random Features**: Uses random feature projection to approximate the softmax kernel, avoiding the computation of the full quadratic attention matrix.\r\n\r\n## ⚙️ Installation\r\n\r\nFirst, ensure you have PyTorch and Hugging Face Transformers installed. Then, install the wersa package directly from this repository.\r\n\r\n```bash\r\n# 1. Install core dependencies (example for CUDA 12.1)\r\npip install torch --index-url https://download.pytorch.org/whl/cu121\r\npip install transformers\r\n\r\n# 2. 
Install the WERSA package from this repository\r\npip install git+https://github.com/vincenzodentamaro/wersa.git\r\n```\r\n\r\n## 🚀 Quickstart: Building a Qwen-like Model with WERSA\r\n\r\nYou can easily build a Qwen-style causal language model with WERSA attention by importing the `WersaConfig` and `WersaForCausalLM` classes from the package.\r\n\r\n### Building an 8B Parameter Model\r\n\r\nThis snippet creates an ~8B parameter model with a configuration similar to state-of-the-art models like Qwen2-7B.\r\n\r\n```python\r\nfrom wersa import WersaConfig, WersaForCausalLM\r\nfrom transformers import AutoTokenizer\r\n\r\n# Load a compatible tokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(\"Qwen/Qwen2-7B\")\r\n\r\n# Define the configuration for the 8B model\r\nconfig_8b = WersaConfig(\r\n vocab_size=len(tokenizer),\r\n pad_token_id=tokenizer.pad_token_id,\r\n hidden_size=4096,\r\n num_hidden_layers=32,\r\n num_attention_heads=32,\r\n intermediate_size=11008,\r\n max_position_embeddings=4096\r\n)\r\n\r\n# Instantiate the model\r\nmodel_8b = WersaForCausalLM(config_8b)\r\nprint(f\"8B Model created with ~{model_8b.num_parameters() / 1e9:.2f}B parameters.\")\r\n```\r\n\r\n### Building a 0.6B Parameter Model\r\n\r\nThis snippet creates a smaller ~0.6B parameter model, perfect for faster experiments or deployment on more constrained hardware.\r\n\r\n```python\r\nfrom wersa import WersaConfig, WersaForCausalLM\r\nfrom transformers import AutoTokenizer\r\n\r\n# Load a compatible tokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(\"Qwen/Qwen2-1.5B\")\r\n\r\n# Define the configuration for the 0.6B model\r\nconfig_0_6b = WersaConfig(\r\n vocab_size=len(tokenizer),\r\n pad_token_id=tokenizer.pad_token_id,\r\n hidden_size=1024,\r\n num_hidden_layers=24,\r\n num_attention_heads=16,\r\n intermediate_size=2816,\r\n max_position_embeddings=1024\r\n)\r\n\r\n# Instantiate the model\r\nmodel_0_6b = WersaForCausalLM(config_0_6b)\r\nprint(f\"0.6B Model created with 
~{model_0_6b.num_parameters() / 1e9:.2f}B parameters.\")\r\n```\r\n\r\n## 📖 Training and Examples\r\n\r\nThis repository includes complete scripts to demonstrate how to pre-train these models from scratch and test their generation capabilities.\r\n\r\n- `train_and_generate_1b.py`: A full example for training a ~1B parameter model.\r\n- `train_and_generate_8b.py`: A full example for training the 8B parameter model.\r\n\r\n## 📜 Citation\r\n\r\nIf you find WERSA useful in your research, please consider citing our paper:\r\n\r\n```bibtex\r\n@misc{dentamaro2025scaling,\r\n title={Scaling Attention to Very Long Sequences in Linear Time with Wavelet-Enhanced Random Spectral Attention (WERSA)}, \r\n author={Vincenzo Dentamaro},\r\n year={2025},\r\n eprint={2507.08637},\r\n archivePrefix={arXiv},\r\n primaryClass={cs.LG}\r\n}\r\n```\r\n\r\n## 📄 License\r\n\r\nThis project is licensed under the Apache License 2.0.",
"full_name": "Wavelet-Enhanced Random Spectral Attention",
"introduced_year": 2000,
"main_collection": {
"area": "General",
"description": "",
"name": "Attention Mechanisms",
"parent": "Attention"
},
"name": "WERSA",
"source_title": "Scaling Attention to Very Long Sequences in Linear Time with Wavelet-Enhanced Random Spectral Attention (WERSA)",
"source_url": "https://arxiv.org/abs/2507.08637v1"
}
] |
https://paperswithcode.com/paper/an-end-to-end-dnn-inference-framework-for-the
|
2507.13736
| null | null |
An End-to-End DNN Inference Framework for the SpiNNaker2 Neuromorphic MPSoC
|
This work presents a multi-layer DNN scheduling framework as an extension of OctopuScheduler, providing an end-to-end flow from PyTorch models to inference on a single SpiNNaker2 chip. Together with a front-end comprised of quantization and lowering steps, the proposed framework enables the edge-based execution of large and complex DNNs up to transformer scale using the neuromorphic platform SpiNNaker2.
| null |
https://arxiv.org/abs/2507.13736v1
|
https://arxiv.org/pdf/2507.13736v1.pdf
| null |
[
"Matthias Jobst",
"Tim Langer",
"Chen Liu",
"Mehmet Alici",
"Hector A. Gonzalez",
"Christian Mayr"
] |
[
"Quantization",
"Scheduling"
] | 2025-07-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/adaptive-linguistic-prompting-alp-enhances
|
2507.13357
| null | null |
Adaptive Linguistic Prompting (ALP) Enhances Phishing Webpage Detection in Multimodal Large Language Models
|
Phishing attacks represent a significant cybersecurity threat, necessitating adaptive detection techniques. This study explores few-shot Adaptive Linguistic Prompting (ALP) in detecting phishing webpages through the multimodal capabilities of state-of-the-art large language models (LLMs) such as GPT-4o and Gemini 1.5 Pro. ALP is a structured semantic reasoning method that guides LLMs to analyze textual deception by breaking down linguistic patterns, detecting urgency cues, and identifying manipulative diction commonly found in phishing content. By integrating textual, visual, and URL-based analysis, we propose a unified model capable of identifying sophisticated phishing attempts. Our experiments demonstrate that ALP significantly enhances phishing detection accuracy by guiding LLMs through structured reasoning and contextual analysis. The findings highlight the potential of ALP-integrated multimodal LLMs to advance phishing detection frameworks, achieving an F1-score of 0.93, surpassing traditional approaches. These results establish a foundation for more robust, interpretable, and adaptive linguistic-based phishing detection systems using LLMs.
| null |
https://arxiv.org/abs/2507.13357v1
|
https://arxiv.org/pdf/2507.13357v1.pdf
| null |
[
"Atharva Bhargude",
"Ishan Gonehal",
"Chandler Haney",
"Dave Yoon",
"Kevin Zhu",
"Aaron Sandoval",
"Sean O'Brien",
"Kaustubh Vinnakota"
] |
[] | 2025-06-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/teach-yolo-to-remember-a-self-distillation
|
2503.04688
| null | null |
Teach YOLO to Remember: A Self-Distillation Approach for Continual Object Detection
|
Real-time object detectors like YOLO achieve exceptional performance when trained on large datasets for multiple epochs. However, in real-world scenarios where data arrives incrementally, neural networks suffer from catastrophic forgetting, leading to a loss of previously learned knowledge. To address this, prior research has explored strategies for Class Incremental Learning (CIL) in Continual Learning for Object Detection (CLOD), with most approaches focusing on two-stage object detectors. However, existing work suggests that Learning without Forgetting (LwF) may be ineffective for one-stage anchor-free detectors like YOLO due to noisy regression outputs, which risk transferring corrupted knowledge. In this work, we introduce YOLO LwF, a self-distillation approach tailored for YOLO-based continual object detection. We demonstrate that when coupled with a replay memory, YOLO LwF significantly mitigates forgetting. Compared to previous approaches, it achieves state-of-the-art performance, improving mAP by +2.1% and +2.9% on the VOC and COCO benchmarks, respectively.
| null |
https://arxiv.org/abs/2503.04688v1
|
https://arxiv.org/pdf/2503.04688v1.pdf
| null |
[
"Riccardo De Monte",
"Davide Dalle Pezze",
"Gian Antonio Susto"
] |
[
"class-incremental learning",
"Class Incremental Learning",
"Continual Learning",
"Incremental Learning",
"Object",
"object-detection",
"Object Detection"
] | 2025-03-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/smart-fault-detection-in-satellite-electrical
|
2507.14004
| null | null |
Smart fault detection in satellite electrical power system
|
This paper presents a new approach for fault detection in the electrical power system of satellites operating in Low Earth Orbit (LEO) without an Attitude Determination and Control Subsystem (ADCS). Components of these systems are prone to faults, such as line-to-line faults in the photovoltaic subsystem, open circuits and short circuits in the DC-to-DC converter, as well as ground faults in batteries. Previous research has largely focused on detecting faults in individual components, such as photovoltaic arrays or converter systems; limited attention has been given to the electrical power system of the satellite as a whole. Our approach addresses this gap by utilizing a Multi-Layer Perceptron (MLP) neural network model, which leverages input data such as solar radiation and surface temperature to predict current and load outputs. Machine learning techniques such as Principal Component Analysis (PCA) and K-Nearest Neighbors (KNN) are then used to classify faults effectively. The presented model achieves over 99% accuracy in identifying faults across multiple subsystems, marking a notable advancement over previous approaches by offering a complete diagnostic solution for the entire satellite power system. This thorough method boosts system reliability and helps lower the chances of mission failure.
| null |
https://arxiv.org/abs/2507.14004v1
|
https://arxiv.org/pdf/2507.14004v1.pdf
| null |
[
"Niloofar Nobahari",
"Alireza Rezaee"
] |
[
"Diagnostic",
"Fault Detection"
] | 2025-07-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/mitigating-object-hallucinations-via-sentence
|
2507.12455
| null | null |
Mitigating Object Hallucinations via Sentence-Level Early Intervention
|
Multimodal large language models (MLLMs) have revolutionized cross-modal understanding but continue to struggle with hallucinations - fabricated content contradicting visual inputs. Existing hallucination mitigation methods either incur prohibitive computational costs or introduce distribution mismatches between training data and model outputs. We identify a critical insight: hallucinations predominantly emerge at the early stages of text generation and propagate through subsequent outputs. To address this, we propose **SENTINEL** (**S**entence-level **E**arly i**N**tervention **T**hrough **IN**-domain pr**E**ference **L**earning), a framework that eliminates dependency on human annotations. Specifically, we first bootstrap high-quality in-domain preference pairs by iteratively sampling model outputs, validating object existence through cross-checking with two open-vocabulary detectors, and classifying sentences into hallucinated/non-hallucinated categories. Subsequently, we use context-coherent positive samples and hallucinated negative samples to build context-aware preference data iteratively. Finally, we train models using a context-aware preference loss (C-DPO) that emphasizes discriminative learning at the sentence level where hallucinations initially manifest. Experimental results show that SENTINEL can reduce hallucinations by over 90\% compared to the original model and outperforms the previous state-of-the-art method on both hallucination benchmarks and general capabilities benchmarks, demonstrating its superiority and generalization ability. The models, datasets, and code are available at https://github.com/pspdada/SENTINEL.
|
Multimodal large language models (MLLMs) have revolutionized cross-modal understanding but continue to struggle with hallucinations - fabricated content contradicting visual inputs.
|
https://arxiv.org/abs/2507.12455v1
|
https://arxiv.org/pdf/2507.12455v1.pdf
| null |
[
"Shangpin Peng",
"Senqiao Yang",
"Li Jiang",
"Zhuotao Tian"
] |
[
"Hallucination",
"MM-Vet",
"Sentence",
"Text Generation",
"TextVQA"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/benchmarking-histopathology-foundation-models-1
|
2506.18668
| null | null |
Benchmarking histopathology foundation models in a multi-center dataset for skin cancer subtyping
|
Pretraining on large-scale, in-domain datasets grants histopathology foundation models (FM) the ability to learn task-agnostic data representations, enhancing transfer learning on downstream tasks. In computational pathology, automated whole slide image analysis requires multiple instance learning (MIL) frameworks due to the gigapixel scale of the slides. The diversity among histopathology FMs has highlighted the need to design real-world challenges for evaluating their effectiveness. To bridge this gap, our work presents a novel benchmark for evaluating histopathology FMs as patch-level feature extractors within a MIL classification framework. For that purpose, we leverage the AI4SkIN dataset, a multi-center cohort encompassing slides with challenging cutaneous spindle cell neoplasm subtypes. We also define the Foundation Model - Silhouette Index (FM-SI), a novel metric to measure model consistency against distribution shifts. Our experimentation shows that extracting less biased features enhances classification performance, especially in similarity-based MIL classifiers.
|
Pretraining on large-scale, in-domain datasets grants histopathology foundation models (FM) the ability to learn task-agnostic data representations, enhancing transfer learning on downstream tasks.
|
https://arxiv.org/abs/2506.18668v1
|
https://arxiv.org/pdf/2506.18668v1.pdf
| null |
[
"Pablo Meseguer",
"Rocío del Amor",
"Valery Naranjo"
] |
[
"Benchmarking",
"Diversity",
"Multiple Instance Learning",
"Transfer Learning"
] | 2025-06-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/nohumansrequired-autonomous-high-quality
|
2507.14119
| null | null |
NoHumansRequired: Autonomous High-Quality Image Editing Triplet Mining
|
Recent advances in generative modeling enable image editing assistants that follow natural language instructions without additional user input. Their supervised training requires millions of triplets: original image, instruction, edited image. Yet mining pixel-accurate examples is hard. Each edit must affect only prompt-specified regions, preserve stylistic coherence, respect physical plausibility, and retain visual appeal. The lack of robust automated edit-quality metrics hinders reliable automation at scale. We present an automated, modular pipeline that mines high-fidelity triplets across domains, resolutions, instruction complexities, and styles. Built on public generative models and running without human intervention, our system uses a task-tuned Gemini validator to score instruction adherence and aesthetics directly, removing any need for segmentation or grounding models. Inversion and compositional bootstrapping enlarge the mined set by approximately 2.2x, enabling large-scale high-fidelity training data. By automating the most repetitive annotation steps, the approach allows a new scale of training without human labeling effort. To democratize research in this resource-intensive area, we release NHR-Edit: an open dataset of 358k high-quality triplets. In the largest cross-dataset evaluation, it surpasses all public alternatives. We also release Bagel-NHR-Edit, an open-source fine-tuned Bagel model, which achieves state-of-the-art metrics in our experiments.
| null |
https://arxiv.org/abs/2507.14119v1
|
https://arxiv.org/pdf/2507.14119v1.pdf
| null |
[
"Maksim Kuprashevich",
"Grigorii Alekseenko",
"Irina Tolstykh",
"Georgii Fedorov",
"Bulat Suleimanov",
"Vladimir Dokholyan",
"Aleksandr Gordeev"
] |
[
"Image Editing",
"Text-based Image Editing",
"Triplet"
] | 2025-07-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/lets-forecast-learning-embedology-for-time
|
2506.06454
| null | null |
LETS Forecast: Learning Embedology for Time Series Forecasting
|
Real-world time series are often governed by complex nonlinear dynamics. Understanding these underlying dynamics is crucial for precise future prediction. While deep learning has achieved major success in time series forecasting, many existing approaches do not explicitly model the dynamics. To bridge this gap, we introduce DeepEDM, a framework that integrates nonlinear dynamical systems modeling with deep neural networks. Inspired by empirical dynamic modeling (EDM) and rooted in Takens' theorem, DeepEDM presents a novel deep model that learns a latent space from time-delayed embeddings, and employs kernel regression to approximate the underlying dynamics, while leveraging efficient implementation of softmax attention and allowing for accurate prediction of future time steps. To evaluate our method, we conduct comprehensive experiments on synthetic data of nonlinear dynamical systems as well as real-world time series across domains. Our results show that DeepEDM is robust to input noise, and outperforms state-of-the-art methods in forecasting accuracy. Our code is available at: https://abrarmajeedi.github.io/deep_edm.
|
Inspired by empirical dynamic modeling (EDM) and rooted in Takens' theorem, DeepEDM presents a novel deep model that learns a latent space from time-delayed embeddings, and employs kernel regression to approximate the underlying dynamics, while leveraging efficient implementation of softmax attention and allowing for accurate prediction of future time steps.
|
https://arxiv.org/abs/2506.06454v1
|
https://arxiv.org/pdf/2506.06454v1.pdf
| null |
[
"Abrar Majeedi",
"Viswanatha Reddy Gajjala",
"Satya Sai Srinath Namburi GNVV",
"Nada Magdi Elkordi",
"Yin Li"
] |
[
"Future prediction",
"Time Series",
"Time Series Forecasting"
] | 2025-06-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-data-ensemble-based-approach-for-sample
|
2506.23716
| null | null |
A Data-Ensemble-Based Approach for Sample-Efficient LQ Control of Linear Time-Varying Systems
|
This paper presents a sample-efficient, data-driven control framework for finite-horizon linear quadratic (LQ) control of linear time-varying (LTV) systems. In contrast to the time-invariant case, the time-varying LQ problem involves a differential Riccati equation (DRE) with time-dependent parameters and terminal boundary constraints. We formulate the LQ problem as a nonconvex optimization problem and conduct a rigorous analysis of its dual structure. By exploiting the inherent convexity of the dual problem and analyzing the KKT conditions, we derive an explicit relationship between the optimal dual solution and the parameters of the associated Q-function in time-varying case. This theoretical insight supports the development of a novel, sample-efficient, non-iterative semidefinite programming (SDP) algorithm that directly computes the optimal sequence of feedback gains from an ensemble of input-state data sequences without model identification. The resulting convex, data-dependent framework provides global optimality guarantees for completely unknown LTV systems. As a special case, the method also applies to finite-horizon LQ control of linear time-invariant (LTI) systems. In this setting, a single input-state trajectory suffices to identify the optimal LQ feedback policy, improving significantly over existing Q-learning approaches for finite horizon LTI systems that typically require data from multiple episodes. The approach provides a new optimization-based perspective on Q-learning in time-varying settings and contributes to the broader understanding of data-driven control in non-stationary environments. Simulation results show that, compared to recent methods, the proposed approach achieves superior optimality and sample efficiency on LTV systems, and indicates potential for stabilizing and optimal control of nonlinear systems.
| null |
https://arxiv.org/abs/2506.23716v1
|
https://arxiv.org/pdf/2506.23716v1.pdf
| null |
[
"Sahel Vahedi Noori",
"Maryam Babazadeh"
] |
[
"Q-Learning"
] | 2025-06-30T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/mgvq-could-vq-vae-beat-vae-a-generalizable-1
|
2507.07997
| null | null |
MGVQ: Could VQ-VAE Beat VAE? A Generalizable Tokenizer with Multi-group Quantization
|
Vector Quantized Variational Autoencoders (VQ-VAEs) are fundamental models that compress continuous visual data into discrete tokens. Existing methods have tried to improve the quantization strategy for better reconstruction quality, however, there still exists a large gap between VQ-VAEs and VAEs. To narrow this gap, we propose MGVQ, a novel method to augment the representation capability of discrete codebooks, facilitating easier optimization for codebooks and minimizing information loss, thereby enhancing reconstruction quality. Specifically, we propose to retain the latent dimension to preserve encoded features and incorporate a set of sub-codebooks for quantization. Furthermore, we construct comprehensive zero-shot benchmarks featuring resolutions of 512p and 2k to evaluate the reconstruction performance of existing methods rigorously. MGVQ achieves the state-of-the-art performance on both ImageNet and 8 zero-shot benchmarks across all VQ-VAEs. Notably, compared with SD-VAE, we outperform them on ImageNet significantly, with rFID 0.49 v.s. 0.91, and achieve superior PSNR on all zero-shot benchmarks. These results highlight the superiority of MGVQ in reconstruction and pave the way for preserving fidelity in HD image processing tasks. Code will be publicly available at https://github.com/MKJia/MGVQ.
| null |
https://arxiv.org/abs/2507.07997v2
|
https://arxiv.org/pdf/2507.07997v2.pdf
| null |
[
"Mingkai Jia",
"Wei Yin",
"Xiaotao Hu",
"Jiaxin Guo",
"Xiaoyang Guo",
"Qian Zhang",
"Xiao-Xiao Long",
"Ping Tan"
] |
[
"2k",
"Quantization"
] | 2025-07-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/materialfusion-high-quality-zero-shot-and
|
2502.06606
| null | null |
MaterialFusion: High-Quality, Zero-Shot, and Controllable Material Transfer with Diffusion Models
|
Manipulating the material appearance of objects in images is critical for applications like augmented reality, virtual prototyping, and digital content creation. We present MaterialFusion, a novel framework for high-quality material transfer that allows users to adjust the degree of material application, achieving an optimal balance between new material properties and the object's original features. MaterialFusion seamlessly integrates the modified object into the scene by maintaining background consistency and mitigating boundary artifacts. To thoroughly evaluate our approach, we have compiled a dataset of real-world material transfer examples and conducted complex comparative analyses. Through comprehensive quantitative evaluations and user studies, we demonstrate that MaterialFusion significantly outperforms existing methods in terms of quality, user control, and background preservation. Code is available at https://github.com/ControlGenAI/MaterialFusion.
| null |
https://arxiv.org/abs/2502.06606v2
|
https://arxiv.org/pdf/2502.06606v2.pdf
| null |
[
"Kamil Garifullin",
"Maxim Nikolaev",
"Andrey Kuznetsov",
"Aibek Alanov"
] |
[] | 2025-02-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/be-the-change-you-want-to-see-revisiting-1
|
2507.03367
| null | null |
Be the Change You Want to See: Revisiting Remote Sensing Change Detection Practices
|
Remote sensing change detection aims to localize semantic changes between images of the same location captured at different times. In the past few years, newer methods have attributed enhanced performance to the additions of new and complex components to existing architectures. Most fail to measure the performance contribution of fundamental design choices such as backbone selection, pre-training strategies, and training configurations. We claim that such fundamental design choices often improve performance even more significantly than the addition of new architectural components. Due to that, we systematically revisit the design space of change detection models and analyse the full potential of a well-optimised baseline. We identify a set of fundamental design choices that benefit both new and existing architectures. Leveraging this insight, we demonstrate that when carefully designed, even an architecturally simple model can match or surpass state-of-the-art performance on six challenging change detection datasets. Our best practices generalise beyond our architecture and also offer performance improvements when applied to related methods, indicating that the space of fundamental design choices has been underexplored. Our guidelines and architecture provide a strong foundation for future methods, emphasizing that optimizing core components is just as important as architectural novelty in advancing change detection performance. Code: https://github.com/blaz-r/BTC-change-detection
| null |
https://arxiv.org/abs/2507.03367v1
|
https://arxiv.org/pdf/2507.03367v1.pdf
| null |
[
"Blaž Rolih",
"Matic Fučka",
"Filip Wolf",
"Luka Čehovin Zajc"
] |
[
"Change Detection"
] | 2025-07-04T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/agentsnet-coordination-and-collaborative
|
2507.08616
| null | null |
AgentsNet: Coordination and Collaborative Reasoning in Multi-Agent LLMs
|
Large-language models (LLMs) have demonstrated powerful problem-solving capabilities, in particular when organized in multi-agent systems. However, the advent of such systems also raises several questions on the ability of a complex network of agents to effectively self-organize and collaborate. While measuring performance on standard reasoning benchmarks indicates how well multi-agent systems can solve reasoning tasks, it is unclear whether these systems are able to leverage their topology effectively. Here, we propose AgentsNet, a new benchmark for multi-agent reasoning. By drawing inspiration from classical problems in distributed systems and graph theory, AgentsNet measures the ability of multi-agent systems to collaboratively form strategies for problem-solving, self-organization, and effective communication given a network topology. We evaluate a variety of baseline methods on AgentsNet including homogeneous networks of agents which first have to agree on basic protocols for organization and communication. We find that some frontier LLMs are already demonstrating strong performance for small networks but begin to fall off once the size of the network scales. While existing multi-agent benchmarks cover at most 2-5 agents, AgentsNet is practically unlimited in size and can scale with new generations of LLMs. As such, we also probe frontier models in a setup with up to 100 agents.
| null |
https://arxiv.org/abs/2507.08616v1
|
https://arxiv.org/pdf/2507.08616v1.pdf
| null |
[
"Florian Grötschla",
"Luis Müller",
"Jan Tönshoff",
"Mikhail Galkin",
"Bryan Perozzi"
] |
[] | 2025-07-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/arctic-inference-with-shift-parallelism-fast
|
2507.11830
| null | null |
Arctic Inference with Shift Parallelism: Fast and Efficient Open Source Inference System for Enterprise AI
|
Inference is now the dominant AI workload, yet existing systems force trade-offs between latency, throughput, and cost. Arctic Inference, an open-source vLLM plugin from Snowflake AI Research, introduces Shift Parallelism, a dynamic parallelism strategy that adapts to real-world traffic while integrating speculative decoding, SwiftKV compute reduction, and optimized embedding inference. It achieves up to 3.4 times faster request completion, 1.75 times faster generation, and 1.6M tokens/sec per GPU for embeddings, outperforming both latency- and throughput-optimized deployments. Already powering Snowflake Cortex AI, Arctic Inference delivers state-of-the-art, cost-effective inference for enterprise AI and is now available to the community.
| null |
https://arxiv.org/abs/2507.11830v1
|
https://arxiv.org/pdf/2507.11830v1.pdf
| null |
[
"Samyam Rajbhandari",
"Mert Hidayetoglu",
"Aurick Qiao",
"Ye Wang",
"Juncheng Yang",
"Jeff Rasley",
"Michael Wyatt",
"Yuxiong He"
] |
[
"GPU"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/exploring-contextual-attribute-density-in-1
|
2503.12460
| null | null |
Exploring Contextual Attribute Density in Referring Expression Counting
|
Referring expression counting (REC) algorithms aim to provide more flexible and interactive counting ability across varied fine-grained text expressions. However, the requirement for fine-grained attribute understanding poses challenges for prior art, as it struggles to accurately align attribute information with correct visual patterns. Given the proven importance of ''visual density'', it is presumed that the limitations of current REC approaches stem from an under-exploration of ''contextual attribute density'' (CAD). In the scope of REC, we define CAD as the measure of the information intensity of one certain fine-grained attribute in visual regions. To model the CAD, we propose a U-shape CAD estimator in which referring expression and multi-scale visual features from GroundingDINO can interact with each other. With additional density supervision, we can effectively encode CAD, which is subsequently decoded via a novel attention procedure with CAD-refined queries. Integrating all these contributions, our framework significantly outperforms state-of-the-art REC methods, achieving a $30\%$ error reduction in counting metrics and a $10\%$ improvement in localization accuracy. The surprising results shed light on the significance of contextual attribute density for REC. Code will be at github.com/Xu3XiWang/CAD-GD.
| null |
https://arxiv.org/abs/2503.12460v1
|
https://arxiv.org/pdf/2503.12460v1.pdf
| null |
[
"Zhicheng Wang",
"Zhiyu Pan",
"Zhan Peng",
"Jian Cheng",
"Liwen Xiao",
"Wei Jiang",
"Zhiguo Cao"
] |
[
"Attribute",
"Referring Expression"
] | 2025-03-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/visionthink-smart-and-efficient-vision
|
2507.13348
| null | null |
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning
|
Recent advancements in vision-language models (VLMs) have improved performance by increasing the number of visual tokens, which are often significantly longer than text tokens. However, we observe that most real-world scenarios do not require such an extensive number of visual tokens. While the performance drops significantly in a small subset of OCR-related tasks, models still perform accurately in most other general VQA tasks with only 1/4 resolution. Therefore, we propose to dynamically process distinct samples with different resolutions, and present a new paradigm for visual token compression, namely, VisionThink. It starts with a downsampled image and smartly decides whether it is sufficient for problem solving. Otherwise, the model could output a special token to request the higher-resolution image. Compared to existing Efficient VLM methods that compress tokens using fixed pruning ratios or thresholds, VisionThink autonomously decides whether to compress tokens case by case. As a result, it demonstrates strong fine-grained visual understanding capability on OCR-related tasks, and meanwhile saves substantial visual tokens on simpler tasks. We adopt reinforcement learning and propose the LLM-as-Judge strategy to successfully apply RL to general VQA tasks. Moreover, we carefully design a reward function and penalty mechanism to achieve a stable and reasonable image resize call ratio. Extensive experiments demonstrate the superiority, efficiency, and effectiveness of our method. Our code is available at https://github.com/dvlab-research/VisionThink.
| null |
https://arxiv.org/abs/2507.13348v1
|
https://arxiv.org/pdf/2507.13348v1.pdf
| null |
[
"Senqiao Yang",
"Junyi Li",
"Xin Lai",
"Bei Yu",
"Hengshuang Zhao",
"Jiaya Jia"
] |
[
"Language Modeling",
"Language Modelling",
"Optical Character Recognition (OCR)",
"reinforcement-learning",
"Reinforcement Learning",
"Visual Question Answering (VQA)"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/agents-llm-augmentative-generation-of
|
2507.13729
| null | null |
AGENTS-LLM: Augmentative GENeration of Challenging Traffic Scenarios with an Agentic LLM Framework
|
Rare, yet critical, scenarios pose a significant challenge in testing and evaluating autonomous driving planners. Relying solely on real-world driving scenes requires collecting massive datasets to capture these scenarios. While automatic generation of traffic scenarios appears promising, data-driven models require extensive training data and often lack fine-grained control over the output. Moreover, generating novel scenarios from scratch can introduce a distributional shift from the original training scenes, which undermines the validity of evaluations, especially for learning-based planners. To sidestep this, recent work proposes to generate challenging scenarios by augmenting original scenarios from the test set. However, this involves the manual augmentation of scenarios by domain experts, an approach that is unable to meet the demands for scale in the evaluation of self-driving systems. Therefore, this paper introduces a novel LLM-agent based framework for augmenting real-world traffic scenarios using natural language descriptions, addressing the limitations of existing methods. A key innovation is the use of an agentic design, enabling fine-grained control over the output and maintaining high performance even with smaller, cost-effective LLMs. Extensive human expert evaluation demonstrates our framework's ability to accurately adhere to user intent, generating high quality augmented scenarios comparable to those created manually.
| null |
https://arxiv.org/abs/2507.13729v1
|
https://arxiv.org/pdf/2507.13729v1.pdf
| null |
[
"Yu Yao",
"Salil Bhatnagar",
"Markus Mazzola",
"Vasileios Belagiannis",
"Igor Gilitschenski",
"Luigi Palmieri",
"Simon Razniewski",
"Marcel Hallgarten"
] |
[
"Autonomous Driving"
] | 2025-07-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/ctrls-chain-of-thought-reasoning-via-latent
|
2507.08182
| null | null |
CTRLS: Chain-of-Thought Reasoning via Latent State-Transition
|
Chain-of-thought (CoT) reasoning enables large language models (LLMs) to break down complex problems into interpretable intermediate steps, significantly enhancing model transparency and performance in reasoning tasks. However, conventional CoT methods rely on heuristic sampling without structured modeling of reasoning transitions, constraining their ability to systematically explore and discover diverse and effective reasoning trajectories. In this work, we introduce CTRLS, a framework that formulates CoT reasoning as a Markov decision process (MDP) with latent state transitions, enabling principled and state-aware exploration via distributional reinforcement learning. By modeling reasoning actions as explicit probability distributions in latent space, our approach explicitly models epistemic uncertainty, facilitating robust exploration of the reasoning space. As part of our framework, we introduce an on-policy reinforcement learning strategy incorporating epsilon-greedy exploration and entropy-based regularization to iteratively refine latent state transitions without requiring additional fine-tuning of the underlying LLM. Theoretical analyses provide evidence lower bounds (ELBOs), theoretically grounding our transition-aware modeling of latent reasoning dynamics. Further experiments demonstrate improvements in reasoning accuracy, diversity, and exploration efficiency across benchmark reasoning tasks.
| null |
https://arxiv.org/abs/2507.08182v1
|
https://arxiv.org/pdf/2507.08182v1.pdf
| null |
[
"Junda Wu",
"Yuxin Xiong",
"Xintong Li",
"Zhengmian Hu",
"Tong Yu",
"Rui Wang",
"Xiang Chen",
"Jingbo Shang",
"Julian McAuley"
] |
[
"Distributional Reinforcement Learning",
"reinforcement-learning",
"Reinforcement Learning"
] | 2025-07-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/replacing-thinking-with-tool-usage-enables
|
2507.05065
| null | null |
Replacing thinking with tool usage enables reasoning in small language models
|
Recent advances have established a new machine learning paradigm based on scaling up compute at inference time as well as at training time. In that line of work, a combination of Supervised Fine-Tuning (SFT) on synthetic demonstrations and Reinforcement Learning with Verifiable Rewards (RLVR) is used for training Large Language Models to expend extra compute during inference in the form of "thoughts" expressed in natural language. In this paper, we propose to instead format these tokens as a multi-turn interaction trace with a stateful tool. At each turn, the new state of the tool is appended to the context of the model, whose job is to generate the tokens necessary to control the tool via a custom DSL. We benchmark this approach on the problem of repairing malfunctioning Python code, and show that this constrained setup allows for faster sampling of experience and a denser reward signal, allowing even models of size up to 3B parameters to learn how to proficiently expend additional compute on the task.
| null |
https://arxiv.org/abs/2507.05065v1
|
https://arxiv.org/pdf/2507.05065v1.pdf
| null |
[
"Corrado Rainone",
"Tim Bakker",
"Roland Memisevic"
] |
[] | 2025-07-07T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/geometry-aware-4d-video-generation-for-robot
|
2507.01099
| null | null |
Geometry-aware 4D Video Generation for Robot Manipulation
|
Understanding and predicting the dynamics of the physical world can enhance a robot's ability to plan and interact effectively in complex environments. While recent video generation models have shown strong potential in modeling dynamic scenes, generating videos that are both temporally coherent and geometrically consistent across camera views remains a significant challenge. To address this, we propose a 4D video generation model that enforces multi-view 3D consistency of videos by supervising the model with cross-view pointmap alignment during training. This geometric supervision enables the model to learn a shared 3D representation of the scene, allowing it to predict future video sequences from novel viewpoints based solely on the given RGB-D observations, without requiring camera poses as inputs. Compared to existing baselines, our method produces more visually stable and spatially aligned predictions across multiple simulated and real-world robotic datasets. We further show that the predicted 4D videos can be used to recover robot end-effector trajectories using an off-the-shelf 6DoF pose tracker, supporting robust robot manipulation and generalization to novel camera viewpoints.
| null |
https://arxiv.org/abs/2507.01099v1
|
https://arxiv.org/pdf/2507.01099v1.pdf
| null |
[
"Zeyi Liu",
"Shuang Li",
"Eric Cousineau",
"Siyuan Feng",
"Benjamin Burchfiel",
"Shuran Song"
] |
[
"Robot Manipulation",
"Video Generation"
] | 2025-07-01T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/rag-6dpose-retrieval-augmented-6d-pose
|
2506.18856
| null | null |
RAG-6DPose: Retrieval-Augmented 6D Pose Estimation via Leveraging CAD as Knowledge Base
|
Accurate 6D pose estimation is key for robotic manipulation, enabling precise object localization for tasks like grasping. We present RAG-6DPose, a retrieval-augmented approach that leverages 3D CAD models as a knowledge base by integrating both visual and geometric cues. Our RAG-6DPose roughly contains three stages: 1) Building a Multi-Modal CAD Knowledge Base by extracting 2D visual features from multi-view CAD rendered images and also attaching 3D points; 2) Retrieving relevant CAD features from the knowledge base based on the current query image via our ReSPC module; and 3) Incorporating retrieved CAD information to refine pose predictions via retrieval-augmented decoding. Experimental results on standard benchmarks and real-world robotic tasks demonstrate the effectiveness and robustness of our approach, particularly in handling occlusions and novel viewpoints. Supplementary material is available on our project website: https://sressers.github.io/RAG-6DPose .
| null |
https://arxiv.org/abs/2506.18856v1
|
https://arxiv.org/pdf/2506.18856v1.pdf
| null |
[
"Kuanning Wang",
"Yuqian Fu",
"Tianyu Wang",
"Yanwei Fu",
"Longfei Liang",
"Yu-Gang Jiang",
"Xiangyang Xue"
] |
[
"6D Pose Estimation",
"Object Localization",
"Pose Estimation",
"RAG",
"Retrieval"
] | 2025-06-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/snapmogen-human-motion-generation-from
|
2507.09122
| null | null |
SnapMoGen: Human Motion Generation from Expressive Texts
|
Text-to-motion generation has experienced remarkable progress in recent years. However, current approaches remain limited to synthesizing motion from short or general text prompts, primarily due to dataset constraints. This limitation undermines fine-grained controllability and generalization to unseen prompts. In this paper, we introduce SnapMoGen, a new text-motion dataset featuring high-quality motion capture data paired with accurate, expressive textual annotations. The dataset comprises 20K motion clips totaling 44 hours, accompanied by 122K detailed textual descriptions averaging 48 words per description (vs. 12 words of HumanML3D). Importantly, these motion clips preserve original temporal continuity as they were in long sequences, facilitating research in long-term motion generation and blending. We also improve upon previous generative masked modeling approaches. Our model, MoMask++, transforms motion into multi-scale token sequences that better exploit the token capacity, and learns to generate all tokens using a single generative masked transformer. MoMask++ achieves state-of-the-art performance on both HumanML3D and SnapMoGen benchmarks. Additionally, we demonstrate the ability to process casual user prompts by employing an LLM to reformat inputs to align with the expressivity and narration style of SnapMoGen. Project webpage: https://snap-research.github.io/SnapMoGen/
| null |
https://arxiv.org/abs/2507.09122v1
|
https://arxiv.org/pdf/2507.09122v1.pdf
| null |
[
"Chuan Guo",
"Inwoo Hwang",
"Jian Wang",
"Bing Zhou"
] |
[
"Motion Generation"
] | 2025-07-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/towards-privacy-preserving-and-personalized
|
2507.08878
| null | null |
Towards Privacy-Preserving and Personalized Smart Homes via Tailored Small Language Models
|
Large Language Models (LLMs) have showcased remarkable generalizability in language comprehension and hold significant potential to revolutionize human-computer interaction in smart homes. Existing LLM-based smart home assistants typically transmit user commands, along with user profiles and home configurations, to remote servers to obtain personalized services. However, users are increasingly concerned about the potential privacy leaks to the remote servers. To address this issue, we develop HomeLLaMA, an on-device assistant for privacy-preserving and personalized smart home serving with a tailored small language model (SLM). HomeLLaMA learns from cloud LLMs to deliver satisfactory responses and enable user-friendly interactions. Once deployed, HomeLLaMA facilitates proactive interactions by continuously updating local SLMs and user profiles. To further enhance user experience while protecting their privacy, we develop PrivShield to offer an optional privacy-preserving LLM-based smart home serving for those users, who are unsatisfied with local responses and willing to send less-sensitive queries to remote servers. For evaluation, we build a comprehensive benchmark DevFinder to assess the service quality. Extensive experiments and user studies (M=100) demonstrate that HomeLLaMA can provide personalized services while significantly enhancing user privacy.
| null |
https://arxiv.org/abs/2507.08878v1
|
https://arxiv.org/pdf/2507.08878v1.pdf
| null |
[
"Xinyu Huang",
"Leming Shen",
"Zijing Ma",
"Yuanqing Zheng"
] |
[
"Privacy Preserving",
"Small Language Model"
] | 2025-07-10T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/world-model-based-end-to-end-scene-generation
|
2507.12762
| null | null |
World Model-Based End-to-End Scene Generation for Accident Anticipation in Autonomous Driving
|
Reliable anticipation of traffic accidents is essential for advancing autonomous driving systems. However, this objective is limited by two fundamental challenges: the scarcity of diverse, high-quality training data and the frequent absence of crucial object-level cues due to environmental disruptions or sensor deficiencies. To tackle these issues, we propose a comprehensive framework combining generative scene augmentation with adaptive temporal reasoning. Specifically, we develop a video generation pipeline that utilizes a world model guided by domain-informed prompts to create high-resolution, statistically consistent driving scenarios, particularly enriching the coverage of edge cases and complex interactions. In parallel, we construct a dynamic prediction model that encodes spatio-temporal relationships through strengthened graph convolutions and dilated temporal operators, effectively addressing data incompleteness and transient visual noise. Furthermore, we release a new benchmark dataset designed to better capture diverse real-world driving risks. Extensive experiments on public and newly released datasets confirm that our framework enhances both the accuracy and lead time of accident anticipation, offering a robust solution to current data and modeling limitations in safety-critical autonomous driving applications.
| null |
https://arxiv.org/abs/2507.12762v1
|
https://arxiv.org/pdf/2507.12762v1.pdf
| null |
[
"Yanchen Guan",
"Haicheng Liao",
"Chengyue Wang",
"Xingcheng Liu",
"Jiaxun Zhang",
"Zhenning Li"
] |
[
"Accident Anticipation",
"Autonomous Driving",
"Scene Generation",
"Video Generation"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/orbis-overcoming-challenges-of-long-horizon
|
2507.13162
| null | null |
Orbis: Overcoming Challenges of Long-Horizon Prediction in Driving World Models
|
Existing world models for autonomous driving struggle with long-horizon generation and generalization to challenging scenarios. In this work, we develop a model using simple design choices, and without additional supervision or sensors, such as maps, depth, or multiple cameras. We show that our model yields state-of-the-art performance, despite having only 469M parameters and being trained on 280h of video data. It particularly stands out in difficult scenarios like turning maneuvers and urban traffic. We test whether discrete token models possibly have advantages over continuous models based on flow matching. To this end, we set up a hybrid tokenizer that is compatible with both approaches and allows for a side-by-side comparison. Our study concludes in favor of the continuous autoregressive model, which is less brittle on individual design choices and more powerful than the model built on discrete tokens. Code, models and qualitative results are publicly available at https://lmb-freiburg.github.io/orbis.github.io/.
| null |
https://arxiv.org/abs/2507.13162v1
|
https://arxiv.org/pdf/2507.13162v1.pdf
| null |
[
"Arian Mousakhan",
"Sudhanshu Mittal",
"Silvio Galesso",
"Karim Farid",
"Thomas Brox"
] |
[
"Autonomous Driving"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/founder-grounding-foundation-models-in-world
|
2507.12496
| null | null |
FOUNDER: Grounding Foundation Models in World Models for Open-Ended Embodied Decision Making
|
Foundation Models (FMs) and World Models (WMs) offer complementary strengths in task generalization at different levels. In this work, we propose FOUNDER, a framework that integrates the generalizable knowledge embedded in FMs with the dynamic modeling capabilities of WMs to enable open-ended task solving in embodied environments in a reward-free manner. We learn a mapping function that grounds FM representations in the WM state space, effectively inferring the agent's physical states in the world simulator from external observations. This mapping enables the learning of a goal-conditioned policy through imagination during behavior learning, with the mapped task serving as the goal state. Our method leverages the predicted temporal distance to the goal state as an informative reward signal. FOUNDER demonstrates superior performance on various multi-task offline visual control benchmarks, excelling in capturing the deep-level semantics of tasks specified by text or videos, particularly in scenarios involving complex observations or domain gaps where prior methods struggle. The consistency of our learned reward function with the ground-truth reward is also empirically validated. Our project website is https://sites.google.com/view/founder-rl.
| null |
https://arxiv.org/abs/2507.12496v1
|
https://arxiv.org/pdf/2507.12496v1.pdf
| null |
[
"Yucen Wang",
"Rui Yu",
"Shenghua Wan",
"Le Gan",
"De-Chuan Zhan"
] |
[
"Decision Making"
] | 2025-07-15T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/continual-reinforcement-learning-by-planning
|
2507.09177
| null | null |
Continual Reinforcement Learning by Planning with Online World Models
|
Continual reinforcement learning (CRL) refers to a naturalistic setting where an agent needs to endlessly evolve, by trial and error, to solve multiple tasks that are presented sequentially. One of the largest obstacles to CRL is that the agent may forget how to solve previous tasks when learning a new task, known as catastrophic forgetting. In this paper, we propose to address this challenge by planning with online world models. Specifically, we learn a Follow-The-Leader shallow model online to capture the world dynamics, in which we plan using model predictive control to solve a set of tasks specified by any reward functions. The online world model is immune to forgetting by construction with a proven regret bound of $\mathcal{O}(\sqrt{K^2D\log(T)})$ under mild assumptions. The planner searches actions solely based on the latest online model, thus forming a FTL Online Agent (OA) that updates incrementally. To assess OA, we further design Continual Bench, a dedicated environment for CRL, and compare with several strong baselines under the same model-planning algorithmic framework. The empirical results show that OA learns continuously to solve new tasks while not forgetting old skills, outperforming agents built on deep world models with various continual learning techniques.
| null |
https://arxiv.org/abs/2507.09177v1
|
https://arxiv.org/pdf/2507.09177v1.pdf
| null |
[
"Zichen Liu",
"Guoji Fu",
"Chao Du",
"Wee Sun Lee",
"Min Lin"
] |
[
"Continual Learning",
"Model Predictive Control",
"reinforcement-learning",
"Reinforcement Learning"
] | 2025-07-12T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/mindjourney-test-time-scaling-with-world
|
2507.12508
| null | null |
MindJourney: Test-Time Scaling with World Models for Spatial Reasoning
|
Spatial reasoning in 3D space is central to human cognition and indispensable for embodied tasks such as navigation and manipulation. However, state-of-the-art vision-language models (VLMs) struggle frequently with tasks as simple as anticipating how a scene will look after an egocentric motion: they perceive 2D images but lack an internal model of 3D dynamics. We therefore propose MindJourney, a test-time scaling framework that grants a VLM with this missing capability by coupling it to a controllable world model based on video diffusion. The VLM iteratively sketches a concise camera trajectory, while the world model synthesizes the corresponding view at each step. The VLM then reasons over this multi-view evidence gathered during the interactive exploration. Without any fine-tuning, our MindJourney achieves over an average 8% performance boost on the representative spatial reasoning benchmark SAT, showing that pairing VLMs with world models for test-time scaling offers a simple, plug-and-play route to robust 3D reasoning. Meanwhile, our method also improves upon the test-time inference VLMs trained through reinforcement learning, which demonstrates the potential of our method that utilizes world models for test-time scaling.
| null |
https://arxiv.org/abs/2507.12508v1
|
https://arxiv.org/pdf/2507.12508v1.pdf
| null |
[
"Yuncong Yang",
"Jiageng Liu",
"Zheyuan Zhang",
"Siyuan Zhou",
"Reuben Tan",
"Jianwei Yang",
"Yilun Du",
"Chuang Gan"
] |
[
"Spatial Reasoning"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/latent-policy-steering-with-embodiment
|
2507.13340
| null | null |
Latent Policy Steering with Embodiment-Agnostic Pretrained World Models
|
Learning visuomotor policies via imitation has proven effective across a wide range of robotic domains. However, the performance of these policies is heavily dependent on the number of training demonstrations, which requires expensive data collection in the real world. In this work, we aim to reduce data collection efforts when learning visuomotor robot policies by leveraging existing or cost-effective data from a wide range of embodiments, such as public robot datasets and the datasets of humans playing with objects (human data from play). Our approach leverages two key insights. First, we use optic flow as an embodiment-agnostic action representation to train a World Model (WM) across multi-embodiment datasets, and finetune it on a small amount of robot data from the target embodiment. Second, we develop a method, Latent Policy Steering (LPS), to improve the output of a behavior-cloned policy by searching in the latent space of the WM for better action sequences. In real world experiments, we observe significant improvements in the performance of policies trained with a small amount of data (over 50% relative improvement with 30 demonstrations and over 20% relative improvement with 50 demonstrations) by combining the policy with a WM pretrained on two thousand episodes sampled from the existing Open X-embodiment dataset across different robots or a cost-effective human dataset from play.
| null |
https://arxiv.org/abs/2507.13340v1
|
https://arxiv.org/pdf/2507.13340v1.pdf
| null |
[
"Yiqi Wang",
"Mrinal Verghese",
"Jeff Schneider"
] |
[] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/towards-ai-search-paradigm
|
2506.17188
| null | null |
Towards AI Search Paradigm
|
In this paper, we introduce the AI Search Paradigm, a comprehensive blueprint for next-generation search systems capable of emulating human information processing and decision-making. The paradigm employs a modular architecture of four LLM-powered agents (Master, Planner, Executor and Writer) that dynamically adapt to the full spectrum of information needs, from simple factual queries to complex multi-stage reasoning tasks. These agents collaborate dynamically through coordinated workflows to evaluate query complexity, decompose problems into executable plans, and orchestrate tool usage, task execution, and content synthesis. We systematically present key methodologies for realizing this paradigm, including task planning and tool integration, execution strategies, aligned and robust retrieval-augmented generation, and efficient LLM inference, spanning both algorithmic techniques and infrastructure-level optimizations. By providing an in-depth guide to these foundational components, this work aims to inform the development of trustworthy, adaptive, and scalable AI search systems.
| null |
https://arxiv.org/abs/2506.17188v1
|
https://arxiv.org/pdf/2506.17188v1.pdf
| null |
[
"Yuchen Li",
"Hengyi Cai",
"Rui Kong",
"Xinran Chen",
"Jiamin Chen",
"Jun Yang",
"Haojie Zhang",
"Jiayi Li",
"Jiayi Wu",
"Yiqun Chen",
"Changle Qu",
"Keyi Kong",
"Wenwen Ye",
"Lixin Su",
"Xinyu Ma",
"Long Xia",
"Daiting Shi",
"Jiashu Zhao",
"Haoyi Xiong",
"Shuaiqiang Wang",
"Dawei Yin"
] |
[
"Decision Making",
"Retrieval-augmented Generation",
"Task Planning"
] | 2025-06-20T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/rag-enhancing-retrieval-augmented-generation
|
2506.11555
| null | null |
RAG+: Enhancing Retrieval-Augmented Generation with Application-Aware Reasoning
|
The integration of external knowledge through Retrieval-Augmented Generation (RAG) has become foundational in enhancing large language models (LLMs) for knowledge-intensive tasks. However, existing RAG paradigms often overlook the cognitive step of applying knowledge, leaving a gap between retrieved facts and task-specific reasoning. In this work, we introduce RAG+, a principled and modular extension that explicitly incorporates application-aware reasoning into the RAG pipeline. RAG+ constructs a dual corpus consisting of knowledge and aligned application examples, created either manually or automatically, and retrieves both jointly during inference. This design enables LLMs not only to access relevant information but also to apply it within structured, goal-oriented reasoning processes. Experiments across mathematical, legal, and medical domains, conducted on multiple models, demonstrate that RAG+ consistently outperforms standard RAG variants, achieving average improvements of 3-5%, and peak gains up to 7.5% in complex scenarios. By bridging retrieval with actionable application, RAG+ advances a more cognitively grounded framework for knowledge integration, representing a step toward more interpretable and capable LLMs.
| null |
https://arxiv.org/abs/2506.11555v3
|
https://arxiv.org/pdf/2506.11555v3.pdf
| null |
[
"Yu Wang",
"Shiwan Zhao",
"Zhihu Wang",
"Ming Fan",
"Yubo Zhang",
"Xicheng Zhang",
"Zhengfan Wang",
"Heyuan Huang",
"Ting Liu"
] |
[
"RAG",
"Retrieval",
"Retrieval-augmented Generation"
] | 2025-06-13T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/eliciting-reasoning-in-language-models-with
|
2506.12115
| null | null |
Eliciting Reasoning in Language Models with Cognitive Tools
|
The recent advent of reasoning models like OpenAI's o1 was met with excited speculation by the AI community about the mechanisms underlying these capabilities in closed models, followed by a rush of replication efforts, particularly from the open source community. These speculations were largely settled by the demonstration from DeepSeek-R1 that chains-of-thought and reinforcement learning (RL) can effectively replicate reasoning on top of base LLMs. However, it remains valuable to explore alternative methods for theoretically eliciting reasoning that could help elucidate the underlying mechanisms, as well as providing additional methods that may offer complementary benefits. Here, we build on the long-standing literature in cognitive psychology and cognitive architectures, which postulates that reasoning arises from the orchestrated, sequential execution of a set of modular, predetermined cognitive operations. Crucially, we implement this key idea within a modern agentic tool-calling framework. In particular, we endow an LLM with a small set of "cognitive tools" encapsulating specific reasoning operations, each executed by the LLM itself. Surprisingly, this simple strategy results in considerable gains in performance on standard mathematical reasoning benchmarks compared to base LLMs, for both closed and open-weight models. For instance, providing our "cognitive tools" to GPT-4.1 increases its pass@1 performance on AIME2024 from 26.7% to 43.3%, bringing it very close to the performance of o1-preview. In addition to its practical implications, this demonstration contributes to the debate regarding the role of post-training methods in eliciting reasoning in LLMs versus the role of inherent capabilities acquired during pre-training, and whether post-training merely uncovers these latent abilities.
| null |
https://arxiv.org/abs/2506.12115v1
|
https://arxiv.org/pdf/2506.12115v1.pdf
| null |
[
"Brown Ebouky",
"Andrea Bartezzaghi",
"Mattia Rigotti"
] |
[
"Mathematical Reasoning",
"Reinforcement Learning (RL)"
] | 2025-06-13T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/text-to-lora-instant-transformer-adaption
|
2506.06105
| null | null |
Text-to-LoRA: Instant Transformer Adaption
|
While Foundation Models provide a general tool for rapid content creation, they regularly require task-specific adaptation. Traditionally, this exercise involves careful curation of datasets and repeated fine-tuning of the underlying model. Fine-tuning techniques enable practitioners to adapt foundation models for many new applications but require expensive and lengthy training while being notably sensitive to hyperparameter choices. To overcome these limitations, we introduce Text-to-LoRA (T2L), a model capable of adapting large language models (LLMs) on the fly solely based on a natural language description of the target task. T2L is a hypernetwork trained to construct LoRAs in a single inexpensive forward pass. After training T2L on a suite of 9 pre-trained LoRA adapters (GSM8K, Arc, etc.), we show that the ad-hoc reconstructed LoRA instances match the performance of task-specific adapters across the corresponding test sets. Furthermore, T2L can compress hundreds of LoRA instances and zero-shot generalize to entirely unseen tasks. This approach provides a significant step towards democratizing the specialization of foundation models and enables language-based adaptation with minimal compute requirements. Our code is available at https://github.com/SakanaAI/text-to-lora
| null |
https://arxiv.org/abs/2506.06105v2
|
https://arxiv.org/pdf/2506.06105v2.pdf
| null |
[
"Rujikorn Charakorn",
"Edoardo Cetin",
"Yujin Tang",
"Robert Tjarko Lange"
] |
[
"ARC",
"GSM8K"
] | 2025-06-06T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/knowledge-or-reasoning-a-close-look-at-how
|
2506.02126
| null | null |
Knowledge or Reasoning? A Close Look at How LLMs Think Across Domains
|
Recent advances in reasoning-enhanced Large Language Models such as OpenAI-o1/3 and DeepSeek-R1 have significantly improved performance on complex tasks. However, the quality and transparency of their internal reasoning processes remain underexplored. This work moves beyond the final-answer accuracy and investigates step-by-step reasoning in the medical and mathematical domains by explicitly decomposing the thinking trajectories into two parts: knowledge and reasoning. Specifically, we introduce a fine-grained evaluation framework that judges: (1) the correctness of knowledge used (measured by Knowledge Index (KI)) and (2) the quality of reasoning (measured by Information Gain (InfoGain)). Using this framework, we study R1-distilled and base Qwen models trained with supervised fine-tuning (SFT) and/or reinforcement learning (RL) in the medical and math domains. Three intriguing findings emerge: (1) The general reasoning abilities in R1-distilled models do not transfer effectively to the medical domain through either SFT or RL. (2) SFT raises final-answer accuracy in both domains, but often at the cost of reasoning quality: InfoGain drops by 38.9% on average compared with untrained models; In the medical domain, however, SFT remains crucial because domain knowledge is indispensable. (3) RL enhances medical reasoning by pruning inaccurate or irrelevant knowledge from reasoning paths, thereby improving both reasoning accuracy and knowledge correctness.
| null |
https://arxiv.org/abs/2506.02126v1
|
https://arxiv.org/pdf/2506.02126v1.pdf
| null |
[
"Juncheng Wu",
"Sheng Liu",
"Haoqin Tu",
"Hang Yu",
"Xiaoke Huang",
"James Zou",
"Cihang Xie",
"Yuyin Zhou"
] |
[
"Math",
"Reinforcement Learning (RL)"
] | 2025-06-02T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/memos-an-operating-system-for-memory
|
2505.22101
| null | null |
MemOS: An Operating System for Memory-Augmented Generation (MAG) in Large Language Models
|
Large Language Models (LLMs) have emerged as foundational infrastructure in the pursuit of Artificial General Intelligence (AGI). Despite their remarkable capabilities in language perception and generation, current LLMs fundamentally lack a unified and structured architecture for handling memory. They primarily rely on parametric memory (knowledge encoded in model weights) and ephemeral activation memory (context-limited runtime states). While emerging methods like Retrieval-Augmented Generation (RAG) incorporate plaintext memory, they lack lifecycle management and multi-modal integration, limiting their capacity for long-term knowledge evolution. To address this, we introduce MemOS, a memory operating system designed for LLMs that, for the first time, elevates memory to a first-class operational resource. It builds unified mechanisms for representation, organization, and governance across three core memory types: parametric, activation, and plaintext. At its core is the MemCube, a standardized memory abstraction that enables tracking, fusion, and migration of heterogeneous memory, while offering structured, traceable access across tasks and contexts. MemOS establishes a memory-centric execution framework with strong controllability, adaptability, and evolvability. It fills a critical gap in current LLM infrastructure and lays the groundwork for continual adaptation, personalized intelligence, and cross-platform coordination in next-generation intelligent systems.
| null |
https://arxiv.org/abs/2505.22101v1
|
https://arxiv.org/pdf/2505.22101v1.pdf
| null |
[
"Zhiyu Li",
"Shichao Song",
"Hanyu Wang",
"Simin Niu",
"Ding Chen",
"Jiawei Yang",
"Chenyang Xi",
"Huayi Lai",
"Jihao Zhao",
"Yezhaohui Wang",
"Junpeng Ren",
"Zehao Lin",
"Jiahao Huo",
"Tianyi Chen",
"Kai Chen",
"Kehang Li",
"Zhiqiang Yin",
"Qingchen Yu",
"Bo Tang",
"Hongkang Yang",
"Zhi-Qin John Xu",
"Feiyu Xiong"
] |
[
"RAG",
"Retrieval-augmented Generation"
] | 2025-05-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/arc-agi-2-a-new-challenge-for-frontier-ai
|
2505.11831
| null | null |
ARC-AGI-2: A New Challenge for Frontier AI Reasoning Systems
|
The Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI), introduced in 2019, established a challenging benchmark for evaluating the general fluid intelligence of artificial systems via a set of unique, novel tasks only requiring minimal prior knowledge. While ARC-AGI has spurred significant research activity over the past five years, recent AI progress calls for benchmarks capable of finer-grained evaluation at higher levels of cognitive complexity. We introduce ARC-AGI-2, an upgraded version of the benchmark. ARC-AGI-2 preserves the input-output pair task format of its predecessor, ensuring continuity for researchers. It incorporates a newly curated and expanded set of tasks specifically designed to provide a more granular signal to assess abstract reasoning and problem-solving abilities at higher levels of fluid intelligence. To contextualize the difficulty and characteristics of ARC-AGI-2, we present extensive results from human testing, providing a robust baseline that highlights the benchmark's accessibility to human intelligence, yet difficulty for current AI systems. ARC-AGI-2 aims to serve as a next-generation tool for rigorously measuring progress towards more general and human-like AI capabilities.
| null |
https://arxiv.org/abs/2505.11831v1
|
https://arxiv.org/pdf/2505.11831v1.pdf
| null |
[
"Francois Chollet",
"Mike Knoop",
"Gregory Kamradt",
"Bryan Landers",
"Henry Pinkard"
] |
[
"ARC"
] | 2025-05-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/simulate-refocus-and-ensemble-an-attention
|
2507.12851
| null | null |
Simulate, Refocus and Ensemble: An Attention-Refocusing Scheme for Domain Generalization
|
Domain generalization (DG) aims to learn a model from source domains and apply it to unseen target domains with out-of-distribution data. Owing to CLIP's strong ability to encode semantic concepts, it has attracted increasing interest in domain generalization. However, CLIP often struggles to focus on task-relevant regions across domains, i.e., domain-invariant regions, resulting in suboptimal performance on unseen target domains. To address this challenge, we propose an attention-refocusing scheme, called Simulate, Refocus and Ensemble (SRE), which learns to reduce the domain shift by aligning the attention maps in CLIP via attention refocusing. SRE first simulates domain shifts by performing augmentation on the source data to generate simulated target domains. SRE then learns to reduce the domain shifts by refocusing the attention in CLIP between the source and simulated target domains. Finally, SRE utilizes ensemble learning to enhance the ability to capture domain-invariant attention maps between the source data and the simulated target data. Extensive experimental results on several datasets demonstrate that SRE generally achieves better results than state-of-the-art methods. The code is available at: https://github.com/bitPrincy/SRE-DG.
| null |
https://arxiv.org/abs/2507.12851v1
|
https://arxiv.org/pdf/2507.12851v1.pdf
| null |
[
"Ziyi Wang",
"Zhi Gao",
"Jin Chen",
"Qingjie Zhao",
"Xinxiao wu",
"Jiebo Luo"
] |
[
"Domain Generalization",
"Ensemble Learning"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/glad-generalizable-tuning-for-vision-language
|
2507.13089
| null | null |
GLAD: Generalizable Tuning for Vision-Language Models
|
Pre-trained vision-language models, such as CLIP, show impressive zero-shot recognition ability and can be easily transferred to specific downstream tasks via prompt tuning, even with limited training data. However, existing prompt tuning methods face two main challenges: (1) In few-shot scenarios, data scarcity often leads to overfitting, making the model sensitive to changes in the input domain. (2) To mitigate overfitting, these methods typically rely on complex task-specific model architectures and sensitive hyperparameter tuning, severely restricting their general applicability. To address these issues, we propose a simpler and more general framework called GLAD (Generalizable LoRA tuning with RegulArized GraDient). We show that merely applying LoRA achieves performance in downstream tasks comparable to current state-of-the-art prompt-based methods. While LoRA is effective and easy to use, it remains susceptible to overfitting in few-shot learning scenarios. To mitigate this risk, we introduce a gradient-based regularization technique. This technique effectively steers the optimization trajectory, encouraging the model to find a more stable parameter region that is robust to variations in data distribution. Through extensive experiments conducted on 15 benchmark datasets, we demonstrate that GLAD outperforms previous tuning approaches in terms of base-to-novel class generalization, image domain generalization, and cross-dataset generalization. The code will be publicly available.
| null |
https://arxiv.org/abs/2507.13089v1
|
https://arxiv.org/pdf/2507.13089v1.pdf
| null |
[
"Yuqi Peng",
"Pengfei Wang",
"Jianzhuang Liu",
"Shifeng Chen"
] |
[
"Domain Generalization",
"Few-Shot Learning",
"Zero-Shot Learning"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/x-fusion-introducing-new-modality-to-frozen
|
2504.20996
| null | null |
X-Fusion: Introducing New Modality to Frozen Large Language Models
|
We propose X-Fusion, a framework that extends pretrained Large Language Models (LLMs) for multimodal tasks while preserving their language capabilities. X-Fusion employs a dual-tower design with modality-specific weights, keeping the LLM's parameters frozen while integrating vision-specific information for both understanding and generation. Our experiments demonstrate that X-Fusion consistently outperforms alternative architectures on both image-to-text and text-to-image tasks. We find that incorporating understanding-focused data improves generation quality, reducing image data noise enhances overall performance, and feature alignment accelerates convergence for smaller models but has minimal impact on larger ones. Our findings provide valuable insights into building efficient unified multimodal models.
| null |
https://arxiv.org/abs/2504.20996v1
|
https://arxiv.org/pdf/2504.20996v1.pdf
| null |
[
"Sicheng Mo",
"Thao Nguyen",
"Xun Huang",
"Siddharth Srinivasan Iyer",
"Yijun Li",
"Yuchen Liu",
"Abhishek Tandon",
"Eli Shechtman",
"Krishna Kumar Singh",
"Yong Jae Lee",
"Bolei Zhou",
"Yuheng Li"
] |
[
"Image to text"
] | 2025-04-29T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/loopserve-an-adaptive-dual-phase-llm
|
2507.13681
| null | null |
LoopServe: An Adaptive Dual-phase LLM Inference Acceleration System for Multi-Turn Dialogues
|
Multi-turn dialogues are essential in many real-world applications of large language models, such as chatbots and virtual assistants. As conversation histories become longer, existing large language models face increasing computational and memory challenges, which hinder their ability to provide efficient and responsive interactions. Most current acceleration methods either compress the context or optimize key value caching, but they often rely on fixed or position-based heuristics that do not adapt well to the dynamic and unpredictable patterns found in actual multi-turn conversations. In this paper, we present LoopServe, an adaptive dual-phase inference acceleration framework for large language models in multi-turn dialogues. LoopServe introduces two main innovations. First, it performs online sparsification during the prefilling phase by dynamically selecting the most important parts of the attention matrix for each new input. Second, it uses progressive key value compression during decoding by adaptively maintaining a relevant and efficient cache based on the most recently generated output tokens. We also propose a \href{https://huggingface.co/datasets/TreeAILab/Multi-turn_Long-context_Benchmark_for_LLMs}{new benchmark} with eleven multi-turn datasets that reflect realistic query positions and conversational dependencies. Extensive experiments demonstrate that LoopServe consistently achieves superior effectiveness compared to existing baselines and significantly accelerates LLM inference across a wide range of long-context dialogue tasks.
| null |
https://arxiv.org/abs/2507.13681v1
|
https://arxiv.org/pdf/2507.13681v1.pdf
| null |
[
"Haoyang Li",
"Zhanchao Xu",
"Yiming Li",
"Xuejia Chen",
"Darian Li",
"Anxin Tian",
"Qingfa Xiao",
"Cheng Deng",
"Jun Wang",
"Qing Li",
"Lei Chen",
"Mingxuan Yuan"
] |
[] | 2025-07-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/ip2-entity-guided-interest-probing-for
|
2507.13622
| null | null |
IP2: Entity-Guided Interest Probing for Personalized News Recommendation
|
News recommender systems aim to provide personalized news reading experiences for users based on their reading history. Behavioral science studies suggest that screen-based news reading contains three successive steps: scanning, title reading, and then clicking. Adhering to these steps, we find that intra-news entity interest dominates the scanning stage, while the inter-news entity interest guides title reading and influences click decisions. Unfortunately, current methods overlook the unique utility of entities in news recommendation. To this end, we propose a novel method called IP2 to probe entity-guided reading interest at both intra- and inter-news levels. At the intra-news level, a Transformer-based entity encoder is devised to aggregate mentioned entities in the news title into one signature entity. Then, a signature entity-title contrastive pre-training is adopted to initialize entities with proper meanings using the news story context, which in the meantime facilitates us to probe for intra-news entity interest. As for the inter-news level, a dual tower user encoder is presented to capture inter-news reading interest from both the title meaning and entity sides. In addition to highlighting the contribution of inter-news entity guidance, a cross-tower attention link is adopted to calibrate title reading interest using inter-news entity interest, thus further aligning with real-world behavior. Extensive experiments on two real-world datasets demonstrate that our IP2 achieves state-of-the-art performance in news recommendation.
| null |
https://arxiv.org/abs/2507.13622v1
|
https://arxiv.org/pdf/2507.13622v1.pdf
| null |
[
"Youlin Wu",
"Yuanyuan Sun",
"Xiaokun Zhang",
"Haoxi Zhan",
"Bo Xu",
"Liang Yang",
"Hongfei Lin"
] |
[
"News Recommendation",
"Recommendation Systems"
] | 2025-07-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/s-2m-2-scalable-stereo-matching-model-for
|
2507.13229
| null | null |
$S^2M^2$: Scalable Stereo Matching Model for Reliable Depth Estimation
|
The pursuit of a generalizable stereo matching model, capable of performing across varying resolutions and disparity ranges without dataset-specific fine-tuning, has revealed a fundamental trade-off. Iterative local search methods achieve high scores on constrained benchmarks, but their core mechanism inherently limits the global consistency required for true generalization. On the other hand, global matching architectures, while theoretically more robust, have been historically rendered infeasible by prohibitive computational and memory costs. We resolve this dilemma with $S^2M^2$: a global matching architecture that achieves both state-of-the-art accuracy and high efficiency without relying on cost volume filtering or deep refinement stacks. Our design integrates a multi-resolution transformer for robust long-range correspondence, trained with a novel loss function that concentrates probability on feasible matches. This approach enables a more robust joint estimation of disparity, occlusion, and confidence. $S^2M^2$ establishes a new state of the art on the Middlebury v3 and ETH3D benchmarks, significantly outperforming prior methods across most metrics while reconstructing high-quality details with competitive efficiency.
| null |
https://arxiv.org/abs/2507.13229v1
|
https://arxiv.org/pdf/2507.13229v1.pdf
| null |
[
"Junhong Min",
"Youngpil Jeon",
"Jimin Kim",
"Minyong Choi"
] |
[
"Depth Estimation",
"Stereo Matching"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |