Dataset schema (column: dtype, observed value range):
paper_url: stringlengths, 35 to 81
arxiv_id: stringlengths, 6 to 35
nips_id: float64
openreview_id: stringlengths, 9 to 93
title: stringlengths, 1 to 1.02k
abstract: stringlengths, 0 to 56.5k
short_abstract: stringlengths, 0 to 1.95k
url_abs: stringlengths, 16 to 996
url_pdf: stringlengths, 16 to 996
proceeding: stringlengths, 7 to 1.03k
authors: listlengths, 0 to 3.31k
tasks: listlengths, 0 to 147
date: timestamp[ns]date, 1951-09-01 00:00:00 to 2222-12-22 00:00:00
conference_url_abs: stringlengths, 16 to 199
conference_url_pdf: stringlengths, 21 to 200
conference: stringlengths, 2 to 47
reproduces_paper: stringclasses, 22 values
methods: listlengths, 0 to 7.5k
https://paperswithcode.com/paper/beyond-scalars-zonotope-valued-utility-for
2507.05844
null
null
Beyond Scalars: Zonotope-Valued Utility for Representation of Multidimensional Incomplete Preferences
In this paper, I propose a new framework for representing multidimensional incomplete preferences through zonotope-valued utilities, addressing the shortcomings of traditional scalar and vector-based models in decision theory. Traditional approaches assign single numerical values to alternatives, failing to capture the complexity of preferences where alternatives remain incomparable due to conflicting criteria across multiple dimensions. Our method maps each alternative to a zonotope, a convex geometric object in \(\mathbb{R}^m\) formed by Minkowski sums of intervals, which encapsulates the multidimensional structure of preferences with mathematical rigor. The set-valued nature of these payoffs stems from multiple sources, including non-probabilistic uncertainty, such as imprecise utility evaluation due to incomplete information about criteria weights, and probabilistic uncertainty arising from stochastic decision environments. By decomposing preference relations into interval orders and utilizing an extended set difference operator, we establish a rigorous axiomatization that defines preference as one alternative's zonotope differing from another's within the non-negative orthant of \(\mathbb{R}^m\). This framework generalizes existing representations and provides a visually intuitive and theoretically robust tool for modeling trade-offs across dimensions when preferences are incomparable.
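A minimal numpy sketch of a zonotope-based dominance check, under my own simplifying assumptions: each zonotope is given by a center and a generator matrix with coefficients in [-1, 1], and one alternative is (conservatively) preferred when every point of its zonotope dominates every point of the other's, i.e. the componentwise difference lies in the non-negative orthant. This is a sufficient test for illustration, not the paper's extended set difference operator or axiomatization.

```python
import numpy as np

def zonotope_bounds(center, generators):
    """Componentwise lower/upper bounds of Z = {c + G a : a in [-1, 1]^k}."""
    c = np.asarray(center, dtype=float)
    G = np.asarray(generators, dtype=float)      # shape (m, k)
    radius = np.abs(G).sum(axis=1)               # interval half-width per dimension
    return c - radius, c + radius

def conservatively_preferred(zono_a, zono_b):
    """Sufficient test: every point of A dominates every point of B componentwise."""
    lo_a, _ = zonotope_bounds(*zono_a)
    _, hi_b = zonotope_bounds(*zono_b)
    return bool(np.all(lo_a >= hi_b))

# Two alternatives scored on m = 2 criteria, each with interval-valued payoffs.
A = (np.array([4.0, 5.0]), np.array([[0.5, 0.0], [0.0, 0.5]]))
B = (np.array([2.0, 3.0]), np.array([[0.5, 0.0], [0.0, 1.0]]))
print(conservatively_preferred(A, B))   # True: A dominates B on both dimensions
print(conservatively_preferred(B, A))   # False: the pair stays incomparable the other way
```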
null
https://arxiv.org/abs/2507.05844v2
https://arxiv.org/pdf/2507.05844v2.pdf
null
[ "Behrooz Moosavi Ramezanzadeh" ]
[]
2025-07-08T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/multi-scale-network-dynamics-and-systemic
2507.08065
null
null
Multi-Scale Network Dynamics and Systemic Risk: A Model Context Protocol Approach to Financial Markets
This paper introduces a novel framework for analyzing systemic risk in financial markets through multi-scale network dynamics using Model Context Protocol (MCP) for agent communication. We develop an integrated approach that combines transfer entropy networks, agent-based modeling, and wavelet decomposition to capture information flows across temporal scales, implemented in the MCPFM (Model Context Protocol Financial Markets) R package. Our methodology enables heterogeneous financial agents, including high-frequency traders, market makers, institutional investors, and regulators, to communicate through structured protocols while maintaining realistic market microstructure. The empirical analysis demonstrates that our multi-scale approach reveals previously hidden systemic risk patterns, with the proposed systemic risk index achieving superior early warning capabilities compared to traditional measures. The framework provides new insights for macroprudential policy design and regulatory intervention strategies. The complete implementation is available as an open-source R package at https://github.com/avishekb9/MCPFM to facilitate reproducible research and practical applications.
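The transfer entropy networks mentioned above quantify directed information flow between return series; the paper's implementation lives in the MCPFM R package. Below is a minimal, illustrative Python estimator for a single directed pair on sign-discretized returns; the function name and the binary discretization are my own choices, not the package's API.

```python
import numpy as np

def transfer_entropy_binary(x, y):
    """Estimate TE(X -> Y) in bits for two series discretized to binary symbols (sign of value)."""
    xs = (np.asarray(x) > 0).astype(int)
    ys = (np.asarray(y) > 0).astype(int)
    y_next, y_now, x_now = ys[1:], ys[:-1], xs[:-1]
    te = 0.0
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                p_abc = np.mean((y_next == a) & (y_now == b) & (x_now == c))
                if p_abc == 0:
                    continue
                p_bc = np.mean((y_now == b) & (x_now == c))
                p_ab = np.mean((y_next == a) & (y_now == b))
                p_b = np.mean(y_now == b)
                # TE = sum p(a,b,c) * log2[ p(a|b,c) / p(a|b) ]
                te += p_abc * np.log2((p_abc / p_bc) / (p_ab / p_b))
    return te

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = 0.8 * np.concatenate(([0.0], x[:-1])) + 0.6 * rng.normal(size=5000)  # y lags x
print(transfer_entropy_binary(x, y), transfer_entropy_binary(y, x))       # first value is larger
```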
This paper introduces a novel framework for analyzing systemic risk in financial markets through multi-scale network dynamics using Model Context Protocol (MCP) for agent communication.
https://arxiv.org/abs/2507.08065v1
https://arxiv.org/pdf/2507.08065v1.pdf
null
[ "Avishek Bhandari" ]
[]
2025-07-10T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/whisperkit-on-device-real-time-asr-with
2507.10860
null
null
WhisperKit: On-device Real-time ASR with Billion-Scale Transformers
Real-time Automatic Speech Recognition (ASR) is a fundamental building block for many commercial applications of ML, including live captioning, dictation, meeting transcriptions, and medical scribes. Accuracy and latency are the most important factors when companies select a system to deploy. We present WhisperKit, an optimized on-device inference system for real-time ASR that significantly outperforms leading cloud-based systems. We benchmark against server-side systems that deploy a diverse set of models, including a frontier model (OpenAI gpt-4o-transcribe), a proprietary model (Deepgram nova-3), and an open-source model (Fireworks large-v3-turbo). Our results show that WhisperKit matches the lowest latency at 0.46s while achieving the highest accuracy (2.2% WER). The optimizations behind the WhisperKit system are described in detail in this paper.
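The accuracy figure is word error rate (WER). For reference, a standard word-level edit-distance WER computation looks like the sketch below; this is illustrative and not WhisperKit's evaluation harness.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("turn the lights off", "turn lights off please"))  # 0.5
```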
null
https://arxiv.org/abs/2507.10860v1
https://arxiv.org/pdf/2507.10860v1.pdf
null
[ "Atila Orhon", "Arda Okan", "Berkin Durmus", "Zach Nagengast", "Eduardo Pacheco" ]
[ "Automatic Speech Recognition", "Automatic Speech Recognition (ASR)", "speech-recognition", "Speech Recognition" ]
2025-07-14T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/safe-finding-sparse-and-flat-minima-to
2506.06866
null
null
SAFE: Finding Sparse and Flat Minima to Improve Pruning
Sparsifying neural networks often suffers from seemingly inevitable performance degradation, and it remains challenging to restore the original performance despite much recent progress. Motivated by recent studies in robust optimization, we aim to tackle this problem by finding subnetworks that are both sparse and flat at the same time. Specifically, we formulate pruning as a sparsity-constrained optimization problem where flatness is encouraged as an objective. We solve it explicitly via an augmented Lagrange dual approach and extend it further by proposing a generalized projection operation, resulting in novel pruning methods called SAFE and its extension, SAFE$^+$. Extensive evaluations on standard image classification and language modeling tasks reveal that SAFE consistently yields sparse networks with improved generalization performance, which compares competitively to well-established baselines. In addition, SAFE demonstrates resilience to noisy data, making it well-suited for real-world conditions.
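The sparsity constraint in such formulations is typically enforced by projecting the weights onto the set ||w||_0 <= k, i.e. keeping the k largest-magnitude entries. A minimal numpy sketch of that projection step only; SAFE's flatness objective, augmented Lagrange dual updates, and generalized projection are omitted, so this is just the basic building block.

```python
import numpy as np

def project_topk(weights, k):
    """Project onto {w : ||w||_0 <= k} by keeping the k largest magnitudes and zeroing the rest."""
    w = np.asarray(weights, dtype=float).copy()
    if k >= w.size:
        return w
    keep = np.argsort(np.abs(w))[-k:] if k > 0 else np.array([], dtype=int)
    mask = np.zeros(w.size, dtype=bool)
    mask[keep] = True
    w[~mask] = 0.0
    return w

w = np.array([0.1, -2.0, 0.03, 1.5, -0.4])
print(project_topk(w, k=2))   # keeps -2.0 and 1.5, zeroes the rest
```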
Sparsifying neural networks often suffers from seemingly inevitable performance degradation, and it remains challenging to restore the original performance despite much recent progress.
https://arxiv.org/abs/2506.06866v2
https://arxiv.org/pdf/2506.06866v2.pdf
null
[ "Dongyeop Lee", "Kwanhee Lee", "Jinseok Chung", "Namhoon Lee" ]
[ "image-classification", "Image Classification", "Language Modeling", "Language Modelling" ]
2025-06-07T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Pruning", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Model Compression", "parent": null }, "name": "Pruning", "source_title": "Pruning Filters for Efficient ConvNets", "source_url": "http://arxiv.org/abs/1608.08710v3" } ]
https://paperswithcode.com/paper/conformation-aware-structure-prediction-of
2507.09054
null
null
Conformation-Aware Structure Prediction of Antigen-Recognizing Immune Proteins
We introduce Ibex, a pan-immunoglobulin structure prediction model that achieves state-of-the-art accuracy in modeling the variable domains of antibodies, nanobodies, and T-cell receptors. Unlike previous approaches, Ibex explicitly distinguishes between bound and unbound protein conformations by training on labeled apo and holo structural pairs, enabling accurate prediction of both states at inference time. Using a comprehensive private dataset of high-resolution antibody structures, we demonstrate superior out-of-distribution performance compared to existing specialized and general protein structure prediction tools. Ibex combines the accuracy of cutting-edge models with significantly reduced computational requirements, providing a robust foundation for accelerating large molecule design and therapeutic development.
We introduce Ibex, a pan-immunoglobulin structure prediction model that achieves state-of-the-art accuracy in modeling the variable domains of antibodies, nanobodies, and T-cell receptors.
https://arxiv.org/abs/2507.09054v1
https://arxiv.org/pdf/2507.09054v1.pdf
null
[ "Frédéric A. Dreyer", "Jan Ludwiczak", "Karolis Martinkus", "Brennan Abanades", "Robert G. Alberstein", "Pan Kessel", "Pranav Rao", "Jae Hyeon Lee", "Richard Bonneau", "Andrew M. Watkins", "Franziska Seeger" ]
[ "Prediction", "Protein Structure Prediction" ]
2025-07-11T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/from-pixels-to-damage-severity-estimating
2507.02781
null
null
From Pixels to Damage Severity: Estimating Earthquake Impacts Using Semantic Segmentation of Social Media Images
In the aftermath of earthquakes, social media images have become a crucial resource for disaster reconnaissance, providing immediate insights into the extent of damage. Traditional approaches to damage severity assessment in post-earthquake social media images often rely on classification methods, which are inherently subjective and incapable of accounting for the varying extents of damage within an image. Addressing these limitations, this study proposes a novel approach by framing damage severity assessment as a semantic segmentation problem, aiming for a more objective analysis of damage in earthquake-affected areas. The methodology involves the construction of a segmented damage severity dataset, categorizing damage into three degrees: undamaged structures, damaged structures, and debris. Utilizing this dataset, the study fine-tunes a SegFormer model to generate damage severity segmentations for post-earthquake social media images. Furthermore, a new damage severity scoring system is introduced, quantifying damage by considering the varying degrees of damage across different areas within images, adjusted for depth estimation. The application of this approach allows for the quantification of damage severity in social media images in a more objective and comprehensive manner. By providing a nuanced understanding of damage, this study enhances the ability to offer precise guidance to disaster reconnaissance teams, facilitating more effective and targeted response efforts in the aftermath of earthquakes.
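A hypothetical sketch of the kind of depth-adjusted severity score the abstract describes: per-pixel class weights (undamaged = 0, damaged = 1, debris = 2) averaged with inverse-depth weighting so that nearer regions count more. The class weights and the weighting scheme are assumptions for illustration, not the paper's exact formula.

```python
import numpy as np

# Hypothetical class weights: 0 = undamaged structure, 1 = damaged structure, 2 = debris.
CLASS_WEIGHT = np.array([0.0, 1.0, 2.0])

def damage_severity_score(seg_mask, depth_map, eps=1e-6):
    """Depth-adjusted mean damage weight over the segmentation mask (assumed scheme)."""
    weights = CLASS_WEIGHT[seg_mask]                   # per-pixel damage weight
    inv_depth = 1.0 / (np.asarray(depth_map) + eps)    # nearer pixels get larger weight
    return float((weights * inv_depth).sum() / inv_depth.sum())

seg = np.array([[0, 1], [2, 2]])                       # toy 2x2 segmentation output
depth = np.array([[2.0, 2.0], [1.0, 1.0]])             # debris pixels are closer to the camera
print(round(damage_severity_score(seg, depth), 3))     # 1.5
```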
null
https://arxiv.org/abs/2507.02781v1
https://arxiv.org/pdf/2507.02781v1.pdf
null
[ "Danrong Zhang", "Huili Huang", "N. Simrill Smith", "Nimisha Roy", "J. David Frost" ]
[ "Depth Estimation", "Semantic Segmentation" ]
2025-07-03T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "“How do I get a full refund from Expedia?\r\nHow do I get a full refund from Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Quick Help & Exclusive Travel Deals!Have a question about your booking? Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to get live, expert support and unlock exclusive best deal discounts on flights, hotels, and vacation packages. Get clear answers fast and access limited-time travel offers that make your next trip easier, cheaper, and stress-free. Don’t wait—call today and save!\r\n\r\n\r\n“How do I get a full refund from Expedia?\r\nHow do I get a full refund from Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Quick Help & Exclusive Travel Deals!Have a question about your booking? Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to get live, expert support and unlock exclusive best deal discounts on flights, hotels, and vacation packages. Get clear answers fast and access limited-time travel offers that make your next trip easier, cheaper, and stress-free. Don’t wait—call today and save!", "full_name": "Refunds@Expedia|||How do I get a full refund from Expedia?", "introduced_year": 2000, "main_collection": { "area": "General", "description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.", "name": "Activation Functions", "parent": null }, "name": "Refunds@Expedia|||How do I get a full refund from Expedia?", "source_title": "Gaussian Error Linear Units (GELUs)", "source_url": "https://arxiv.org/abs/1606.08415v5" }, { "code_snippet_url": "", "description": "**Mix-FFN** is a feedforward layer used in the [SegFormer](https://paperswithcode.com/method/segformer) architecture. [ViT](https://www.paperswithcode.com/method/vision-transformer) uses [positional encoding](https://paperswithcode.com/methods/category/position-embeddings) (PE) to introduce the location information. However, the resolution of $\\mathrm{PE}$ is fixed. Therefore, when the test resolution is different from the training one, the positional code needs to be interpolated and this often leads to dropped accuracy. 
To alleviate this problem, [CPVT](https://www.paperswithcode.com/method/cpvt) uses $3 \\times 3$ Conv together with the PE to implement a data-driven PE. The authors of Mix-FFN argue that positional encoding is actually not necessary for semantic segmentation. Instead, they use Mix-FFN which considers the effect of zero padding to leak location information, by directly using a $3 \\times 3$ Conv in the feed-forward network (FFN). Mix-FFN can be formulated as:\r\n\r\n$$\r\n\\mathbf{x}\\_{\\text {out }}=\\operatorname{MLP}\\left(\\operatorname{GELU}\\left(\\operatorname{Conv}\\_{3 \\times 3}\\left(\\operatorname{MLP}\\left(\\mathbf{x}\\_{i n}\\right)\\right)\\right)\\right)+\\mathbf{x}\\_{i n}\r\n$$\r\n\r\nwhere $\\mathbf{x}\\_{i n}$ is the feature from a self-attention module. Mix-FFN mixes a $3 \\times 3$ convolution and an MLP into each FFN.", "full_name": "Mix-FFN", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.", "name": "Feedforward Networks", "parent": null }, "name": "Mix-FFN", "source_title": "SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers", "source_url": "https://arxiv.org/abs/2105.15203v3" }, { "code_snippet_url": "", "description": "**SegFormer** is a [Transformer](https://paperswithcode.com/methods/category/transformers)-based framework for semantic segmentation that unifies Transformers with lightweight [multilayer perceptron](https://paperswithcode.com/method/feedforward-network) (MLP) decoders. SegFormer has two appealing features: 1) SegFormer comprises a novel hierarchically structured Transformer encoder which outputs multiscale features. It does not need positional encoding, thereby avoiding the interpolation of positional codes which leads to decreased performance when the testing resolution differs from training. 2) SegFormer avoids complex decoders. The proposed MLP decoder aggregates information from different layers, and thus combining both local attention and global attention to render powerful representations.", "full_name": "SegFormer", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Semantic Segmentation Models** are a class of methods that address the task of semantically segmenting an image into different object classes. Below you can find a continuously updating list of semantic segmentation models. ", "name": "Semantic Segmentation Models", "parent": null }, "name": "SegFormer", "source_title": "SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers", "source_url": "https://arxiv.org/abs/2105.15203v3" } ]
https://paperswithcode.com/paper/fairness-is-not-enough-auditing-competence
2507.11548
null
null
Fairness Is Not Enough: Auditing Competence and Intersectional Bias in AI-powered Resume Screening
The increasing use of generative AI for resume screening is predicated on the assumption that it offers an unbiased alternative to biased human decision-making. However, this belief fails to address a critical question: are these AI systems fundamentally competent at the evaluative tasks they are meant to perform? This study investigates the question of competence through a two-part audit of eight major AI platforms. Experiment 1 confirmed complex, contextual racial and gender biases, with some models penalizing candidates merely for the presence of demographic signals. Experiment 2, which evaluated core competence, provided a critical insight: some models that appeared unbiased were, in fact, incapable of performing a substantive evaluation, relying instead on superficial keyword matching. This paper introduces the "Illusion of Neutrality" to describe this phenomenon, where an apparent lack of bias is merely a symptom of a model's inability to make meaningful judgments. This study recommends that organizations and regulators adopt a dual-validation framework, auditing AI hiring tools for both demographic bias and demonstrable competence to ensure they are both equitable and effective.
The increasing use of generative AI for resume screening is predicated on the assumption that it offers an unbiased alternative to biased human decision-making.
https://arxiv.org/abs/2507.11548v1
https://arxiv.org/pdf/2507.11548v1.pdf
null
[ "Kevin T Webster" ]
[ "Fairness" ]
2025-07-11T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Please enter a description about the method here", "full_name": "ADaptive gradient method with the OPTimal convergence rate", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.", "name": "Stochastic Optimization", "parent": "Optimization" }, "name": "ADOPT", "source_title": "ADOPT: Modified Adam Can Converge with Any $β_2$ with the Optimal Rate", "source_url": "https://arxiv.org/abs/2411.02853v3" } ]
https://paperswithcode.com/paper/developing-visual-augmented-q-a-system-using
2507.12378
null
null
Developing Visual Augmented Q&A System using Scalable Vision Embedding Retrieval & Late Interaction Re-ranker
Traditional information extraction systems face challenges with text-only language models, as they do not consider infographics (visual elements of information) such as tables, charts, and images that are often used to convey complex information to readers. Multimodal LLMs (MLLMs) face a needle-in-the-haystack problem, i.e., either a long context length or a substantial number of documents as the search space. The late interaction mechanism over visual language models has shown state-of-the-art performance in retrieval-based vision-augmented Q&A tasks. There remain a few challenges in using it for RAG-based multi-modal Q&A. Firstly, many popular and widely adopted vector databases do not support native multi-vector retrieval. Secondly, late interaction requires computation that inflates the space footprint and can hinder enterprise adoption. Lastly, the current late interaction mechanism does not leverage approximate nearest neighbor indexing methods for large speed-ups in the retrieval process. This paper explores a pragmatic approach to make the vision retrieval process scalable and efficient without compromising performance quality. We propose a multi-step custom implementation utilizing widely adopted hybrid search (metadata & embedding) and a state-of-the-art late interaction re-ranker to retrieve the best matching pages. Finally, an MLLM is prompted as the reader to generate answers from the contextualized best matching pages. Through experiments, we observe that the proposed design is scalable (significant speed-up) and stable (without degrading performance quality), and hence can be used in production systems at enterprises.
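The re-ranking stage relies on a late-interaction (ColBERT-style) MaxSim score between query-token and page-patch embeddings, applied only to the candidates returned by the cheaper hybrid search. A numpy sketch with random placeholder embeddings:

```python
import numpy as np

def maxsim_score(query_vecs, page_vecs):
    """Late-interaction score: for each query token, take its best-matching page patch and sum."""
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    p = page_vecs / np.linalg.norm(page_vecs, axis=1, keepdims=True)
    sims = q @ p.T                      # (n_query_tokens, n_page_patches) cosine similarities
    return float(sims.max(axis=1).sum())

rng = np.random.default_rng(0)
query = rng.normal(size=(8, 128))                                   # 8 query-token embeddings
candidate_pages = [rng.normal(size=(196, 128)) for _ in range(5)]   # patch embeddings per candidate page
ranked = sorted(range(5), key=lambda i: maxsim_score(query, candidate_pages[i]), reverse=True)
print("re-ranked page order:", ranked)
```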
We propose a multi-step custom implementation utilizing widely adopted hybrid search (metadata & embedding) and a state-of-the-art late interaction re-ranker to retrieve the best matching pages.
https://arxiv.org/abs/2507.12378v1
https://arxiv.org/pdf/2507.12378v1.pdf
null
[ "Rachna Saxena", "Abhijeet Kumar", "Suresh Shanmugam" ]
[ "RAG", "Retrieval" ]
2025-07-16T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275", "description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.", "full_name": "Dropout", "introduced_year": 2000, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Dropout", "source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "source_url": "http://jmlr.org/papers/v15/srivastava14a.html" }, { "code_snippet_url": "https://github.com/google-research/bert", "description": "**BERT**, or Bidirectional Encoder Representations from Transformers, improves upon standard [Transformers](http://paperswithcode.com/method/transformer) by removing the unidirectionality constraint by using a *masked language model* (MLM) pre-training objective. The masked language model randomly masks some of the tokens from the input, and the objective is to predict the original vocabulary id of the masked word based only on its context. Unlike left-to-right language model pre-training, the MLM objective enables the representation to fuse the left and the right context, which allows us to pre-train a deep bidirectional Transformer. In addition to the masked language model, BERT uses a *next sentence prediction* task that jointly pre-trains text-pair representations. \r\n\r\nThere are two steps in BERT: *pre-training* and *fine-tuning*. During pre-training, the model is trained on unlabeled data over different pre-training tasks. For fine-tuning, the BERT model is first initialized with the pre-trained parameters, and all of the parameters are fine-tuned using labeled data from the downstream tasks. Each downstream task has separate fine-tuned models, even though they\r\nare initialized with the same pre-trained parameters.", "full_name": "BERT", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Language Models** are models for predicting the next word or character in a document. Below you can find a continuously updating list of language models.\r\n\r\n", "name": "Language Models", "parent": null }, "name": "BERT", "source_title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "source_url": "https://arxiv.org/abs/1810.04805v2" }, { "code_snippet_url": null, "description": "**BART** is a [denoising autoencoder](https://paperswithcode.com/method/denoising-autoencoder) for pretraining sequence-to-sequence models. It is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. 
It uses a standard [Transformer](https://paperswithcode.com/method/transformer)-based neural machine translation architecture. It uses a standard seq2seq/NMT architecture with a bidirectional encoder (like [BERT](https://paperswithcode.com/method/bert)) and a left-to-right decoder (like [GPT](https://paperswithcode.com/method/gpt)). This means the encoder's attention mask is fully visible, like BERT, and the decoder's attention mask is causal, like [GPT2](https://paperswithcode.com/method/gpt-2).", "full_name": "BART", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "BART", "source_title": "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension", "source_url": "https://arxiv.org/abs/1910.13461v1" }, { "code_snippet_url": "https://github.com/lorenzopapa5/SPEED", "description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.", "full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings", "introduced_year": 2000, "main_collection": null, "name": "SPEED", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "**Retriever-Augmented Generation**, or **RAG**, is a type of language generation model that combines pre-trained parametric and non-parametric memory for language generation. Specifically, the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. For query $x$, Maximum Inner Product Search (MIPS) is used to find the top-K documents $z\\_{i}$. 
For final prediction $y$, we treat $z$ as a latent variable and marginalize over seq2seq predictions given different documents.", "full_name": "RAG", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "RAG", "source_title": "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks", "source_url": "https://arxiv.org/abs/2005.11401v4" } ]
https://paperswithcode.com/paper/choosing-the-better-bandit-algorithm-under
2507.11891
null
null
Choosing the Better Bandit Algorithm under Data Sharing: When Do A/B Experiments Work?
We study A/B experiments that are designed to compare the performance of two recommendation algorithms. Prior work has shown that the standard difference-in-means estimator is biased in estimating the global treatment effect (GTE) due to a particular form of interference between experimental units. Specifically, units under the treatment and control algorithms contribute to a shared pool of data that subsequently train both algorithms, resulting in interference between the two groups. The bias arising from this type of data sharing is known as "symbiosis bias". In this paper, we highlight that, for decision-making purposes, the sign of the GTE often matters more than its precise magnitude when selecting the better algorithm. We formalize this insight under a multi-armed bandit framework and theoretically characterize when the sign of the expected GTE estimate under data sharing aligns with or contradicts the sign of the true GTE. Our analysis identifies the level of exploration versus exploitation as a key determinant of how symbiosis bias impacts algorithm selection.
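A toy simulation of the data-sharing setup (my own construction, not the paper's model): two epsilon-greedy policies with different exploration rates either keep separate reward histories or update their value estimates from a shared pool, and the per-step difference in mean reward is compared across the two regimes. Whether the signs agree depends on the exploration levels, which is the effect the paper characterizes.

```python
import numpy as np

rng = np.random.default_rng(1)
TRUE_MEANS = np.array([0.3, 0.5])          # two arms, arm 1 is better

def pull(arm):
    return rng.normal(TRUE_MEANS[arm], 1.0)

def run(eps_a, eps_b, share_data, steps=20000):
    """Run two epsilon-greedy policies; optionally pool their data for the value estimates."""
    counts = [np.zeros(2), np.zeros(2)]
    sums = [np.zeros(2), np.zeros(2)]
    rewards = [0.0, 0.0]
    for _ in range(steps):
        for g, eps in enumerate((eps_a, eps_b)):
            if share_data:
                c, s = counts[0] + counts[1], sums[0] + sums[1]   # shared pool (symbiosis)
            else:
                c, s = counts[g], sums[g]                         # isolated histories
            est = np.where(c > 0, s / np.maximum(c, 1), 0.0)
            arm = rng.integers(2) if rng.random() < eps else int(np.argmax(est))
            r = pull(arm)
            counts[g][arm] += 1
            sums[g][arm] += r
            rewards[g] += r
    return (rewards[0] - rewards[1]) / steps   # per-step difference in mean reward (A minus B)

gte_isolated = run(0.3, 0.05, share_data=False)   # proxy for the true GTE
gte_shared = run(0.3, 0.05, share_data=True)      # naive A/B estimate under data sharing
print(f"isolated estimate: {gte_isolated:+.3f}, shared-pool estimate: {gte_shared:+.3f}")
```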
null
https://arxiv.org/abs/2507.11891v1
https://arxiv.org/pdf/2507.11891v1.pdf
null
[ "Shuangning Li", "Chonghuan Wang", "Jingyan Wang" ]
[]
2025-07-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/integrated-switched-capacitor-array-and
2507.12163
null
null
Integrated Switched Capacitor Array and Synchronous Charge Extraction with Adaptive Hybrid MPPT for Piezoelectric Harvesters
Energy harvesting (EH) technologies will play a fundamental role in the development of the next generation of electronic systems, as well as in advancing sustainable infrastructure. One of the critical challenges in EH is utilizing ambient vibrations to harvest energy. Piezo Energy Harvesting (PEH), which uses ambient vibrations, is a promising self-powered technology; however, it suffers from several practical challenges, including narrow bandwidth, non-linearity, and impedance mismatch. This paper presents a novel, simulated PEH framework that addresses some of these challenges. The proposed model is designed to be adaptive and effective against the inherent non-linearity of PEH. This detailed model covers a non-linear piezo, Synchronous Electric Charge Extraction (SECE), hybrid Maximum Power Point Tracking (MPPT), and a Switched Capacitor Array (SCA). The SECE extracts the maximum charge accumulated on the piezo every time the piezo reaches a mechanical extremum. The Bouc-Wen model has been used to model the nonlinearity in the system. The hybrid MPPT exhibits significant improvement over conventional P&O, while the SCA-tuned system demonstrates resilience against variable-frequency input.
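One building block is the MPPT stage; the paper's hybrid MPPT improves on conventional Perturb & Observe (P&O), whose basic update rule is sketched below on a toy power-voltage curve. This is the textbook P&O step, not the paper's hybrid controller.

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.05):
    """One P&O iteration: move the operating voltage in the direction that increased power."""
    if p - p_prev == 0:
        return v                        # no change in power: hold the operating point
    if (p - p_prev > 0) == (v - v_prev > 0):
        return v + step                 # power rose in this direction: keep going
    return v - step                     # power fell: reverse the perturbation

def power(v):
    """Toy power-voltage curve with its maximum power point at v = 3.0 (placeholder)."""
    return -(v - 3.0) ** 2 + 9.0

v_prev, v = 1.0, 1.1
p_prev = power(v_prev)
for _ in range(60):
    p = power(v)
    v_next = perturb_and_observe(v, p, v_prev, p_prev)
    v_prev, p_prev, v = v, p, v_next
print(f"operating voltage after 60 steps ~ {v:.2f} V")  # oscillates around the MPP at 3.0 V
```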
null
https://arxiv.org/abs/2507.12163v1
https://arxiv.org/pdf/2507.12163v1.pdf
null
[ "Pramit Karmakar", "Siddharth B", "Chinmay Murlidhar Kadnur Rao" ]
[ "Point Tracking" ]
2025-07-16T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "**Electric** is an energy-based cloze model for representation learning over text. Like BERT, it is a conditional generative model of tokens given their contexts. However, Electric does not use masking or output a full distribution over tokens that could occur in a context. Instead, it assigns a scalar energy score to each input token indicating how likely it is given its context.\r\n\r\nSpecifically, like BERT, Electric also models $p\\_{\\text {data }}\\left(x\\_{t} \\mid \\mathbf{x}\\_{\\backslash t}\\right)$, but does not use masking or a softmax layer. Electric first maps the unmasked input $\\mathbf{x}=\\left[x\\_{1}, \\ldots, x\\_{n}\\right]$ into contextualized vector representations $\\mathbf{h}(\\mathbf{x})=\\left[\\mathbf{h}\\_{1}, \\ldots, \\mathbf{h}\\_{n}\\right]$ using a transformer network. The model assigns a given position $t$ an energy score\r\n\r\n$$\r\nE(\\mathbf{x})\\_{t}=\\mathbf{w}^{T} \\mathbf{h}(\\mathbf{x})\\_{t}\r\n$$\r\n\r\nusing a learned weight vector $w$. The energy function defines a distribution over the possible tokens at position $t$ as\r\n\r\n$$\r\np\\_{\\theta}\\left(x\\_{t} \\mid \\mathbf{x}_{\\backslash t}\\right)=\\exp \\left(-E(\\mathbf{x})\\_{t}\\right) / Z\\left(\\mathbf{x}\\_{\\backslash t}\\right) \r\n$$\r\n\r\n$$\r\n=\\frac{\\exp \\left(-E(\\mathbf{x})\\_{t}\\right)}{\\sum\\_{x^{\\prime} \\in \\mathcal{V}} \\exp \\left(-E\\left(\\operatorname{REPLACE}\\left(\\mathbf{x}, t, x^{\\prime}\\right)\\right)\\_{t}\\right)}\r\n$$\r\n\r\nwhere $\\text{REPLACE}\\left(\\mathbf{x}, t, x^{\\prime}\\right)$ denotes replacing the token at position $t$ with $x^{\\prime}$ and $\\mathcal{V}$ is the vocabulary, in practice usually word pieces. Unlike with BERT, which produces the probabilities for all possible tokens $x^{\\prime}$ using a softmax layer, a candidate $x^{\\prime}$ is passed in as input to the transformer. As a result, computing $p_{\\theta}$ is prohibitively expensive because the partition function $Z\\_{\\theta}\\left(\\mathbf{x}\\_{\\backslash t}\\right)$ requires running the transformer $|\\mathcal{V}|$ times; unlike most EBMs, the intractability of $Z\\_{\\theta}(\\mathbf{x} \\backslash t)$ is more due to the expensive scoring function rather than having a large sample space.", "full_name": "Electric", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Language Models** are models for predicting the next word or character in a document. Below you can find a continuously updating list of language models.\r\n\r\n", "name": "Language Models", "parent": null }, "name": "Electric", "source_title": "Pre-Training Transformers as Energy-Based Cloze Models", "source_url": "https://arxiv.org/abs/2012.08561v1" } ]
https://paperswithcode.com/paper/afpm-alignment-based-frame-patch-modeling-for
2507.11911
null
null
AFPM: Alignment-based Frame Patch Modeling for Cross-Dataset EEG Decoding
Electroencephalogram (EEG) decoding models for brain-computer interfaces (BCIs) struggle with cross-dataset learning and generalization due to channel layout inconsistencies, non-stationary signal distributions, and limited neurophysiological prior integration. To address these issues, we propose a plug-and-play Alignment-Based Frame-Patch Modeling (AFPM) framework, which has two main components: 1) Spatial Alignment, which selects task-relevant channels based on brain-region priors, aligns EEG distributions across domains, and remaps the selected channels to a unified layout; and, 2) Frame-Patch Encoding, which models multi-dataset signals into unified spatiotemporal patches for EEG decoding. Compared to 17 state-of-the-art approaches that need dataset-specific tuning, the proposed calibration-free AFPM achieves performance gains of up to 4.40% on motor imagery and 3.58% on event-related potential tasks. To our knowledge, this is the first calibration-free cross-dataset EEG decoding framework, substantially enhancing the practicalness of BCIs in real-world applications.
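One widely used way to align EEG trial distributions across datasets (possibly different from AFPM's exact Spatial Alignment procedure) is Euclidean alignment, which whitens each trial by the inverse square root of the session's mean covariance. A numpy sketch:

```python
import numpy as np

def euclidean_align(trials):
    """trials: array (n_trials, n_channels, n_samples). Whiten by the mean covariance R^{-1/2}."""
    covs = np.stack([x @ x.T / x.shape[1] for x in trials])   # per-trial spatial covariance
    r_bar = covs.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(r_bar)
    r_inv_sqrt = eigvec @ np.diag(1.0 / np.sqrt(eigval)) @ eigvec.T
    return np.stack([r_inv_sqrt @ x for x in trials])

rng = np.random.default_rng(0)
trials = rng.normal(size=(10, 8, 256))        # 10 trials, 8 channels, 256 samples (placeholder data)
aligned = euclidean_align(trials)
print(aligned.shape)                          # (10, 8, 256); mean covariance is now ~identity
```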
null
https://arxiv.org/abs/2507.11911v1
https://arxiv.org/pdf/2507.11911v1.pdf
null
[ "Xiaoqing Chen", "Siyang Li", "Dongrui Wu" ]
[ "EEG", "Eeg Decoding", "Electroencephalogram (EEG)", "Motor Imagery" ]
2025-07-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/physx-physical-grounded-3d-asset-generation
2507.12465
null
null
PhysX: Physical-Grounded 3D Asset Generation
3D modeling is moving from virtual to physical. Existing 3D generation primarily emphasizes geometries and textures while neglecting physical-grounded modeling. Consequently, despite the rapid development of 3D generative models, the synthesized 3D assets often overlook rich and important physical properties, hampering their real-world application in physical domains like simulation and embodied AI. As an initial attempt to address this challenge, we propose \textbf{PhysX}, an end-to-end paradigm for physical-grounded 3D asset generation. 1) To bridge the critical gap in physics-annotated 3D datasets, we present PhysXNet - the first physics-grounded 3D dataset systematically annotated across five foundational dimensions: absolute scale, material, affordance, kinematics, and function description. In particular, we devise a scalable human-in-the-loop annotation pipeline based on vision-language models, which enables efficient creation of physics-first assets from raw 3D assets. 2) Furthermore, we propose \textbf{PhysXGen}, a feed-forward framework for physics-grounded image-to-3D asset generation, injecting physical knowledge into the pre-trained 3D structural space. Specifically, PhysXGen employs a dual-branch architecture to explicitly model the latent correlations between 3D structures and physical properties, thereby producing 3D assets with plausible physical predictions while preserving the native geometry quality. Extensive experiments validate the superior performance and promising generalization capability of our framework. All the code, data, and models will be released to facilitate future research in generative physical AI.
3D modeling is moving from virtual to physical.
https://arxiv.org/abs/2507.12465v1
https://arxiv.org/pdf/2507.12465v1.pdf
null
[ "Ziang Cao", "Zhaoxi Chen", "Linag Pan", "Ziwei Liu" ]
[ "3D Generation", "Image to 3D" ]
2025-07-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/catching-bid-rigging-cartels-with-graph
2507.12369
null
null
Catching Bid-rigging Cartels with Graph Attention Neural Networks
We propose a novel application of graph attention networks (GATs), a type of graph neural network enhanced with attention mechanisms, to develop a deep learning algorithm for detecting collusive behavior, leveraging predictive features suggested in prior research. We test our approach on a large dataset covering 13 markets across seven countries. Our results show that predictive models based on GATs, trained on a subset of the markets, can be effectively transferred to other markets, achieving accuracy rates between 80% and 90%, depending on the hyperparameter settings. The best-performing configuration, applied to eight markets from Switzerland and the Japanese region of Okinawa, yields an average accuracy of 91% for cross-market prediction. When extended to 12 markets, the method maintains a strong performance with an average accuracy of 84%, surpassing traditional ensemble approaches in machine learning. These results suggest that GAT-based detection methods offer a promising tool for competition authorities to screen markets for potential cartel activity.
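For intuition, a minimal single-head graph attention forward pass in numpy, with nodes standing in for bids and edges for their co-occurrence in a tender; the paper's models are multi-head GATs trained on engineered screening features, so the dimensions and weights below are placeholders.

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gat_layer(H, adj, W, a):
    """Single-head GAT: H (n, d_in), adj (n, n) 0/1 with self-loops, W (d_in, d_out), a (2*d_out,)."""
    Z = H @ W
    out = np.zeros_like(Z)
    for i in range(Z.shape[0]):
        neigh = np.flatnonzero(adj[i])
        # Attention logit for each neighbour j: LeakyReLU(a^T [z_i || z_j]).
        scores = np.array([leaky_relu(a @ np.concatenate([Z[i], Z[j]])) for j in neigh])
        alpha = softmax(scores)
        out[i] = (alpha[:, None] * Z[neigh]).sum(axis=0)   # attention-weighted aggregation
    return out

rng = np.random.default_rng(0)
n, d_in, d_out = 5, 4, 3                 # 5 bids (nodes) with 4 screening features each
H = rng.normal(size=(n, d_in))
adj = np.ones((n, n))                    # fully connected toy tender, self-loops included
W, a = rng.normal(size=(d_in, d_out)), rng.normal(size=2 * d_out)
print(gat_layer(H, adj, W, a).shape)     # (5, 3)
```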
null
https://arxiv.org/abs/2507.12369v1
https://arxiv.org/pdf/2507.12369v1.pdf
null
[ "David Imhof", "Emanuel W Viklund", "Martin Huber" ]
[ "Graph Attention", "Graph Neural Network" ]
2025-07-16T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Graph Neural Network", "introduced_year": 2000, "main_collection": { "area": "Graphs", "description": "", "name": "Graph Representation Learning", "parent": null }, "name": "Graph Neural Network", "source_title": "Graph Neural Networks: A Review of Methods and Applications", "source_url": "https://arxiv.org/abs/1812.08434v6" } ]
https://paperswithcode.com/paper/ai-wizards-at-checkthat-2025-enhancing
2507.11764
null
null
AI Wizards at CheckThat! 2025: Enhancing Transformer-Based Embeddings with Sentiment for Subjectivity Detection in News Articles
This paper presents AI Wizards' participation in the CLEF 2025 CheckThat! Lab Task 1: Subjectivity Detection in News Articles, classifying sentences as subjective/objective in monolingual, multilingual, and zero-shot settings. Training/development datasets were provided for Arabic, German, English, Italian, and Bulgarian; final evaluation included additional unseen languages (e.g., Greek, Romanian, Polish, Ukrainian) to assess generalization. Our primary strategy enhanced transformer-based classifiers by integrating sentiment scores, derived from an auxiliary model, with sentence representations, aiming to improve upon standard fine-tuning. We explored this sentiment-augmented architecture with mDeBERTaV3-base, ModernBERT-base (English), and Llama3.2-1B. To address class imbalance, prevalent across languages, we employed decision threshold calibration optimized on the development set. Our experiments show sentiment feature integration significantly boosts performance, especially subjective F1 score. This framework led to high rankings, notably 1st for Greek (Macro F1 = 0.51).
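Two ingredients stand out: concatenating an auxiliary sentiment score with the sentence representation before classification, and calibrating the decision threshold on the development set to handle class imbalance. A sketch of the calibration step using scikit-learn's f1_score; the development-set probabilities below are placeholders rather than outputs of the actual models.

```python
import numpy as np
from sklearn.metrics import f1_score

def calibrate_threshold(dev_probs, dev_labels, grid=np.linspace(0.1, 0.9, 17)):
    """Pick the probability cutoff that maximizes subjective-class F1 on the dev set."""
    scores = [f1_score(dev_labels, (dev_probs >= t).astype(int)) for t in grid]
    return float(grid[int(np.argmax(scores))])

rng = np.random.default_rng(0)
dev_labels = rng.integers(0, 2, size=200)                 # 1 = subjective, 0 = objective
# Placeholder classifier probabilities, biased toward the "objective" class.
dev_probs = np.clip(0.3 * dev_labels + rng.normal(0.2, 0.15, size=200), 0, 1)
best_t = calibrate_threshold(dev_probs, dev_labels)
print(f"calibrated threshold: {best_t:.2f}")
```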
This paper presents AI Wizards' participation in the CLEF 2025 CheckThat!
https://arxiv.org/abs/2507.11764v1
https://arxiv.org/pdf/2507.11764v1.pdf
null
[ "Matteo Fasulo", "Luca Babboni", "Luca Tedeschini" ]
[ "Articles", "Sentence", "Sentiment Analysis", "Subjectivity Analysis" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "How do I file a dispute with Expedia?\r\nTo file a dispute with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056), or use their Help Center to submit your case with complete booking information. When addressing your issue, ask about special discount offers—Expedia may provide travel vouchers, promo codes, or exclusive deals to resolve the dispute and retain customer satisfaction.\r\nHow do I file a dispute with Expedia?\r\nTo file a dispute with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056), or use their Help Center to submit your case with complete booking information. When addressing your issue, ask about special discount offers—Expedia may provide travel vouchers, promo codes, or exclusive deals to resolve the dispute and retain customer satisfaction.\r\nHow do I file a dispute with Expedia?\r\nTo file a dispute with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056), or use their Help Center to submit your case with complete booking information. When addressing your issue, ask about special discount offers—Expedia may provide travel vouchers, promo codes, or exclusive deals to resolve the dispute and retain customer satisfaction.\r\n\r\nHow do I file a dispute with Expedia?\r\nTo file a dispute with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056), or use their Help Center to submit your case with complete booking information. When addressing your issue, ask about special discount offers—Expedia may provide travel vouchers, promo codes, or exclusive deals to resolve the dispute and retain customer satisfaction.", "full_name": "How do I file a dispute with Expedia?*DisputeFastService", "introduced_year": 2000, "main_collection": { "area": "General", "description": "If you're looking to get in touch with American Airlines fast, ☎️+1-801-(855)-(5905)or +1-804-853-9001✅ there are\r\nseveral efficient ways to reach their customer service team. The quickest method is to dial ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. American’s phone service ensures that you can speak with a live\r\nrepresentative promptly to resolve any issues or queries regarding your booking, reservation,\r\nor any changes, such as name corrections or ticket cancellations.", "name": "Attention Mechanisms", "parent": "Attention" }, "name": "How do I file a dispute with Expedia?*DisputeFastService", "source_title": "DeBERTa: Decoding-enhanced BERT with Disentangled Attention", "source_url": "https://arxiv.org/abs/2006.03654v6" }, { "code_snippet_url": "", "description": "**DeBERTa** is a [Transformer](https://paperswithcode.com/methods/category/transformers)-based neural language model that aims to improve the [BERT](https://paperswithcode.com/method/bert) and [RoBERTa](https://paperswithcode.com/method/roberta) models with two techniques: a [disentangled attention mechanism](https://paperswithcode.com/method/disentangled-attention-mechanism) and an enhanced mask decoder. The disentangled attention mechanism is where each word is represented unchanged using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangle matrices on their contents and relative positions. The enhanced mask decoder is used to replace the output [softmax](https://paperswithcode.com/method/softmax) layer to predict the masked tokens for model pre-training. 
In addition, a new virtual adversarial training method is used for fine-tuning to improve model’s generalization on downstream tasks.", "full_name": "DeBERTa", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "DeBERTa", "source_title": "DeBERTa: Decoding-enhanced BERT with Disentangled Attention", "source_url": "https://arxiv.org/abs/2006.03654v6" } ]
https://paperswithcode.com/paper/real-time-inverse-kinematics-for-generating
2507.00792
null
null
Real-Time Inverse Kinematics for Generating Multi-Constrained Movements of Virtual Human Characters
Generating accurate and realistic virtual human movements in real-time is of high importance for a variety of applications in computer graphics, interactive virtual environments, robotics, and biomechanics. This paper introduces a novel real-time inverse kinematics (IK) solver specifically designed for realistic human-like movement generation. Leveraging the automatic differentiation and just-in-time compilation of TensorFlow, the proposed solver efficiently handles complex articulated human skeletons with high degrees of freedom. By treating forward and inverse kinematics as differentiable operations, our method effectively addresses common challenges such as error accumulation and complicated joint limits in multi-constrained problems, which are critical for realistic human motion modeling. We demonstrate the solver's effectiveness on the SMPLX human skeleton model, evaluating its performance against widely used iterative-based IK algorithms, like Cyclic Coordinate Descent (CCD), FABRIK, and the nonlinear optimization algorithm IPOPT. Our experiments cover both simple end-effector tasks and sophisticated, multi-constrained problems with realistic joint limits. Results indicate that our IK solver achieves real-time performance, exhibiting rapid convergence, minimal computational overhead per iteration, and improved success rates compared to existing methods. The project code is available at https://github.com/hvoss-techfak/TF-JAX-IK
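The paper's solver uses TensorFlow autodiff over full SMPLX skeletons; the same idea on a much smaller scale is gradient descent on the squared end-effector error of a 2-link planar arm, with joint limits enforced by clamping. The link lengths, limits, and step size below are arbitrary illustration values, not the paper's configuration.

```python
import numpy as np

L1, L2 = 1.0, 0.8                                   # link lengths
LIMITS = np.array([[-np.pi, np.pi], [0.0, 2.6]])    # assumed joint limits (rad)

def forward(theta):
    t1, t2 = theta
    return np.array([L1 * np.cos(t1) + L2 * np.cos(t1 + t2),
                     L1 * np.sin(t1) + L2 * np.sin(t1 + t2)])

def jacobian(theta):
    t1, t2 = theta
    return np.array([[-L1 * np.sin(t1) - L2 * np.sin(t1 + t2), -L2 * np.sin(t1 + t2)],
                     [ L1 * np.cos(t1) + L2 * np.cos(t1 + t2),  L2 * np.cos(t1 + t2)]])

def solve_ik(target, theta=np.array([0.3, 0.3]), lr=0.2, iters=300):
    """Gradient descent on 0.5 * ||forward(theta) - target||^2 with clamped joint limits."""
    for _ in range(iters):
        err = forward(theta) - target
        grad = jacobian(theta).T @ err
        theta = np.clip(theta - lr * grad, LIMITS[:, 0], LIMITS[:, 1])
    return theta

target = np.array([1.2, 0.9])
theta = solve_ik(target)
print(theta, forward(theta))                        # end effector ends up close to the target
```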
Generating accurate and realistic virtual human movements in real-time is of high importance for a variety of applications in computer graphics, interactive virtual environments, robotics, and biomechanics.
https://arxiv.org/abs/2507.00792v1
https://arxiv.org/pdf/2507.00792v1.pdf
null
[ "Hendric Voss", "Stefan Kopp" ]
[]
2025-07-01T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/reasoning-or-memorization-unreliable-results
2507.10532
null
null
Reasoning or Memorization? Unreliable Results of Reinforcement Learning Due to Data Contamination
The reasoning capabilities of large language models (LLMs) have been a longstanding focus of research. Recent works have further enhanced these capabilities using reinforcement learning (RL), with many new methods claiming significant improvements with minimal or no external supervision. Surprisingly, some studies even suggest that random or incorrect reward signals can enhance reasoning performance. However, these breakthroughs are mostly reported on the Qwen2.5 model family and evaluated on well-known benchmarks such as MATH-500, AMC, and AIME, while failing to achieve similar gains on other models like Llama, which warrants further investigation. Our analysis shows that although Qwen2.5 achieves strong mathematical reasoning performance, its pretraining on large-scale web corpora makes it vulnerable to data contamination in popular benchmarks. As a result, results derived from these benchmarks may be unreliable. To address this, we introduce a generator that produces fully synthetic arithmetic problems of arbitrary length and difficulty, yielding a clean dataset we call RandomCalculation. Using these leakage-free datasets, we show that only accurate reward signals consistently improve performance, while noisy or incorrect signals do not. We advocate for evaluating RL methods on uncontaminated benchmarks and across diverse model families to ensure trustworthy conclusions.
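A generator in the spirit of the RandomCalculation idea: fully synthetic arithmetic problems whose length and operand range act as difficulty knobs. The knobs and output format are my own choices, not necessarily those of the released dataset.

```python
import random

def random_calculation(n_operands=4, max_value=99, seed=None):
    """Generate one synthetic arithmetic problem and its exact answer."""
    rng = random.Random(seed)
    expr = str(rng.randint(1, max_value))
    for _ in range(n_operands - 1):
        op = rng.choice(["+", "-", "*"])
        expr += f" {op} {rng.randint(1, max_value)}"
    # The expression contains only integers and + - *, so eval is safe here.
    return expr, eval(expr)

problem, answer = random_calculation(n_operands=5, max_value=50, seed=7)
print(problem, "=", answer)
```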
The reasoning capabilities of large language models (LLMs) have been a longstanding focus of research.
https://arxiv.org/abs/2507.10532v1
https://arxiv.org/pdf/2507.10532v1.pdf
null
[ "Mingqi Wu", "Zhihao Zhang", "Qiaole Dong", "Zhiheng Xi", "Jun Zhao", "Senjie Jin", "Xiaoran Fan", "Yuhao Zhou", "Yanwei Fu", "Qin Liu", "Songyang Zhang", "Qi Zhang" ]
[ "Math", "Mathematical Reasoning", "Memorization", "Reinforcement Learning (RL)" ]
2025-07-14T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/aligning-information-capacity-between-vision
2503.14953
null
null
Aligning Information Capacity Between Vision and Language via Dense-to-Sparse Feature Distillation for Image-Text Matching
Enabling Visual Semantic Models to effectively handle multi-view description matching has been a longstanding challenge. Existing methods typically learn a set of embeddings to find the optimal match for each view's text and compute similarity. However, the visual and text embeddings learned through these approaches have limited information capacity and are prone to interference from locally similar negative samples. To address this issue, we argue that the information capacity of embeddings is crucial and propose Dense-to-Sparse Feature Distilled Visual Semantic Embedding (D2S-VSE), which enhances the information capacity of sparse text by leveraging dense text distillation. Specifically, D2S-VSE is a two-stage framework. In the pre-training stage, we align images with dense text to enhance the information capacity of visual semantic embeddings. In the fine-tuning stage, we optimize two tasks simultaneously, distilling dense text embeddings to sparse text embeddings while aligning images and sparse texts, enhancing the information capacity of sparse text embeddings. Our proposed D2S-VSE model is extensively evaluated on the large-scale MS-COCO and Flickr30K datasets, demonstrating its superiority over recent state-of-the-art methods.
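A PyTorch-style sketch of one plausible way to combine the two fine-tuning objectives: a hard-negative triplet loss aligning images with sparse-text embeddings (as in VSE++), plus an MSE distillation term pulling sparse-text embeddings toward the dense-text teacher. The exact losses and weighting used by D2S-VSE may differ; the weight and margin here are assumptions.

```python
import torch
import torch.nn.functional as F

def d2s_style_losses(img_emb, sparse_txt_emb, dense_txt_emb, margin=0.2, distill_weight=1.0):
    """Alignment (triplet hinge with in-batch hard negatives) + dense-to-sparse distillation (MSE)."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(sparse_txt_emb, dim=-1)
    sims = img @ txt.t()                              # (B, B) cosine similarities
    pos = sims.diag().unsqueeze(1)
    cost_img = (margin + sims - pos).clamp(min=0)     # image -> negative texts
    cost_txt = (margin + sims - pos.t()).clamp(min=0) # text -> negative images
    mask = torch.eye(sims.size(0), dtype=torch.bool)
    align = cost_img.masked_fill(mask, 0).max(1).values.mean() + \
            cost_txt.masked_fill(mask, 0).max(0).values.mean()
    distill = F.mse_loss(sparse_txt_emb, dense_txt_emb.detach())  # teacher is not updated
    return align + distill_weight * distill

B, D = 8, 256
loss = d2s_style_losses(torch.randn(B, D), torch.randn(B, D, requires_grad=True), torch.randn(B, D))
print(loss.item())
```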
Enabling Visual Semantic Models to effectively handle multi-view description matching has been a longstanding challenge.
https://arxiv.org/abs/2503.14953v1
https://arxiv.org/pdf/2503.14953v1.pdf
null
[ "Yang Liu", "Wentao Feng", "Zhuoyao Liu", "Shudong Huang", "Jiancheng Lv" ]
[ "Image-text matching", "Text Matching" ]
2025-03-19T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "In the ALIGN method, visual and language representations are jointly trained from noisy image alt-text data. The image and text encoders are learned via contrastive loss (formulated as normalized softmax) that pushes the embeddings of the matched image-text pair together and pushing those of non-matched image-text pair apart. The model learns to align visual and language representations of the image and text pairs using the contrastive loss. The representations can be used for vision-only or vision-language task transfer. Without any fine-tuning, ALIGN powers zero-shot visual classification and cross-modal search including image-to-text search, text-to image search and even search with joint image+text queries.", "full_name": "ALIGN", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "Involves models that adapt pre-training to the field of Vision-and-Language (V-L) learning and improve the performance on downstream tasks like visual question answering and visual captioning.\r\n\r\nAccording to [Du et al. (2022)](https://arxiv.org/pdf/2202.10936.pdf), information coming from the different modalities can be encoded in three ways: fusion encoder, dual encoder, and a combination of both. \r\n\r\nReferences:\r\n\r\n- [A Survey of Vision-Language Pre-Trained Models](https://arxiv.org/pdf/2202.10936.pdf)\r\n- [Vision Language models: towards multi-modal deep learning](https://theaisummer.com/vision-language-models/)", "name": "Vision and Language Pre-Trained Models", "parent": null }, "name": "ALIGN", "source_title": "Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision", "source_url": "https://arxiv.org/abs/2102.05918v2" }, { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/a-hybrid-machine-learning-framework-for
2507.08832
null
null
A Hybrid Machine Learning Framework for Optimizing Crop Selection via Agronomic and Economic Forecasting
Farmers in developing regions like Karnataka, India, face a dual challenge: navigating extreme market and climate volatility while being excluded from the digital revolution due to literacy barriers. This paper presents a novel decision support system that addresses both challenges through a unique synthesis of machine learning and human-computer interaction. We propose a hybrid recommendation engine that integrates two predictive models: a Random Forest classifier to assess agronomic suitability based on soil, climate, and real-time weather data, and a Long Short-Term Memory (LSTM) network to forecast market prices for agronomically viable crops. This integrated approach shifts the paradigm from "what can grow?" to "what is most profitable to grow?", providing a significant advantage in mitigating economic risk. The system is delivered through an end-to-end, voice-based interface in the local Kannada language, leveraging fine-tuned speech recognition and high-fidelity speech synthesis models to ensure accessibility for low-literacy users. Our results show that the Random Forest model achieves 98.5% accuracy in suitability prediction, while the LSTM model forecasts harvest-time prices with a low margin of error. By providing data-driven, economically optimized recommendations through an inclusive interface, this work offers a scalable and impactful solution to enhance the financial resilience of marginalized farming communities.
null
https://arxiv.org/abs/2507.08832v1
https://arxiv.org/pdf/2507.08832v1.pdf
null
[ "Niranjan Mallikarjun Sindhur", "Pavithra C", "Nivya Muchikel" ]
[ "Hybrid Machine Learning", "speech-recognition", "Speech Recognition", "Speech Synthesis" ]
2025-07-06T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "An **LSTM** is a type of [recurrent neural network](https://paperswithcode.com/methods/category/recurrent-neural-networks) that addresses the vanishing gradient problem in vanilla RNNs through additional cells, input and output gates. Intuitively, vanishing gradients are solved through additional *additive* components, and forget gate activations, that allow the gradients to flow through the network without vanishing as quickly.\r\n\r\n(Image Source [here](https://medium.com/datadriveninvestor/how-do-lstm-networks-solve-the-problem-of-vanishing-gradients-a6784971a577))\r\n\r\n(Introduced by Hochreiter and Schmidhuber)", "full_name": "Long Short-Term Memory", "introduced_year": 1997, "main_collection": { "area": "Sequential", "description": "", "name": "Recurrent Neural Networks", "parent": null }, "name": "LSTM", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/recurrent-u-net-based-graph-neural-network
2507.11547
null
null
Recurrent U-Net-Based Graph Neural Network (RUGNN) for Accurate Deformation Predictions in Sheet Material Forming
In recent years, various artificial intelligence-based surrogate models have been proposed to provide rapid manufacturability predictions of material forming processes. However, traditional AI-based surrogate models, typically built with scalar or image-based neural networks, are limited in their ability to capture complex 3D spatial relationships and to operate in a permutation-invariant manner. To overcome these issues, emerging graph-based surrogate models are developed using graph neural networks. This study developed a new graph neural network surrogate model named Recurrent U-Net-based Graph Neural Network (RUGNN). The RUGNN model can achieve accurate predictions of sheet material deformation fields across multiple forming timesteps. The RUGNN model incorporates Gated Recurrent Units (GRUs) to model temporal dynamics and a U-Net-inspired graph-based downsample/upsample mechanism to handle spatial long-range dependencies. A novel 'node-to-surface' contact representation method was proposed, offering significant improvements in computational efficiency for large-scale contact interactions. The RUGNN model was validated using a cold forming case study and a more complex hot forming case study using aluminium alloys. Results demonstrate that the RUGNN model provides accurate deformation predictions closely matching ground truth FE simulations and outperforming several baseline GNN architectures. Model tuning was also performed to identify suitable hyperparameters, training strategies, and input feature representations. These results demonstrate that RUGNN is a reliable approach to support sheet material forming design by enabling accurate manufacturability predictions.
null
https://arxiv.org/abs/2507.11547v1
https://arxiv.org/pdf/2507.11547v1.pdf
null
[ "Yingxue Zhao", "Qianyi Chen", "Haoran Li", "Haosu Zhou", "Hamid Reza Attar", "Tobias Pfaff", "Tailin Wu", "Nan Li" ]
[ "Computational Efficiency", "Graph Neural Network" ]
2025-07-10T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Graph Neural Network", "introduced_year": 2000, "main_collection": { "area": "Graphs", "description": "", "name": "Graph Representation Learning", "parent": null }, "name": "Graph Neural Network", "source_title": "Graph Neural Networks: A Review of Methods and Applications", "source_url": "https://arxiv.org/abs/1812.08434v6" } ]
https://paperswithcode.com/paper/the-ethical-implications-of-ai-in-creative
2507.05549
null
null
The Ethical Implications of AI in Creative Industries: A Focus on AI-Generated Art
As Artificial Intelligence (AI) continues to grow daily, more exciting (and somewhat controversial) technology emerges every other day. As we see the advancements in AI, we see more and more people becoming skeptical of it. This paper explores the complications and confusion around the ethics of generative AI art. We delve deep into the ethical side of AI, specifically generative art. We step back from the excitement and observe the impossible conundrums that this impressive technology produces, covering environmental consequences, celebrity representation, intellectual property, deep fakes, and artist displacement. Our research found that generative AI art is responsible for increased carbon emissions, spreading misinformation, copyright infringement, unlawful depiction, and job displacement. In light of this, we propose multiple possible solutions for these problems. We address each situation's history, cause, and consequences and offer different viewpoints. At the root of it all, though, the central theme is that generative AI Art needs to be correctly legislated and regulated.
null
https://arxiv.org/abs/2507.05549v1
https://arxiv.org/pdf/2507.05549v1.pdf
null
[ "Prerana Khatiwada", "Joshua Washington", "Tyler Walsh", "Ahmed Saif Hamed", "Lokesh Bhatta" ]
[ "Ethics", "Misinformation" ]
2025-07-08T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/weisfeiler-leman-in-the-bamboo-novel-amr-1
null
null
null
Weisfeiler-Leman in the Bamboo: Novel AMR Graph Metrics and a Benchmark for AMR Graph Similarity
Several metrics have been proposed for assessing the similarity of (abstract) meaning representations (AMRs), but little is known about how they relate to human similarity ratings. Moreover, the current metrics have complementary strengths and weaknesses: Some emphasize speed, while others make the alignment of graph structures explicit, at the price of a costly alignment step. In this work we propose new Weisfeiler-Leman AMR similarity metrics that unify the strengths of previous metrics, while mitigating their weaknesses. Specifically, our new metrics are able to match contextualized substructures and induce n:m alignments between their nodes. Furthermore, we introduce a Benchmark for AMR Metrics based on Overt Objectives (Bamboo), the first benchmark to support empirical assessment of graph-based MR similarity metrics. Bamboo maximizes the interpretability of results by defining multiple overt objectives that range from sentence similarity objectives to stress tests that probe a metric’s robustness against meaning-altering and meaning-preserving graph transformations. We show the benefits of Bamboo by profiling previous metrics and our own metrics. Results indicate that our novel metrics may serve as a strong baseline for future work.
In this work we propose new Weisfeiler-Leman AMR similarity metrics that unify the strengths of previous metrics, while mitigating their weaknesses.
https://aclanthology.org/2021.tacl-1.85/
https://aclanthology.org/2021.tacl-1.85.pdf
Transactions of the Association for Computational Linguistics 2022 1
[ "Juri Opitz", "Angel Daza", "and Anette Frank." ]
[ "AMR Graph Similarity", "Graph Similarity", "Sentence", "Sentence Similarity" ]
2022-01-04T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/anchor-and-broadcast-an-efficient-concept
null
null
null
Anchor and Broadcast: An Efficient Concept Alignment Approach for Evaluation of Semantic Graphs
In this paper, we present AnCast, an intuitive and efficient tool for evaluating graph-based meaning representations (MR). AnCast implements evaluation metrics that are well understood in the NLP community, and they include concept F1, unlabeled relation F1, labeled relation F1, and weighted relation F1. The efficiency of the tool comes from a novel anchor broadcast alignment algorithm that is not subject to the trappings of local maxima. We show through experimental results that the AnCast score is highly correlated with the widely used Smatch score, but its computation takes only about 40% the time.
In this paper, we present AnCast, an intuitive and efficient tool for evaluating graph-based meaning representations (MR).
https://aclanthology.org/2024.lrec-main.94/
https://aclanthology.org/2024.lrec-main.94.pdf
Joint International Conference on Computational Linguistics, Language Resources and Evaluation 2024 5
[ "Haibo Sun", "Nianwen Xue" ]
[ "AMR Graph Similarity", "Concept Alignment", "Relation" ]
2024-05-20T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/precision-spatio-temporal-feature-fusion-for
2507.11523
null
null
Precision Spatio-Temporal Feature Fusion for Robust Remote Sensing Change Detection
Remote sensing change detection is vital for monitoring environmental and urban transformations but faces challenges like manual feature extraction and sensitivity to noise. Traditional methods and early deep learning models, such as convolutional neural networks (CNNs), struggle to capture long-range dependencies and global context essential for accurate change detection in complex scenes. While Transformer-based models mitigate these issues, their computational complexity limits their applicability in high-resolution remote sensing. Building upon the ChangeMamba architecture, which leverages state space models for efficient global context modeling, this paper proposes precision fusion blocks to capture channel-wise temporal variations and per-pixel differences for fine-grained change detection. An enhanced decoder pipeline, incorporating lightweight channel reduction mechanisms, preserves local details with minimal computational cost. Additionally, an optimized loss function combining Cross Entropy, Dice and Lovasz objectives addresses class imbalance and boosts Intersection-over-Union (IoU). Evaluations on SYSU-CD, LEVIR-CD+, and WHU-CD datasets demonstrate superior precision, recall, F1 score, IoU, and overall accuracy compared to state-of-the-art methods, highlighting the approach's robustness for remote sensing change detection. For complete transparency, the codes and pretrained models are accessible at https://github.com/Buddhi19/MambaCD.git
Remote sensing change detection is vital for monitoring environmental and urban transformations but faces challenges like manual feature extraction and sensitivity to noise.
https://arxiv.org/abs/2507.11523v1
https://arxiv.org/pdf/2507.11523v1.pdf
null
[ "Buddhi Wijenayake", "Athulya Ratnayake", "Praveen Sumanasekara", "Nichula Wasalathilaka", "Mathivathanan Piratheepan", "Roshan Godaliyadda", "Mervyn Ekanayake", "Vijitha Herath" ]
[ "Change Detection", "State Space Models" ]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/wildfx-a-daw-powered-pipeline-for-in-the-wild
2507.10534
null
null
WildFX: A DAW-Powered Pipeline for In-the-Wild Audio FX Graph Modeling
Despite rapid progress in end-to-end AI music generation, AI-driven modeling of professional Digital Signal Processing (DSP) workflows remains challenging. In particular, while there is growing interest in neural black-box modeling of audio effect graphs (e.g. reverb, compression, equalization), AI-based approaches struggle to replicate the nuanced signal flow and parameter interactions used in professional workflows. Existing differentiable plugin approaches often diverge from real-world tools, exhibiting inferior performance relative to simplified neural controllers under equivalent computational constraints. We introduce WildFX, a pipeline containerized with Docker for generating multi-track audio mixing datasets with rich effect graphs, powered by a professional Digital Audio Workstation (DAW) backend. WildFX supports seamless integration of cross-platform commercial plugins or any plugins in the wild, in VST/VST3/LV2/CLAP formats, enabling structural complexity (e.g., sidechains, crossovers) and achieving efficient parallelized processing. A minimalist metadata interface simplifies project/plugin configuration. Experiments demonstrate the pipeline's validity through blind estimation of mixing graphs, plugin/gain parameters, and its ability to bridge AI research with practical DSP demands. The code is available on: https://github.com/IsaacYQH/WildFX.
Despite rapid progress in end-to-end AI music generation, AI-driven modeling of professional Digital Signal Processing (DSP) workflows remains challenging.
https://arxiv.org/abs/2507.10534v1
https://arxiv.org/pdf/2507.10534v1.pdf
null
[ "Qihui Yang", "Taylor Berg-Kirkpatrick", "Julian McAuley", "Zachary Novack" ]
[ "Music Generation" ]
2025-07-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/simplifications-are-absolutists-how
2507.11981
null
null
Simplifications are Absolutists: How Simplified Language Reduces Word Sense Awareness in LLM-Generated Definitions
Large Language Models (LLMs) can provide accurate word definitions and explanations for any context. However, the scope of the definition changes for different target groups, like children or language learners. This is especially relevant for homonyms, words with multiple meanings, where oversimplification might risk information loss by omitting key senses, potentially misleading users who trust LLM outputs. We investigate how simplification impacts homonym definition quality across three target groups: Normal, Simple, and ELI5. Using two novel evaluation datasets spanning multiple languages, we test DeepSeek v3, Llama 4 Maverick, Qwen3-30B A3B, GPT-4o mini, and Llama 3.1 8B via LLM-as-Judge and human annotations. Our results show that simplification drastically degrades definition completeness by neglecting polysemy, increasing the risk of misunderstanding. Fine-tuning Llama 3.1 8B with Direct Preference Optimization substantially improves homonym response quality across all prompt types. These findings highlight the need to balance simplicity and completeness in educational NLP to ensure reliable, context-aware definitions for all learners.
null
https://arxiv.org/abs/2507.11981v1
https://arxiv.org/pdf/2507.11981v1.pdf
null
[ "Lukas Ellinger", "Miriam Anschütz", "Georg Groh" ]
[]
2025-07-16T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "**LLaMA** is a collection of foundation language models ranging from 7B to 65B parameters. It is based on the transformer architecture with various improvements that were subsequently proposed. The main difference with the original architecture are listed below.\r\n\r\n- RMSNorm normalizing function is used to improve the training stability, by normalizing the input of each transformer sub-layer, instead of normalizing the output.\r\n- The ReLU non-linearity is replaced by the SwiGLU activation function to improve performance.\r\n- Absolute positional embeddings are removed and instead rotary positional embeddings (RoPE) are added at each layer of the network.", "full_name": "LLaMA", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Language Models** are models for predicting the next word or character in a document. Below you can find a continuously updating list of language models.\r\n\r\n", "name": "Language Models", "parent": null }, "name": "LLaMA", "source_title": "LLaMA: Open and Efficient Foundation Language Models", "source_url": "https://arxiv.org/abs/2302.13971v1" } ]
https://paperswithcode.com/paper/spatialtrackerv2-3d-point-tracking-made-easy-1
2507.12462
null
null
SpatialTrackerV2: 3D Point Tracking Made Easy
We present SpatialTrackerV2, a feed-forward 3D point tracking method for monocular videos. Going beyond modular pipelines built on off-the-shelf components for 3D tracking, our approach unifies the intrinsic connections between point tracking, monocular depth, and camera pose estimation into a high-performing and feedforward 3D point tracker. It decomposes world-space 3D motion into scene geometry, camera ego-motion, and pixel-wise object motion, with a fully differentiable and end-to-end architecture, allowing scalable training across a wide range of datasets, including synthetic sequences, posed RGB-D videos, and unlabeled in-the-wild footage. By learning geometry and motion jointly from such heterogeneous data, SpatialTrackerV2 outperforms existing 3D tracking methods by 30%, and matches the accuracy of leading dynamic 3D reconstruction approaches while running 50$\times$ faster.
null
https://arxiv.org/abs/2507.12462v1
https://arxiv.org/pdf/2507.12462v1.pdf
null
[ "Yuxi Xiao", "Jianyuan Wang", "Nan Xue", "Nikita Karaev", "Yuri Makarov", "Bingyi Kang", "Xing Zhu", "Hujun Bao", "Yujun Shen", "Xiaowei Zhou" ]
[ "3D Reconstruction", "Camera Pose Estimation", "Point Tracking", "Pose Estimation" ]
2025-07-16T00:00:00
https://arxiv.org/abs/2507.12462
https://arxiv.org/pdf/2507.12462.pdf
spatialtrackerv2-3d-point-tracking-made-easy
null
[]
https://paperswithcode.com/paper/tactile-tiny-active-learning-for-wearable
2505.01160
null
null
TActiLE: Tiny Active LEarning for wearable devices
Tiny Machine Learning (TinyML) algorithms have seen extensive use in recent years, enabling wearable devices to be not only connected but also genuinely intelligent by running machine learning (ML) computations directly on-device. Among such devices, smart glasses have particularly benefited from TinyML advancements. TinyML facilitates the on-device execution of the inference phase of ML algorithms on embedded and wearable devices, and more recently, it has expanded into On-device Learning (ODL), which allows both inference and learning phases to occur directly on the device. The application of ODL techniques to wearable devices is particularly compelling, as it enables the development of more personalized models that adapt based on the data of the user. However, one of the major challenges of ODL algorithms is the scarcity of labeled data collected on-device. In smart wearable contexts, requiring users to manually label large amounts of data is often impractical and could lead to user disengagement with the technology. To address this issue, this paper explores the application of Active Learning (AL) techniques, i.e., techniques that aim to minimize the labeling effort by actively selecting from a large quantity of unlabeled data only a small subset to be labeled and added to the training set of the algorithm. In particular, we propose TActiLE, a novel AL algorithm that selects, from the stream of on-device sensor data, the samples that would help the ML algorithm improve the most once coupled with labels provided by the user. TActiLE is the first Active Learning technique specifically designed for the TinyML context. We evaluate its effectiveness and efficiency through experiments on multiple image classification datasets. The results demonstrate its suitability for tiny and wearable devices.
null
https://arxiv.org/abs/2505.01160v1
https://arxiv.org/pdf/2505.01160v1.pdf
null
[ "Massimo Pavan", "Claudio Galimberti", "Manuel Roveri" ]
[ "Active Learning", "image-classification", "Image Classification" ]
2025-05-02T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/can-llms-revolutionize-the-design-of
2504.09685
null
null
Can LLMs Revolutionize the Design of Explainable and Efficient TinyML Models?
This paper introduces a novel framework for designing efficient neural network architectures specifically tailored to tiny machine learning (TinyML) platforms. By leveraging large language models (LLMs) for neural architecture search (NAS), a vision transformer (ViT)-based knowledge distillation (KD) strategy, and an explainability module, the approach strikes an optimal balance between accuracy, computational efficiency, and memory usage. The LLM-guided search explores a hierarchical search space, refining candidate architectures through Pareto optimization based on accuracy, multiply-accumulate operations (MACs), and memory metrics. The best-performing architectures are further fine-tuned using logits-based KD with a pre-trained ViT-B/16 model, which enhances generalization without increasing model size. Evaluated on the CIFAR-100 dataset and deployed on an STM32H7 microcontroller (MCU), the three proposed models, LMaNet-Elite, LMaNet-Core, and QwNet-Core, achieve accuracy scores of 74.50%, 74.20% and 73.00%, respectively. All three models surpass current state-of-the-art (SOTA) models, such as MCUNet-in3/in4 (69.62% / 72.86%) and XiNet (72.27%), while maintaining a low computational cost of less than 100 million MACs and adhering to the stringent 320 KB static random-access memory (SRAM) constraint. These results demonstrate the efficiency and performance of the proposed framework for TinyML platforms, underscoring the potential of combining LLM-driven search, Pareto optimization, KD, and explainability to develop accurate, efficient, and interpretable models. This approach opens new possibilities in NAS, enabling the design of efficient architectures specifically suited for TinyML.
null
https://arxiv.org/abs/2504.09685v1
https://arxiv.org/pdf/2504.09685v1.pdf
null
[ "Christophe El Zeinaty", "Wassim Hamidouche", "Glenn Herrou", "Daniel Menard", "Merouane Debbah" ]
[ "Computational Efficiency", "Efficient Neural Network", "Knowledge Distillation", "Neural Architecture Search" ]
2025-04-13T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/on-sensor-convolutional-neural-networks-with
2503.16939
null
null
On-Sensor Convolutional Neural Networks with Early-Exits
Tiny Machine Learning (TinyML) is a novel research field aiming at integrating Machine Learning (ML) within embedded devices with limited memory, computation, and energy. Recently, a new branch of TinyML has emerged, focusing on integrating ML directly into the sensors to further reduce the power consumption of embedded devices. Interestingly, despite their state-of-the-art performance in many tasks, none of the current solutions in the literature aims to optimize the implementation of Convolutional Neural Networks (CNNs) operating directly into sensors. In this paper, we introduce for the first time in the literature the optimized design and implementation of Depth-First CNNs operating on the Intelligent Sensor Processing Unit (ISPU) within an Inertial Measurement Unit (IMU) by STMicroelectronics. Our approach partitions the CNN between the ISPU and the microcontroller (MCU) and employs an Early-Exit mechanism to stop the computations on the IMU when enough confidence about the results is achieved, hence significantly reducing power consumption. When using a NUCLEO-F411RE board, this solution achieved an average current consumption of 4.8 mA, marking an 11% reduction compared to the regular inference pipeline on the MCU, while having equal accuracy.
null
https://arxiv.org/abs/2503.16939v1
https://arxiv.org/pdf/2503.16939v1.pdf
null
[ "Hazem Hesham Yousef Shalby", "Arianna De Vecchi", "Alice Scandelli", "Pietro Bartoli", "Diana Trojaniello", "Manuel Roveri", "Federica Villa" ]
[]
2025-03-21T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/ethereal-energy-efficient-and-high-throughput
2502.05640
null
null
ETHEREAL: Energy-efficient and High-throughput Inference using Compressed Tsetlin Machine
The Tsetlin Machine (TM) is a novel alternative to deep neural networks (DNNs). Unlike DNNs, which rely on multi-path arithmetic operations, a TM learns propositional logic patterns from data literals using Tsetlin automata. This fundamental shift from arithmetic to logic underpinning makes TM suitable for empowering new applications with low-cost implementations. In TM, literals are often included by both positive and negative clauses within the same class, canceling out their impact on individual class definitions. This property can be exploited to develop compressed TM models, enabling energy-efficient and high-throughput inferences for machine learning (ML) applications. We introduce a training approach that incorporates excluded automata states to sparsify TM logic patterns in both positive and negative clauses. This exclusion is iterative, ensuring that highly class-correlated (and therefore significant) literals are retained in the compressed inference model, ETHEREAL, to maintain strong classification accuracy. Compared to standard TMs, ETHEREAL TM models can reduce model size by up to 87.54%, with only a minor accuracy compromise. We validate the impact of this compression on eight real-world Tiny machine learning (TinyML) datasets against standard TM, equivalent Random Forest (RF) and Binarized Neural Network (BNN) on the STM32F746G-DISCO platform. Our results show that ETHEREAL TM models achieve over an order of magnitude reduction in inference time (resulting in higher throughput) and energy consumption compared to BNNs, while maintaining a significantly smaller memory footprint compared to RFs.
null
https://arxiv.org/abs/2502.05640v1
https://arxiv.org/pdf/2502.05640v1.pdf
null
[ "Shengyu Duan", "Rishad Shafik", "Alex Yakovlev" ]
[]
2025-02-08T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/lattice-learning-to-efficiently-compress-the
2504.05646
null
null
Lattice: Learning to Efficiently Compress the Memory
Attention mechanisms have revolutionized sequence learning but suffer from quadratic computational complexity. This paper introduces Lattice, a novel recurrent neural network (RNN) mechanism that leverages the inherent low-rank structure of K-V matrices to efficiently compress the cache into a fixed number of memory slots, achieving sub-quadratic complexity. We formulate this compression as an online optimization problem and derive a dynamic memory update rule based on a single gradient descent step. The resulting recurrence features a state- and input-dependent gating mechanism, offering an interpretable memory update process. The core innovation is the orthogonal update: each memory slot is updated exclusively with information orthogonal to its current state, hence incorporating only novel, non-redundant data, which minimizes interference with previously stored information. The experimental results show that Lattice achieves the best perplexity compared to all baselines across diverse context lengths, with performance improvement becoming more pronounced as the context length increases.
null
https://arxiv.org/abs/2504.05646v1
https://arxiv.org/pdf/2504.05646v1.pdf
null
[ "Mahdi Karami", "Vahab Mirrokni" ]
[]
2025-04-08T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/flexible-language-modeling-in-continuous
2507.00425
null
null
Flexible Language Modeling in Continuous Space with Transformer-based Autoregressive Flows
Autoregressive models have driven remarkable progress in language modeling. Their foundational reliance on discrete tokens, unidirectional context, and single-pass decoding, while central to their success, also inspires the exploration of a design space that could offer new axes of modeling flexibility. In this work, we explore an alternative paradigm, shifting language modeling from a discrete token space to a continuous latent space. We propose a novel framework TarFlowLM, that employs transformer-based autoregressive normalizing flows to model these continuous representations. This approach unlocks substantial flexibility, enabling the construction of models that can capture global bi-directional context through stacked, alternating-direction autoregressive transformations, support block-wise generation with flexible token patch sizes, and facilitate a hierarchical multi-pass generation process. We further propose new mixture-based coupling transformations designed to capture complex dependencies within the latent space shaped by discrete data, and demonstrate theoretical connections to conventional discrete autoregressive models. Extensive experiments on language modeling benchmarks demonstrate strong likelihood performance and highlight the flexible modeling capabilities inherent in our framework.
null
https://arxiv.org/abs/2507.00425v1
https://arxiv.org/pdf/2507.00425v1.pdf
null
[ "Ruixiang Zhang", "Shuangfei Zhai", "Jiatao Gu", "Yizhe Zhang", "Huangjie Zheng", "Tianrong Chen", "Miguel Angel Bautista", "Josh Susskind", "Navdeep Jaitly" ]
[ "Language Modeling", "Language Modelling" ]
2025-07-01T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/mercury-ultra-fast-language-models-based-on
2506.17298
null
null
Mercury: Ultra-Fast Language Models Based on Diffusion
We present Mercury, a new generation of commercial-scale large language models (LLMs) based on diffusion. These models are parameterized via the Transformer architecture and trained to predict multiple tokens in parallel. In this report, we detail Mercury Coder, our first set of diffusion LLMs designed for coding applications. Currently, Mercury Coder comes in two sizes: Mini and Small. These models set a new state-of-the-art on the speed-quality frontier. Based on independent evaluations conducted by Artificial Analysis, Mercury Coder Mini and Mercury Coder Small achieve state-of-the-art throughputs of 1109 tokens/sec and 737 tokens/sec, respectively, on NVIDIA H100 GPUs and outperform speed-optimized frontier models by up to 10x on average while maintaining comparable quality. We discuss additional results on a variety of code benchmarks spanning multiple languages and use-cases as well as real-world validation by developers on Copilot Arena, where the model currently ranks second on quality and is the fastest model overall. We also release a public API at https://platform.inceptionlabs.ai/ and free playground at https://chat.inceptionlabs.ai
null
https://arxiv.org/abs/2506.17298v1
https://arxiv.org/pdf/2506.17298v1.pdf
null
[ "Inception Labs", "Samar Khanna", "Siddhant Kharbanda", "Shufan Li", "Harshit Varma", "Eric Wang", "Sawyer Birnbaum", "Ziyang Luo", "Yanis Miraoui", "Akash Palrecha", "Stefano Ermon", "Aditya Grover", "Volodymyr Kuleshov" ]
[]
2025-06-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/pyvision-agentic-vision-with-dynamic-tooling
2507.07998
null
null
PyVision: Agentic Vision with Dynamic Tooling
LLMs are increasingly deployed as agents, systems capable of planning, reasoning, and dynamically calling external tools. However, in visual reasoning, prior approaches largely remain limited by predefined workflows and static toolsets. In this report, we present PyVision, an interactive, multi-turn framework that enables MLLMs to autonomously generate, execute, and refine Python-based tools tailored to the task at hand, unlocking flexible and interpretable problem-solving. We develop a taxonomy of the tools created by PyVision and analyze their usage across a diverse set of benchmarks. Quantitatively, PyVision achieves consistent performance gains, boosting GPT-4.1 by +7.8% on V* and Claude-4.0-Sonnet by +31.1% on VLMsAreBlind-mini. These results point to a broader shift: dynamic tooling allows models not just to use tools, but to invent them, advancing toward more agentic visual reasoning.
null
https://arxiv.org/abs/2507.07998v2
https://arxiv.org/pdf/2507.07998v2.pdf
null
[ "Shitian Zhao", "Haoquan Zhang", "Shaoheng Lin", "Ming Li", "Qilong Wu", "Kaipeng Zhang", "Chen Wei" ]
[ "Visual Reasoning" ]
2025-07-10T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/thinking-with-generated-images
2505.22525
null
null
Thinking with Generated Images
We present Thinking with Generated Images, a novel paradigm that fundamentally transforms how large multimodal models (LMMs) engage with visual reasoning by enabling them to natively think across text and vision modalities through spontaneous generation of intermediate visual thinking steps. Current visual reasoning with LMMs is constrained to either processing fixed user-provided images or reasoning solely through text-based chain-of-thought (CoT). Thinking with Generated Images unlocks a new dimension of cognitive capability where models can actively construct intermediate visual thoughts, critique their own visual hypotheses, and refine them as integral components of their reasoning process. We demonstrate the effectiveness of our approach through two complementary mechanisms: (1) vision generation with intermediate visual subgoals, where models decompose complex visual tasks into manageable components that are generated and integrated progressively, and (2) vision generation with self-critique, where models generate an initial visual hypothesis, analyze its shortcomings through textual reasoning, and produce refined outputs based on their own critiques. Our experiments on vision generation benchmarks show substantial improvements over baseline approaches, with our models achieving up to 50% (from 38% to 57%) relative improvement in handling complex multi-object scenarios. From biochemists exploring novel protein structures, and architects iterating on spatial designs, to forensic analysts reconstructing crime scenes, and basketball players envisioning strategic plays, our approach enables AI models to engage in the kind of visual imagination and iterative refinement that characterizes human creative, analytical, and strategic thinking. We release our open-source suite at https://github.com/GAIR-NLP/thinking-with-generated-images.
null
https://arxiv.org/abs/2505.22525v1
https://arxiv.org/pdf/2505.22525v1.pdf
null
[ "Ethan Chern", "Zhulin Hu", "Steffi Chern", "Siqi Kou", "Jiadi Su", "Yan Ma", "Zhijie Deng", "PengFei Liu" ]
[ "Visual Reasoning" ]
2025-05-28T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/crosswordbench-evaluating-the-reasoning
2504.00043
null
null
CrossWordBench: Evaluating the Reasoning Capabilities of LLMs and LVLMs with Controllable Puzzle Generation
Existing reasoning evaluation frameworks for Large Language Models (LLMs) and Large Vision-Language Models (LVLMs) predominantly either assess text-based reasoning or vision-language understanding capabilities, with limited dynamic interplay between textual and visual constraints. To address this limitation, we introduce CrossWordBench, a benchmark designed to evaluate the reasoning capabilities of both LLMs and LVLMs through the medium of crossword puzzles: a task requiring multimodal adherence to semantic constraints from text-based clues and intersectional constraints from visual grid structures. CrossWordBench leverages a controllable puzzle generation framework that produces puzzles in multiple formats (text and image) and offers different evaluation strategies ranging from direct puzzle solving to interactive modes. Our extensive evaluation of over 20 models reveals that reasoning LLMs outperform non-reasoning models substantially by effectively leveraging crossing-letter constraints. We further demonstrate that LVLMs struggle with the task, showing a strong correlation between their puzzle-solving performance and grid-parsing accuracy. Our findings offer insights into the limitations of the reasoning capabilities of current LLMs and LVLMs, and provide an effective approach for creating multimodal constrained tasks for future evaluations.
null
https://arxiv.org/abs/2504.00043v1
https://arxiv.org/pdf/2504.00043v1.pdf
null
[ "Jixuan Leng", "Chengsong Huang", "Langlin Huang", "Bill Yuchen Lin", "William W. Cohen", "Haohan Wang", "Jiaxin Huang" ]
[]
2025-03-30T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/ocr-reasoning-benchmark-unveiling-the-true
2505.17163
null
null
OCR-Reasoning Benchmark: Unveiling the True Capabilities of MLLMs in Complex Text-Rich Image Reasoning
Recent advancements in multimodal slow-thinking systems have demonstrated remarkable performance across diverse visual reasoning tasks. However, their capabilities in text-rich image reasoning tasks remain understudied due to the lack of a systematic benchmark. To address this gap, we propose OCR-Reasoning, a comprehensive benchmark designed to systematically assess Multimodal Large Language Models on text-rich image reasoning tasks. The benchmark comprises 1,069 human-annotated examples spanning 6 core reasoning abilities and 18 practical reasoning tasks in text-rich visual scenarios. Furthermore, unlike other text-rich image understanding benchmarks that only annotate the final answers, OCR-Reasoning also annotates the reasoning process simultaneously. With the annotated reasoning process and the final answers, OCR-Reasoning evaluates not only the final answers generated by models but also their reasoning processes, enabling a holistic analysis of their problem-solving abilities. Leveraging this benchmark, we conducted a comprehensive evaluation of state-of-the-art MLLMs. Our results demonstrate the limitations of existing methodologies. Notably, even state-of-the-art MLLMs exhibit substantial difficulties, with none achieving accuracy surpassing 50% across OCR-Reasoning, indicating that the challenges of text-rich image reasoning are an urgent issue to be addressed. The benchmark and evaluation scripts are available at https://github.com/SCUT-DLVCLab/OCR-Reasoning.
null
https://arxiv.org/abs/2505.17163v1
https://arxiv.org/pdf/2505.17163v1.pdf
null
[ "Mingxin Huang", "Yongxin Shi", "Dezhi Peng", "Songxuan Lai", "Zecheng Xie", "Lianwen Jin" ]
[ "Optical Character Recognition (OCR)", "Visual Reasoning" ]
2025-05-22T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/gemex-thinkvg-towards-thinking-with-visual
2506.17939
null
null
GEMeX-ThinkVG: Towards Thinking with Visual Grounding in Medical VQA via Reinforcement Learning
Medical visual question answering aims to support clinical decision-making by enabling models to answer natural language questions based on medical images. While recent advances in multi-modal learning have significantly improved performance, current methods still suffer from limited answer reliability and poor interpretability, impairing the ability of clinicians and patients to understand and trust model-generated answers. To address this, this work first proposes a Thinking with Visual Grounding (ThinkVG) dataset wherein the answer generation is decomposed into intermediate reasoning steps that explicitly ground relevant visual regions of the medical image, thereby providing fine-grained explainability. Furthermore, we introduce a novel verifiable reward mechanism for reinforcement learning to guide post-training, improving the alignment between the model's reasoning process and its final answer. Remarkably, our method achieves comparable performance using only one-eighth of the training data, demonstrating the efficiency and effectiveness of the proposal. The dataset is available at https://huggingface.co/datasets/BoKelvin/GEMeX-ThinkVG.
null
https://arxiv.org/abs/2506.17939v1
https://arxiv.org/pdf/2506.17939v1.pdf
null
[ "Bo Liu", "Xiangyu Zhao", "Along He", "Yidi Chen", "Huazhu Fu", "Xiao-Ming Wu" ]
[ "Answer Generation", "Decision Making", "Medical Visual Question Answering", "Question Answering", "Visual Grounding", "Visual Question Answering", "Visual Question Answering (VQA)" ]
2025-06-22T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/linear-attention-with-global-context-a-1
2507.02748
null
null
Linear Attention with Global Context: A Multipole Attention Mechanism for Vision and Physics
Transformers have become the de facto standard for a wide range of tasks, from image classification to physics simulations. Despite their impressive performance, the quadratic complexity of standard Transformers in both memory and time with respect to the input length makes them impractical for processing high-resolution inputs. Therefore, several variants have been proposed, the most successful relying on patchification, downsampling, or coarsening techniques, often at the cost of losing the finest-scale details. In this work, we take a different approach. Inspired by state-of-the-art techniques in $n$-body numerical simulations, we cast attention as an interaction problem between grid points. We introduce the Multipole Attention Neural Operator (MANO), which computes attention in a distance-based multiscale fashion. MANO maintains, in each attention head, a global receptive field and achieves linear time and memory complexity with respect to the number of grid points. Empirical results on image classification and Darcy flows demonstrate that MANO rivals state-of-the-art models such as ViT and Swin Transformer, while reducing runtime and peak memory usage by orders of magnitude. We open source our code for reproducibility at https://github.com/AlexColagrande/MANO.
null
https://arxiv.org/abs/2507.02748v1
https://arxiv.org/pdf/2507.02748v1.pdf
null
[ "Alex Colagrande", "Paul Caillon", "Eva Feillet", "Alexandre Allauzen" ]
[ "image-classification", "Image Classification" ]
2025-07-03T00:00:00
https://arxiv.org/abs/2507.02748
https://arxiv.org/pdf/2507.02748
linear-attention-with-global-context-a
null
[]
https://paperswithcode.com/paper/dreamvla-a-vision-language-action-model-1
2507.04447
null
null
DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge
Recent advances in vision-language-action (VLA) models have shown promise in integrating image generation with action prediction to improve generalization and reasoning in robot manipulation. However, existing methods are limited to challenging image-based forecasting, which suffers from redundant information and lacks comprehensive and critical world knowledge, including dynamic, spatial and semantic information. To address these limitations, we propose DreamVLA, a novel VLA framework that integrates comprehensive world knowledge forecasting to enable inverse dynamics modeling, thereby establishing a perception-prediction-action loop for manipulation tasks. Specifically, DreamVLA introduces a dynamic-region-guided world knowledge prediction, integrated with the spatial and semantic cues, which provide compact yet comprehensive representations for action planning. This design aligns with how humans interact with the world by first forming abstract multimodal reasoning chains before acting. To mitigate interference among the dynamic, spatial and semantic information during training, we adopt a block-wise structured attention mechanism that masks their mutual attention, preventing information leakage and keeping each representation clean and disentangled. Moreover, to model the conditional distribution over future actions, we employ a diffusion-based transformer that disentangles action representations from shared latent features. Extensive experiments on both real-world and simulation environments demonstrate that DreamVLA achieves 76.7% success rate on real robot tasks and 4.44 average length on the CALVIN ABC-D benchmarks.
null
https://arxiv.org/abs/2507.04447v2
https://arxiv.org/pdf/2507.04447v2.pdf
null
[ "Wenyao Zhang", "Hongsi Liu", "Zekun Qi", "Yunnan Wang", "Xinqiang Yu", "Jiazhao Zhang", "Runpei Dong", "JiaWei He", "He Wang", "Zhizheng Zhang", "Li Yi", "Wenjun Zeng", "Xin Jin" ]
[ "Image Generation", "Multimodal Reasoning", "Robot Manipulation", "Vision-Language-Action", "World Knowledge" ]
2025-07-06T00:00:00
https://arxiv.org/abs/2507.04447
https://arxiv.org/pdf/2507.04447
dreamvla-a-vision-language-action-model
null
[]
https://paperswithcode.com/paper/bmfm-dna-a-snp-aware-dna-foundation-model-to
2507.05265
null
null
BMFM-DNA: A SNP-aware DNA foundation model to capture variant effects
Large language models (LLMs) trained on text demonstrated remarkable results on natural language processing (NLP) tasks. These models have been adapted to decipher the language of DNA, where sequences of nucleotides act as "words" that encode genomic functions. However, the genome differs fundamentally from natural language, as it lacks clearly defined words or a consistent grammar. Although DNA language models (DNALMs) such as DNABERT, GENA-LM have achieved high level of performance on genome-related biological tasks, these models do not encode biological functions in the presence of sequence variations. To address this problem, we pre-train foundation models that effectively integrate sequence variations, in particular Single Nucleotide Polymorphisms (SNPs), as they underlie important biological functions. Specifically, we use ModernBERT to pre-train two different Biomedical Foundation Models (BMFM), namely, BMFM-DNA-REF in which the model is trained with sequences of varying lengths along with their reverse complements derived from the reference genome and BMFM-DNA-SNP in which the model is trained with sequences created using a novel representation scheme that encodes sequence variations. Our findings indicate that integrating sequence variations into DNALMs helps capture the biological functions as seen in improvements on all fine-tuning tasks. To explore the model's practical utility, we experimented with various strategies for SNP imputation on promoter detection task introduced in DNABERT-2. However, we acknowledge that the current benchmarks are limited in their ability to fully evaluate these models. To enable more comprehensive assessment in the future and encourage community contributions, we release our models through HuggingFace and the code to reproduce the results at https://github.com/BiomedSciAI/biomed-multi-omic
null
https://arxiv.org/abs/2507.05265v1
https://arxiv.org/pdf/2507.05265v1.pdf
null
[ "Hongyang Li", "Sanjoy Dey", "Bum Chul Kwon", "Michael Danziger", "Michal Rosen-Tzvi", "Jianying Hu", "James Kozloski", "Ching-Huei Tsou", "Bharath Dandala", "Pablo Meyer" ]
[ "Imputation", "Promoter Detection" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/brainlesion-suite-a-flexible-and-user
2507.09036
null
null
BrainLesion Suite: A Flexible and User-Friendly Framework for Modular Brain Lesion Image Analysis
BrainLesion Suite is a versatile toolkit for building modular brain lesion image analysis pipelines in Python. Following Pythonic principles, BrainLesion Suite is designed to provide a 'brainless' development experience, minimizing cognitive effort and streamlining the creation of complex workflows for clinical and scientific practice. At its core is an adaptable preprocessing module that performs co-registration, atlas registration, and optional skull-stripping and defacing on arbitrary multi-modal input images. BrainLesion Suite leverages algorithms from the BraTS challenge to synthesize missing modalities, inpaint lesions, and generate pathology-specific tumor segmentations. BrainLesion Suite also enables quantifying segmentation model performance, with tools such as panoptica to compute lesion-wise metrics. Although BrainLesion Suite was originally developed for image analysis pipelines of brain lesions such as glioma, metastasis, and multiple sclerosis, it can be adapted for other biomedical image analysis applications. The individual BrainLesion Suite packages and tutorials are accessible on GitHub.
null
https://arxiv.org/abs/2507.09036v1
https://arxiv.org/pdf/2507.09036v1.pdf
null
[ "Florian Kofler", "Marcel Rosier", "Mehdi Astaraki", "Hendrik Möller", "Ilhem Isra Mekki", "Josef A. Buchner", "Anton Schmick", "Arianna Pfiffer", "Eva Oswald", "Lucas Zimmer", "Ezequiel de la Rosa", "Sarthak Pati", "Julian Canisius", "Arianna Piffer", "Ujjwal Baid", "Mahyar Valizadeh", "Akis Linardos", "Jan C. Peeken", "Surprosanna Shit", "Felix Steinbauer", "Daniel Rueckert", "Rolf Heckemann", "Spyridon Bakas", "Jan Kirschke", "Constantin von See", "Ivan Ezhov", "Marie Piraud", "Benedikt Wiestler", "Bjoern Menze" ]
[ "Skull Stripping" ]
2025-07-11T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/zipvoice-dialog-non-autoregressive-spoken
2507.09318
null
null
ZipVoice-Dialog: Non-Autoregressive Spoken Dialogue Generation with Flow Matching
Generating spoken dialogue is more challenging than monologue text-to-speech (TTS) due to the need for realistic turn-taking and distinct speaker timbres. Existing spoken dialogue generation models, being auto-regressive, suffer from slow and unstable inference. To overcome these limitations, we introduce ZipVoice-Dialog, a non-autoregressive zero-shot spoken dialogue generation model built upon flow matching. Key designs include: 1) speaker-turn embeddings for precise speaker turn-taking; 2) a curriculum learning strategy for stable speech-text alignment; 3) specialized strategies to enable stereo dialogue generation. Additionally, recognizing the lack of open-source large-scale spoken dialogue datasets, we curated OpenDialog, a 6.8k-hour spoken dialogue dataset from in-the-wild speech data. Furthermore, we established a benchmark to comprehensively evaluate various models. Experimental results demonstrate that ZipVoice-Dialog achieves superior performance in intelligibility, speaker turn-taking accuracy, speaker similarity, and inference speed. Our codes, model checkpoints, demo samples, and the OpenDialog dataset are all publicly available at https://github.com/k2-fsa/ZipVoice.
null
https://arxiv.org/abs/2507.09318v1
https://arxiv.org/pdf/2507.09318v1.pdf
null
[ "Han Zhu", "Wei Kang", "Liyong Guo", "Zengwei Yao", "Fangjun Kuang", "Weiji Zhuang", "Zhaoqing Li", "Zhifeng Han", "Dong Zhang", "Xin Zhang", "Xingchen Song", "Long Lin", "Daniel Povey" ]
[ "Dialogue Generation", "text-to-speech", "Text to Speech" ]
2025-07-12T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/tlb-vfi-temporal-aware-latent-brownian-bridge
2507.04984
null
null
TLB-VFI: Temporal-Aware Latent Brownian Bridge Diffusion for Video Frame Interpolation
Video Frame Interpolation (VFI) aims to predict the intermediate frame $I_n$ (we use n to denote time in videos to avoid notation overload with the timestep $t$ in diffusion models) based on two consecutive neighboring frames $I_0$ and $I_1$. Recent approaches apply diffusion models (both image-based and video-based) in this task and achieve strong performance. However, image-based diffusion models are unable to extract temporal information and are relatively inefficient compared to non-diffusion methods. Video-based diffusion models can extract temporal information, but they are too large in terms of training scale, model size, and inference time. To mitigate the above issues, we propose Temporal-Aware Latent Brownian Bridge Diffusion for Video Frame Interpolation (TLB-VFI), an efficient video-based diffusion model. By extracting rich temporal information from video inputs through our proposed 3D-wavelet gating and temporal-aware autoencoder, our method achieves 20% improvement in FID on the most challenging datasets over recent SOTA of image-based diffusion models. Meanwhile, due to the existence of rich temporal information, our method achieves strong performance while having 3x fewer parameters. Such a parameter reduction results in a 2.3x speed-up. By incorporating optical flow guidance, our method requires 9000x less training data and achieves over 20x fewer parameters than video-based diffusion models. Codes and results are available at our project page: https://zonglinl.github.io/tlbvfi_page.
null
https://arxiv.org/abs/2507.04984v1
https://arxiv.org/pdf/2507.04984v1.pdf
null
[ "Zonglin Lyu", "Chen Chen" ]
[ "Optical Flow Estimation", "Video Frame Interpolation" ]
2025-07-07T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/semantic-frame-interpolation
2507.05173
null
null
Semantic Frame Interpolation
Generating intermediate video content of varying lengths based on given first and last frames, along with text prompt information, offers significant research and application potential. However, traditional frame interpolation tasks primarily focus on scenarios with a small number of frames, no text control, and minimal differences between the first and last frames. Recent community developers have utilized large video models, represented by Wan, to endow them with frame-to-frame capabilities. However, these models can only generate a fixed number of frames and often fail to produce satisfactory results for certain frame lengths, and this setting lacks a clear official definition and a well-established benchmark. In this paper, we first propose a new practical Semantic Frame Interpolation (SFI) task from the perspective of academic definition, which covers the above two settings and supports inference at multiple frame rates. To achieve this goal, we propose a novel SemFi model building upon Wan2.1, which incorporates a Mixture-of-LoRA module to ensure the generation of high-consistency content that aligns with control conditions across various frame length limitations. Furthermore, we propose SFI-300K, the first general-purpose dataset and benchmark specifically designed for SFI. To support this, we collect and process data from the perspective of SFI, carefully designing evaluation metrics and methods to assess the model's performance across multiple dimensions, encompassing image and video, and various aspects, including consistency and diversity. Through extensive experiments on SFI-300K, we demonstrate that our method is particularly well-suited to meet the requirements of the SFI task.
null
https://arxiv.org/abs/2507.05173v1
https://arxiv.org/pdf/2507.05173v1.pdf
null
[ "Yijia Hong", "Jiangning Zhang", "Ran Yi", "Yuji Wang", "Weijian Cao", "Xiaobin Hu", "Zhucun Xue", "Yabiao Wang", "Chengjie Wang", "Lizhuang Ma" ]
[]
2025-07-07T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/mirix-multi-agent-memory-system-for-llm-based
2507.07957
null
null
MIRIX: Multi-Agent Memory System for LLM-Based Agents
Although memory capabilities of AI agents are gaining increasing attention, existing solutions remain fundamentally limited. Most rely on flat, narrowly scoped memory components, constraining their ability to personalize, abstract, and reliably recall user-specific information over time. To this end, we introduce MIRIX, a modular, multi-agent memory system that redefines the future of AI memory by solving the field's most critical challenge: enabling language models to truly remember. Unlike prior approaches, MIRIX transcends text to embrace rich visual and multimodal experiences, making memory genuinely useful in real-world scenarios. MIRIX consists of six distinct, carefully structured memory types: Core, Episodic, Semantic, Procedural, Resource Memory, and Knowledge Vault, coupled with a multi-agent framework that dynamically controls and coordinates updates and retrieval. This design enables agents to persist, reason over, and accurately retrieve diverse, long-term user data at scale. We validate MIRIX in two demanding settings. First, on ScreenshotVQA, a challenging multimodal benchmark comprising nearly 20,000 high-resolution computer screenshots per sequence, requiring deep contextual understanding and where no existing memory systems can be applied, MIRIX achieves 35% higher accuracy than the RAG baseline while reducing storage requirements by 99.9%. Second, on LOCOMO, a long-form conversation benchmark with single-modal textual input, MIRIX attains state-of-the-art performance of 85.4%, far surpassing existing baselines. These results show that MIRIX sets a new performance standard for memory-augmented LLM agents. To allow users to experience our memory system, we provide a packaged application powered by MIRIX. It monitors the screen in real time, builds a personalized memory base, and offers intuitive visualization and secure local storage to ensure privacy.
null
https://arxiv.org/abs/2507.07957v1
https://arxiv.org/pdf/2507.07957v1.pdf
null
[ "Yu Wang", "Xi Chen" ]
[ "RAG" ]
2025-07-10T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/evaluating-memory-in-llm-agents-via
2507.05257
null
null
Evaluating Memory in LLM Agents via Incremental Multi-Turn Interactions
Recent benchmarks for Large Language Model (LLM) agents primarily focus on evaluating reasoning, planning, and execution capabilities, while another critical component-memory, encompassing how agents memorize, update, and retrieve long-term information-is under-evaluated due to the lack of benchmarks. We term agents with memory mechanisms as memory agents. In this paper, we identify four core competencies essential for memory agents: accurate retrieval, test-time learning, long-range understanding, and conflict resolution. Existing datasets either rely on limited context lengths or are tailored for static, long-context settings like book-based QA, which do not reflect the interactive, multi-turn nature of memory agents that incrementally accumulate information. Furthermore, no existing benchmarks cover all four competencies. Therefore, we introduce MemoryAgentBench, a new benchmark specifically designed for memory agents. Our benchmark combines reformulated existing datasets with newly constructed ones, covering the above four memory competencies, providing a systematic and challenging testbed for assessing memory quality. We evaluate a diverse set of memory agents, ranging from simple context-based and retrieval-augmented generation (RAG) systems to advanced agents with external memory modules and tool integration. Empirical results reveal that current methods fall short of mastering all four competencies, underscoring the need for further research into comprehensive memory mechanisms for LLM agents.
null
https://arxiv.org/abs/2507.05257v1
https://arxiv.org/pdf/2507.05257v1.pdf
null
[ "Yuanzhe Hu", "Yu Wang", "Julian McAuley" ]
[ "Large Language Model", "RAG", "Retrieval", "Retrieval-augmented Generation" ]
2025-07-07T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/state-and-memory-is-all-you-need-for-robust
2507.00081
null
null
State and Memory is All You Need for Robust and Reliable AI Agents
Large language models (LLMs) have enabled powerful advances in natural language understanding and generation. Yet their application to complex, real-world scientific workflows remains limited by challenges in memory, planning, and tool integration. Here, we introduce SciBORG (Scientific Bespoke Artificial Intelligence Agents Optimized for Research Goals), a modular agentic framework that allows LLM-based agents to autonomously plan, reason, and achieve robust and reliable domain-specific task execution. Agents are constructed dynamically from source code documentation and augmented with finite-state automata (FSA) memory, enabling persistent state tracking and context-aware decision-making. This approach eliminates the need for manual prompt engineering and allows for robust, scalable deployment across diverse applications by maintaining context across extended workflows and recovering from tool or execution failures. We validate SciBORG through integration with both physical and virtual hardware, such as microwave synthesizers for executing user-specified reactions with context-aware decision making, and demonstrate its use in autonomous multi-step bioassay retrieval from the PubChem database, utilizing multi-step planning, reasoning, agent-to-agent communication, and coordination for the execution of exploratory tasks. Systematic benchmarking shows that SciBORG agents achieve reliable execution, adaptive planning, and interpretable state transitions. Our results show that memory and state awareness are critical enablers of agentic planning and reliability, offering a generalizable foundation for deploying AI agents in complex environments.
null
https://arxiv.org/abs/2507.00081v1
https://arxiv.org/pdf/2507.00081v1.pdf
null
[ "Matthew Muhoberac", "Atharva Parikh", "Nirvi Vakharia", "Saniya Virani", "Aco Radujevic", "Savannah Wood", "Meghav Verma", "Dimitri Metaxotos", "Jeyaraman Soundararajan", "Thierry Masquelin", "Alexander G. Godfrey", "Sean Gardner", "Dobrila Rudnicki", "Sam Michael", "Gaurav Chopra" ]
[ "All", "Benchmarking", "Decision Making", "Natural Language Understanding", "Prompt Engineering" ]
2025-06-30T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/multi-agent-retrieval-augmented-framework-for
2507.07307
null
null
Multi-Agent Retrieval-Augmented Framework for Evidence-Based Counterspeech Against Health Misinformation
Large language models (LLMs) incorporated with Retrieval-Augmented Generation (RAG) have demonstrated powerful capabilities in generating counterspeech against misinformation. However, current studies rely on limited evidence and offer less control over final outputs. To address these challenges, we propose a Multi-agent Retrieval-Augmented Framework to generate counterspeech against health misinformation, incorporating multiple LLMs to optimize knowledge retrieval, evidence enhancement, and response refinement. Our approach integrates both static and dynamic evidence, ensuring that the generated counterspeech is relevant, well-grounded, and up-to-date. Our method outperforms baseline approaches in politeness, relevance, informativeness, and factual accuracy, demonstrating its effectiveness in generating high-quality counterspeech. To further validate our approach, we conduct ablation studies to verify the necessity of each component in our framework. Furthermore, human evaluations reveal that refinement significantly enhances counterspeech quality and obtains human preference.
null
https://arxiv.org/abs/2507.07307v1
https://arxiv.org/pdf/2507.07307v1.pdf
null
[ "Anirban Saha Anik", "Xiaoying Song", "Elliott Wang", "Bryan Wang", "Bengisu Yarimbas", "Lingzi Hong" ]
[ "Informativeness", "Misinformation", "RAG", "Retrieval", "Retrieval-augmented Generation" ]
2025-07-09T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/ai-vaxguide-an-agentic-rag-based-llm-for
2507.03493
null
null
AI-VaxGuide: An Agentic RAG-Based LLM for Vaccination Decisions
Vaccination plays a vital role in global public health, yet healthcare professionals often struggle to access immunization guidelines quickly and efficiently. National protocols and WHO recommendations are typically extensive and complex, making it difficult to extract precise information, especially during urgent situations. This project tackles that issue by developing a multilingual, intelligent question-answering system that transforms static vaccination guidelines into an interactive and user-friendly knowledge base. Built on a Retrieval-Augmented Generation (RAG) framework and enhanced with agent-based reasoning (Agentic RAG), the system provides accurate, context-sensitive answers to complex medical queries. Evaluation shows that Agentic RAG outperforms traditional methods, particularly in addressing multi-step or ambiguous questions. To support clinical use, the system is integrated into a mobile application designed for real-time, point-of-care access to essential vaccine information. AI-VaxGuide model is publicly available on https://huggingface.co/VaxGuide
null
https://arxiv.org/abs/2507.03493v1
https://arxiv.org/pdf/2507.03493v1.pdf
null
[ "Abdellah Zeggai", "Ilyes Traikia", "Abdelhak Lakehal", "Abdennour Boulesnane" ]
[ "Question Answering", "RAG", "Retrieval-augmented Generation" ]
2025-07-04T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/visualtrap-a-stealthy-backdoor-attack-on-gui
2507.06899
null
null
VisualTrap: A Stealthy Backdoor Attack on GUI Agents via Visual Grounding Manipulation
Graphical User Interface (GUI) agents powered by Large Vision-Language Models (LVLMs) have emerged as a revolutionary approach to automating human-machine interactions, capable of autonomously operating personal devices (e.g., mobile phones) or applications within the device to perform complex real-world tasks in a human-like manner. However, their close integration with personal devices raises significant security concerns, with many threats, including backdoor attacks, remaining largely unexplored. This work reveals that the visual grounding of GUI agents, i.e., mapping textual plans to GUI elements, can introduce vulnerabilities, enabling new types of backdoor attacks. With a backdoor attack targeting visual grounding, the agent's behavior can be compromised even when given correct task-solving plans. To validate this vulnerability, we propose VisualTrap, a method that can hijack the grounding by misleading the agent to locate textual plans at trigger locations instead of the intended targets. VisualTrap uses the common method of injecting poisoned data for attacks, and does so during the pre-training of visual grounding to ensure the practical feasibility of the attack. Empirical results show that VisualTrap can effectively hijack visual grounding with as little as 5% poisoned data and highly stealthy visual triggers (invisible to the human eye); the attack can also be generalized to downstream tasks, even after clean fine-tuning. Moreover, the injected trigger can remain effective across different GUI environments, e.g., being trained on mobile/web and generalizing to desktop environments. These findings underscore the urgent need for further research on backdoor attack risks in GUI agents.
null
https://arxiv.org/abs/2507.06899v1
https://arxiv.org/pdf/2507.06899v1.pdf
null
[ "Ziang Ye", "Yang Zhang", "Wentao Shi", "Xiaoyu You", "Fuli Feng", "Tat-Seng Chua" ]
[ "Backdoor Attack", "Visual Grounding" ]
2025-07-09T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/opentable-r1-a-reinforcement-learning
2507.03018
null
null
OpenTable-R1: A Reinforcement Learning Augmented Tool Agent for Open-Domain Table Question Answering
Open-domain table question answering traditionally relies on a two-stage pipeline: static table retrieval followed by a closed-domain answer. In contrast, we propose an end-to-end agentic framework that embeds multi-turn tool calls-using a BM25+-based search API and a SQLite SQL executor-directly into a large language model. To further adapt a compact 4B-parameter model, we introduce a two-stage fine-tuning process: supervised cold-start on easy questions, then Async GRPO reinforcement learning on harder cases with LoRA adapters and a rollout buffer. This unified approach enables the model to jointly retrieve, reason, and execute queries, yielding a dramatic accuracy improvement from single-digit zero-shot performance to over 0.86 exact match on a held-out test set. Our results underscore the effectiveness of integrating structured tool calls with targeted RL fine-tuning for scalable, accurate table QA. The code is available at https://github.com/TabibitoQZP/OpenTableR1.
null
https://arxiv.org/abs/2507.03018v1
https://arxiv.org/pdf/2507.03018v1.pdf
null
[ "Zipeng Qiu" ]
[ "Language Modeling", "Language Modelling", "Large Language Model", "Question Answering", "Table Retrieval" ]
2025-07-02T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/lineretriever-planning-aware-observation
2507.00210
null
null
LineRetriever: Planning-Aware Observation Reduction for Web Agents
While large language models have demonstrated impressive capabilities in web navigation tasks, the extensive context of web pages, often represented as DOM or Accessibility Tree (AxTree) structures, frequently exceeds model context limits. Current approaches like bottom-up truncation or embedding-based retrieval lose critical information about page state and action history. This is particularly problematic for adaptive planning in web agents, where understanding the current state is essential for determining future actions. We hypothesize that embedding models lack sufficient capacity to capture plan-relevant information, especially when retrieving content that supports future action prediction. This raises a fundamental question: how can retrieval methods be optimized for adaptive planning in web navigation tasks? In response, we introduce \textit{LineRetriever}, a novel approach that leverages a language model to identify and retrieve observation lines most relevant to future navigation steps. Unlike traditional retrieval methods that focus solely on semantic similarity, \textit{LineRetriever} explicitly considers the planning horizon, prioritizing elements that contribute to action prediction. Our experiments demonstrate that \textit{LineRetriever} can reduce the size of the observation at each step for the web agent while maintaining consistent performance within the context limitations.
null
https://arxiv.org/abs/2507.00210v1
https://arxiv.org/pdf/2507.00210v1.pdf
null
[ "Imene Kerboua", "Sahar Omidi Shayegan", "Megh Thakkar", "Xing Han Lù", "Massimo Caccia", "Véronique Eglin", "Alexandre Aussem", "Jérémy Espinas", "Alexandre Lacoste" ]
[ "Retrieval", "Semantic Similarity", "Semantic Textual Similarity" ]
2025-06-30T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/automating-md-simulations-for-proteins-using
2507.07887
null
null
Automating MD simulations for Proteins using Large language Models: NAMD-Agent
Molecular dynamics simulations are an essential tool for understanding protein structure, dynamics, and function at the atomic level. However, preparing high-quality input files for MD simulations can be a time-consuming and error-prone process. In this work, we introduce an automated pipeline that leverages Large Language Models (LLMs), specifically Gemini 2.0 Flash, in conjunction with Python scripting and Selenium-based web automation to streamline the generation of MD input files. The pipeline exploits CHARMM GUI's comprehensive web-based interface for preparing simulation-ready inputs for NAMD. By integrating Gemini's code generation and iterative refinement capabilities, simulation scripts are automatically written, executed, and revised to navigate CHARMM GUI, extract appropriate parameters, and produce the required NAMD input files. Post-processing is performed using additional software to further refine the simulation outputs, thereby enabling a complete and largely hands-free workflow. Our results demonstrate that this approach reduces setup time, minimizes manual errors, and offers a scalable solution for handling multiple protein systems in parallel. This automated framework paves the way for broader application of LLMs in computational structural biology, offering a robust and adaptable platform for future developments in simulation automation.
null
https://arxiv.org/abs/2507.07887v1
https://arxiv.org/pdf/2507.07887v1.pdf
null
[ "Achuth Chandrasekhar", "Amir Barati Farimani" ]
[ "Code Generation", "Navigate" ]
2025-07-10T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/translaw-benchmarking-large-language-models
2507.00875
null
null
TransLaw: Benchmarking Large Language Models in Multi-Agent Simulation of the Collaborative Translation
Multi-agent systems empowered by large language models (LLMs) have demonstrated remarkable capabilities in a wide range of downstream applications, including machine translation. However, the potential of LLMs in translating Hong Kong legal judgments remains uncertain due to challenges such as intricate legal terminology, culturally embedded nuances, and strict linguistic structures. In this work, we introduce TransLaw, a novel multi-agent framework implemented for real-world Hong Kong case law translation. It employs three specialized agents, namely, Translator, Annotator, and Proofreader, to collaboratively produce translations for high accuracy in legal meaning, appropriateness in style, and adequate coherence and cohesion in structure. This framework supports customizable LLM configurations and achieves tremendous cost reduction compared to professional human translation services. We evaluated its performance using 13 open-source and commercial LLMs as agents and obtained interesting findings, including that it surpasses GPT-4o in legal semantic accuracy, structural coherence, and stylistic fidelity, yet trails human experts in contextualizing complex terminology and stylistic naturalness. Our platform website is available at CityUHK, and our bilingual judgment corpus used for the evaluation is available at Hugging Face.
null
https://arxiv.org/abs/2507.00875v1
https://arxiv.org/pdf/2507.00875v1.pdf
null
[ "Xi Xuan", "King-kui Sin", "Yufei Zhou", "Chunyu Kit" ]
[ "Benchmarking", "Machine Translation", "Translation" ]
2025-07-01T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/toward-real-world-chinese-psychological
2507.07509
null
null
Toward Real-World Chinese Psychological Support Dialogues: CPsDD Dataset and a Co-Evolving Multi-Agent System
The growing need for psychological support due to increasing pressures has exposed the scarcity of relevant datasets, particularly in non-English languages. To address this, we propose a framework that leverages limited real-world data and expert knowledge to fine-tune two large language models: Dialog Generator and Dialog Modifier. The Generator creates large-scale psychological counseling dialogues based on predefined paths, which guide system response strategies and user interactions, forming the basis for effective support. The Modifier refines these dialogues to align with real-world data quality. Through both automated and manual review, we construct the Chinese Psychological support Dialogue Dataset (CPsDD), containing 68K dialogues across 13 groups, 16 psychological problems, 13 causes, and 12 support focuses. Additionally, we introduce the Comprehensive Agent Dialogue Support System (CADSS), where a Profiler analyzes user characteristics, a Summarizer condenses dialogue history, a Planner selects strategies, and a Supporter generates empathetic responses. The experimental results of the Strategy Prediction and Emotional Support Conversation (ESC) tasks demonstrate that CADSS achieves state-of-the-art performance on both CPsDD and ESConv datasets.
null
https://arxiv.org/abs/2507.07509v1
https://arxiv.org/pdf/2507.07509v1.pdf
null
[ "Yuanchen Shi", "Longyin Zhang", "Fang Kong" ]
[]
2025-07-10T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/rlver-reinforcement-learning-with-verifiable
2507.03112
null
null
RLVER: Reinforcement Learning with Verifiable Emotion Rewards for Empathetic Agents
Large language models (LLMs) excel at logical and algorithmic reasoning, yet their emotional intelligence (EQ) still lags far behind their cognitive prowess. While reinforcement learning from verifiable rewards (RLVR) has advanced in other domains, its application to dialogue-especially for emotional intelligence-remains underexplored. In this work, we introduce RLVER, the first end-to-end reinforcement learning framework that leverages verifiable emotion rewards from simulated users to cultivate higher-order empathetic abilities in LLMs. Within this framework, self-consistent affective simulated users engage in dialogue rollouts and produce deterministic emotion scores during conversations, serving as reward signals to guide the LLM's learning. Fine-tuning publicly available Qwen2.5-7B-Instruct model with PPO boosts its Sentient-Benchmark score from 13.3 to 79.2 while largely preserving mathematical and coding competence. Extensive experiments reveal that: (i) RLVER consistently improves multiple dialogue capabilities; (ii) Thinking and non-thinking models show distinct trends--thinking models excel in empathy and insight, while non-thinking models favor action; (iii) GRPO often yields stable gains, while PPO can push certain capabilities to a higher ceiling; (iv) More challenging environments are not always better-moderate ones can yield stronger outcomes. Our results show that RLVER is a practical route toward emotionally intelligent and broadly capable language agents.
null
https://arxiv.org/abs/2507.03112v1
https://arxiv.org/pdf/2507.03112v1.pdf
null
[ "Peisong Wang", "Ruotian Ma", "Bang Zhang", "Xingyu Chen", "Zhiwei He", "Kang Luo", "Qingsong Lv", "Qingxuan Jiang", "Zheng Xie", "Shanyi Wang", "Yuan Li", "Fanghua Ye", "Jian Li", "Yifan Yang", "Zhaopeng Tu", "Xiaolong Li" ]
[ "Emotional Intelligence", "reinforcement-learning", "Reinforcement Learning" ]
2025-07-03T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/stella-self-evolving-llm-agent-for-biomedical
2507.02004
null
null
STELLA: Self-Evolving LLM Agent for Biomedical Research
The rapid growth of biomedical data, tools, and literature has created a fragmented research landscape that outpaces human expertise. While AI agents offer a solution, they typically rely on static, manually curated toolsets, limiting their ability to adapt and scale. Here, we introduce STELLA, a self-evolving AI agent designed to overcome these limitations. STELLA employs a multi-agent architecture that autonomously improves its own capabilities through two core mechanisms: an evolving Template Library for reasoning strategies and a dynamic Tool Ocean that expands as a Tool Creation Agent automatically discovers and integrates new bioinformatics tools. This allows STELLA to learn from experience. We demonstrate that STELLA achieves state-of-the-art accuracy on a suite of biomedical benchmarks, scoring approximately 26\% on Humanity's Last Exam: Biomedicine, 54\% on LAB-Bench: DBQA, and 63\% on LAB-Bench: LitQA, outperforming leading models by up to 6 percentage points. More importantly, we show that its performance systematically improves with experience; for instance, its accuracy on the Humanity's Last Exam benchmark almost doubles with increased trials. STELLA represents a significant advance towards AI Agent systems that can learn and grow, dynamically scaling their expertise to accelerate the pace of biomedical discovery.
null
https://arxiv.org/abs/2507.02004v1
https://arxiv.org/pdf/2507.02004v1.pdf
null
[ "Ruofan Jin", "Zaixi Zhang", "Mengdi Wang", "Le Cong" ]
[ "AI Agent", "Humanity's Last Exam", "Self-Evolving AI" ]
2025-07-01T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/large-language-model-agent-for-modular-task
2507.02925
null
null
Large Language Model Agent for Modular Task Execution in Drug Discovery
We present a modular framework powered by large language models (LLMs) that automates and streamlines key tasks across the early-stage computational drug discovery pipeline. By combining LLM reasoning with domain-specific tools, the framework performs biomedical data retrieval, domain-specific question answering, molecular generation, property prediction, property-aware molecular refinement, and 3D protein-ligand structure generation. In a case study targeting BCL-2 in lymphocytic leukemia, the agent autonomously retrieved relevant biomolecular information-including FASTA sequences, SMILES representations, and literature-and answered mechanistic questions with improved contextual accuracy over standard LLMs. It then generated chemically diverse seed molecules and predicted 67 ADMET-related properties, which guided iterative molecular refinement. Across two refinement rounds, the number of molecules with QED > 0.6 increased from 34 to 55, and those passing at least four out of five empirical drug-likeness rules rose from 29 to 52, within a pool of 194 molecules. The framework also employed Boltz-2 to generate 3D protein-ligand complexes and provide rapid binding affinity estimates for candidate compounds. These results demonstrate that the approach effectively supports molecular screening, prioritization, and structure evaluation. Its modular design enables flexible integration of evolving tools and models, providing a scalable foundation for AI-assisted therapeutic discovery.
null
https://arxiv.org/abs/2507.02925v1
https://arxiv.org/pdf/2507.02925v1.pdf
null
[ "Janghoon Ock", "Radheesh Sharma Meda", "Srivathsan Badrinarayanan", "Neha S. Aluru", "Achuth Chandrasekhar", "Amir Barati Farimani" ]
[ "Drug Discovery", "Language Modeling", "Language Modelling", "Large Language Model", "Property Prediction", "Question Answering" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/ecom-bench-can-llm-agent-resolve-real-world-e
2507.05639
null
null
ECom-Bench: Can LLM Agent Resolve Real-World E-commerce Customer Support Issues?
In this paper, we introduce ECom-Bench, the first benchmark framework for evaluating LLM agent with multimodal capabilities in the e-commerce customer support domain. ECom-Bench features dynamic user simulation based on persona information collected from real e-commerce customer interactions and a realistic task dataset derived from authentic e-commerce dialogues. These tasks, covering a wide range of business scenarios, are designed to reflect real-world complexities, making ECom-Bench highly challenging. For instance, even advanced models like GPT-4o achieve only a 10-20% pass^3 metric in our benchmark, highlighting the substantial difficulties posed by complex e-commerce scenarios. Upon publication, the code and data will be open-sourced to facilitate further research and development in this domain.
null
https://arxiv.org/abs/2507.05639v1
https://arxiv.org/pdf/2507.05639v1.pdf
null
[ "Haoxin Wang", "Xianhan Peng", "Xucheng Huang", "Yizhe Huang", "Ming Gong", "ChengHan Yang", "Yang Liu", "Ling Jiang" ]
[ "User Simulation" ]
2025-07-08T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/mindflow-revolutionizing-e-commerce-customer
2507.05330
null
null
MindFlow: Revolutionizing E-commerce Customer Support with Multimodal LLM Agents
Recent advances in large language models (LLMs) have enabled new applications in e-commerce customer service. However, their capabilities remain constrained in complex, multimodal scenarios. We present MindFlow, the first open-source multimodal LLM agent tailored for e-commerce. Built on the CoALA framework, it integrates memory, decision-making, and action modules, and adopts a modular "MLLM-as-Tool" strategy for effective visual-textual reasoning. Evaluated via online A/B testing and simulation-based ablation, MindFlow demonstrates substantial gains in handling complex queries, improving user satisfaction, and reducing operational costs, with a 93.53% relative improvement observed in real-world deployments.
null
https://arxiv.org/abs/2507.05330v1
https://arxiv.org/pdf/2507.05330v1.pdf
null
[ "Ming Gong", "Xucheng Huang", "ChengHan Yang", "Xianhan Peng", "Haoxin Wang", "Yang Liu", "Ling Jiang" ]
[ "Decision Making" ]
2025-07-07T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/sand-boosting-llm-agents-with-self-taught
2507.07441
null
null
SAND: Boosting LLM Agents with Self-Taught Action Deliberation
Large Language Model (LLM) agents are commonly tuned with supervised finetuning on ReAct-style expert trajectories or preference optimization over pairwise rollouts. Most of these methods focus on imitating specific expert behaviors or promoting chosen reasoning thoughts and actions over rejected ones. However, without reasoning over and comparing alternative actions, LLM agents finetuned with these methods may over-commit to seemingly plausible but suboptimal actions due to limited action space exploration. To address this, in this paper we propose the Self-taught ActioN Deliberation (SAND) framework, enabling LLM agents to explicitly deliberate over candidate actions before committing to one. To tackle the challenges of when and what to deliberate given the large action space and step-level action evaluation, we incorporate self-consistency action sampling and execution-guided action critique to help synthesize step-wise action deliberation thoughts using the base model of the LLM agent. In an iterative manner, the deliberation trajectories are then used to finetune the LLM agent itself. Evaluating on two representative interactive agent tasks, SAND achieves an average 20% improvement over initial supervised finetuning and also outperforms state-of-the-art agent tuning approaches.
null
https://arxiv.org/abs/2507.07441v1
https://arxiv.org/pdf/2507.07441v1.pdf
null
[ "Yu Xia", "Yiran Jenny Shen", "Junda Wu", "Tong Yu", "Sungchul Kim", "Ryan A. Rossi", "Lina Yao", "Julian McAuley" ]
[ "Large Language Model", "Sand" ]
2025-07-10T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/agentic-r1-distilled-dual-strategy-reasoning
2507.05707
null
null
Agentic-R1: Distilled Dual-Strategy Reasoning
Current long chain-of-thought (long-CoT) models excel at mathematical reasoning but rely on slow and error-prone natural language traces. Tool-augmented agents address arithmetic via code execution, but often falter on complex logical tasks. We introduce a fine-tuning framework, DualDistill, that distills complementary reasoning strategies from multiple teachers into a unified student model. Using this approach, we train Agentic-R1, which dynamically selects the optimal strategy for each query, invoking tools for arithmetic and algorithmic problems, and using text-based reasoning for abstract ones. Our method improves accuracy across a range of tasks, including both computation-intensive and standard benchmarks, demonstrating the effectiveness of multi-strategy distillation in achieving robust and efficient reasoning. Our project is available at https://github.com/StigLidu/DualDistill
null
https://arxiv.org/abs/2507.05707v1
https://arxiv.org/pdf/2507.05707v1.pdf
null
[ "Weihua Du", "Pranjal Aggarwal", "Sean Welleck", "Yiming Yang" ]
[ "Mathematical Reasoning" ]
2025-07-08T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/graft-a-graph-based-flow-aware-agentic
2507.03311
null
null
GRAFT: A Graph-based Flow-aware Agentic Framework for Document-level Machine Translation
Document-level Machine Translation (DocMT) approaches often struggle with effectively capturing discourse-level phenomena. Existing approaches rely on heuristic rules to segment documents into discourse units, which rarely align with the true discourse structure required for accurate translation. Otherwise, they fail to maintain consistency throughout the document during translation. To address these challenges, we propose Graph Augmented Agentic Framework for Document Level Translation (GRAFT), a novel graph-based DocMT system that leverages Large Language Model (LLM) agents for document translation. Our approach integrates segmentation, directed acyclic graph (DAG) based dependency modelling, and discourse-aware translation into a cohesive framework. Experiments conducted across eight translation directions and six diverse domains demonstrate that GRAFT achieves significant performance gains over state-of-the-art DocMT systems. Specifically, GRAFT delivers an average improvement of 2.8 d-BLEU on the TED test sets from IWSLT2017 over strong baselines and 2.3 d-BLEU for domain-specific translation from English to Chinese. Moreover, our analyses highlight the consistent ability of GRAFT to address discourse-level phenomena, yielding coherent and contextually accurate translations.
null
https://arxiv.org/abs/2507.03311v1
https://arxiv.org/pdf/2507.03311v1.pdf
null
[ "Himanshu Dutta", "Sunny Manchanda", "Prakhar Bapat", "Meva Ram Gurjar", "Pushpak Bhattacharyya" ]
[ "Document Level Machine Translation", "Document Translation", "Large Language Model", "Machine Translation", "Translation" ]
2025-07-04T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/a-large-language-model-empowered-agent-for
2507.02938
null
null
A Large Language Model-Empowered Agent for Reliable and Robust Structural Analysis
Large language models (LLMs) have exhibited remarkable capabilities across diverse open-domain tasks, yet their application in specialized domains such as civil engineering remains largely unexplored. This paper starts bridging this gap by evaluating and enhancing the reliability and robustness of LLMs in the structural analysis of beams. Reliability is assessed through the accuracy of correct outputs under repetitive runs of the same problems, whereas robustness is evaluated via the performance across varying load and boundary conditions. A benchmark dataset, comprising eight beam analysis problems, is created to test the Llama-3.3 70B Instruct model. Results show that, despite a qualitative understanding of structural mechanics, the LLM lacks the quantitative reliability and robustness required for engineering applications. To address these limitations, a shift is proposed that reframes structural analysis as a code generation task. Accordingly, an LLM-empowered agent is developed that (a) integrates chain-of-thought and few-shot prompting to generate accurate OpenSeesPy code, and (b) automatically executes the code to produce structural analysis results. Experimental results demonstrate that the agent achieves accuracy exceeding 99.0% on the benchmark dataset, exhibiting reliable and robust performance across diverse conditions. Ablation studies highlight the complete example and function usage examples as the primary contributors to the agent's enhanced performance.
null
https://arxiv.org/abs/2507.02938v1
https://arxiv.org/pdf/2507.02938v1.pdf
null
[ "Jiachen Liu", "Ziheng Geng", "Ran Cao", "Lu Cheng", "Paolo Bocchini", "Minghui Cheng" ]
[ "Code Generation", "Language Modeling", "Language Modelling", "Large Language Model" ]
2025-06-27T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/pun-intended-multi-agent-translation-of
2507.06506
null
null
Pun Intended: Multi-Agent Translation of Wordplay with Contrastive Learning and Phonetic-Semantic Embeddings
Translating wordplay across languages presents unique challenges that have long confounded both professional human translators and machine translation systems. This research proposes a novel approach for translating puns from English to French by combining state-of-the-art large language models with specialized techniques for wordplay generation. Our methodology employs a three-stage approach. First, we establish a baseline using multiple frontier large language models with feedback based on a new contrastive learning dataset. Second, we implement a guided chain-of-thought pipeline with combined phonetic-semantic embeddings. Third, we implement a multi-agent generator-discriminator framework for evaluating and regenerating puns with feedback. Moving beyond the limitations of literal translation, our methodology's primary objective is to capture the linguistic creativity and humor of the source text wordplay, rather than simply duplicating its vocabulary. Our best runs earned first and second place in the CLEF JOKER 2025 Task 2 competition where they were evaluated manually by expert native French speakers. This research addresses a gap between translation studies and computational linguistics by implementing linguistically-informed techniques for wordplay translation, advancing our understanding of how language models can be leveraged to handle the complex interplay between semantic ambiguity, phonetic similarity, and the implicit cultural and linguistic awareness needed for successful humor.
null
https://arxiv.org/abs/2507.06506v1
https://arxiv.org/pdf/2507.06506v1.pdf
null
[ "Russell Taylor", "Benjamin Herbert", "Michael Sana" ]
[ "Contrastive Learning", "Machine Translation", "Task 2", "Translation" ]
2025-07-09T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/mind-a-multi-agent-framework-for-zero-shot
2507.06908
null
null
MIND: A Multi-agent Framework for Zero-shot Harmful Meme Detection
The rapid expansion of memes on social media has highlighted the urgent need for effective approaches to detect harmful content. However, traditional data-driven approaches struggle to detect new memes due to their evolving nature and the lack of up-to-date annotated data. To address this issue, we propose MIND, a multi-agent framework for zero-shot harmful meme detection that does not rely on annotated data. MIND implements three key strategies: 1) We retrieve similar memes from an unannotated reference set to provide contextual information. 2) We propose a bi-directional insight derivation mechanism to extract a comprehensive understanding of similar memes. 3) We then employ a multi-agent debate mechanism to ensure robust decision-making through reasoned arbitration. Extensive experiments on three meme datasets demonstrate that our proposed framework not only outperforms existing zero-shot approaches but also shows strong generalization across different model architectures and parameter scales, providing a scalable solution for harmful meme detection. The code is available at https://github.com/destroy-lonely/MIND.
null
https://arxiv.org/abs/2507.06908v1
https://arxiv.org/pdf/2507.06908v1.pdf
null
[ "Ziyan Liu", "Chunxiao Fan", "Haoran Lou", "Yuexin Wu", "Kaiwei Deng" ]
[]
2025-07-09T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/ltlcrit-a-temporal-logic-based-llm-critic-for
2507.03293
null
null
LTLCrit: A Temporal Logic-based LLM Critic for Safe and Efficient Embodied Agents
Large language models (LLMs) have demonstrated promise in reasoning tasks and general decision-making in static environments. In long-term planning tasks, however, errors tend to accumulate, often leading to unsafe or inefficient behavior, limiting their use in general-purpose settings. We propose a modular actor-critic architecture in which an LLM actor is guided by LTLCrit, a trajectory-level LLM critic that communicates via linear temporal logic (LTL). Our setup combines the reasoning strengths of language models with the guarantees of formal logic. The actor selects high-level actions from natural language observations, while the critic analyzes full trajectories and proposes new LTL constraints that shield the actor from future unsafe or inefficient behavior. The architecture supports both fixed, hand-specified safety constraints and adaptive, learned soft constraints that promote long-term efficiency. Our architecture is model-agnostic: any LLM-based planner can serve as the actor, and LTLCrit serves as a logic-generating wrapper. We formalize planning as graph traversal under symbolic constraints, allowing LTLCrit to analyze failed or suboptimal trajectories and generate new temporal logic rules that improve future behavior. We evaluate our system on the Minecraft diamond-mining benchmark, achieving 100% completion rates and improving efficiency compared to baseline LLM planners. Our results suggest that enabling LLMs to supervise each other through logic is a powerful and flexible paradigm for safe, generalizable decision making.
null
https://arxiv.org/abs/2507.03293v1
https://arxiv.org/pdf/2507.03293v1.pdf
null
[ "Anand Gokhale", "Vaibhav Srivastava", "Francesco Bullo" ]
[ "Decision Making", "Formal Logic", "Minecraft" ]
2025-07-04T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/gaf-guard-an-agentic-framework-for-risk
2507.02986
null
null
GAF-Guard: An Agentic Framework for Risk Management and Governance in Large Language Models
As Large Language Models (LLMs) continue to be increasingly applied across various domains, their widespread adoption necessitates rigorous monitoring to prevent unintended negative consequences and ensure robustness. Furthermore, LLMs must be designed to align with human values, for example by preventing harmful content and ensuring responsible usage. The current automated systems and solutions for monitoring LLMs in production are primarily centered on LLM-specific concerns such as hallucination, with little consideration given to the requirements of specific use-cases and user preferences. This paper introduces GAF-Guard, a novel agentic framework for LLM governance that places the user, the use-case, and the model itself at the center. The framework is designed to detect and monitor risks associated with the deployment of LLM-based applications. The approach models autonomous agents that identify risks and activate risk detection tools within specific use-cases, and it facilitates continuous monitoring and reporting to enhance AI safety and meet user expectations. The code is available at https://github.com/IBM/risk-atlas-nexus-demos/tree/main/gaf-guard.
null
https://arxiv.org/abs/2507.02986v2
https://arxiv.org/pdf/2507.02986v2.pdf
null
[ "Seshu Tirupathi", "Dhaval Salwala", "Elizabeth Daly", "Inge Vejsbjerg" ]
[ "Hallucination", "Management" ]
2025-07-01T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/malibu-benchmark-multi-agent-llm-implicit
2507.01019
null
null
MALIBU Benchmark: Multi-Agent LLM Implicit Bias Uncovered
Multi-agent systems, which consist of multiple AI models interacting within a shared environment, are increasingly used for persona-based interactions. However, if not carefully designed, these systems can reinforce implicit biases in large language models (LLMs), raising concerns about fairness and equitable representation. We present MALIBU, a novel benchmark developed to assess the degree to which LLM-based multi-agent systems implicitly reinforce social biases and stereotypes. MALIBU evaluates bias in LLM-based multi-agent systems through scenario-based assessments. AI models complete tasks within predefined contexts, and their responses undergo evaluation by an LLM-based multi-agent judging system in two phases. In the first phase, judges score responses labeled with specific demographic personas (e.g., gender, race, religion) across four metrics. In the second phase, judges compare paired responses assigned to different personas, scoring them and selecting the superior response. Our study quantifies biases in LLM-generated outputs, revealing that bias mitigation may favor marginalized personas over true neutrality, emphasizing the need for nuanced detection, balanced fairness strategies, and transparent evaluation benchmarks in multi-agent systems.
null
https://arxiv.org/abs/2507.01019v1
https://arxiv.org/pdf/2507.01019v1.pdf
null
[ "Imran Mirza", "Cole Huang", "Ishwara Vasista", "Rohan Patil", "Asli Akalin", "Sean O'Brien", "Kevin Zhu" ]
[ "Fairness" ]
2025-04-10T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/recon-answer-verify-agents-in-search-of-truth
2507.03671
null
null
Recon, Answer, Verify: Agents in Search of Truth
Automated fact checking with large language models (LLMs) offers a scalable alternative to manual verification. Evaluating fact checking is challenging as existing benchmark datasets often include post-claim analysis and annotator cues, which are absent in real-world scenarios where claims are fact checked immediately after being made. This limits the realism of current evaluations. We present Politi Fact Only (PFO), a 5-class benchmark dataset of 2,982 political claims from politifact.com, where all post-claim analysis and annotator cues have been removed manually. This ensures that models are evaluated using only the information that would have been available prior to the claim's verification. Evaluating LLMs on PFO, we see an average performance drop of 22% in terms of macro-F1 compared to PFO's unfiltered version. Based on the identified challenges of the existing LLM-based fact checking system, we propose RAV (Recon Answer Verify), an agentic framework with three agents: question generator, answer generator, and label generator. Our pipeline iteratively generates and answers sub-questions to verify different aspects of the claim before finally generating the label. RAV generalizes across domains and label granularities, and it outperforms state-of-the-art approaches on the well-known baselines RAWFC (fact checking, 3-class) by 25.28%, and on HOVER (encyclopedia, 2-class) by 1.54% on 2-hop, 4.94% on 3-hop, and 1.78% on 4-hop sub-categories, respectively. RAV shows the least performance drop compared to baselines, of 16.3% in macro-F1, when we compare PFO with its unfiltered version.
null
https://arxiv.org/abs/2507.03671v1
https://arxiv.org/pdf/2507.03671v1.pdf
null
[ "Satyam Shukla", "Himanshu Dutta", "Pushpak Bhattacharyya" ]
[ "Fact Checking" ]
2025-07-04T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/agent-based-detection-and-resolution-of
2507.03726
null
null
Agent-Based Detection and Resolution of Incompleteness and Ambiguity in Interactions with Large Language Models
Many of us now treat LLMs as modern-day oracles, asking them almost any kind of question. However, consulting an LLM does not have to be a single-turn activity, and long multi-turn interactions can become tedious when they serve merely to clarify contextual information that could be arrived at through reasoning. In this paper, we examine the use of an agent-based architecture to bolster LLM-based question-answering systems with additional reasoning capabilities. We examine the automatic resolution of potential incompleteness or ambiguities in questions by transducers implemented using LLM-based agents. We focus on several benchmark datasets that are known to contain questions with these deficiencies to varying degrees. We equip different LLMs (GPT-3.5-Turbo and Llama-4-Scout) with agents that act as specialists in detecting and resolving deficiencies of incompleteness and ambiguity. The agents are implemented as zero-shot ReAct agents. Rather than producing an answer in a single step, the model now decides between three actions: a) classify, b) resolve, and c) answer. Action a) decides whether the question is incomplete, ambiguous, or normal. Action b) determines whether any identified deficiencies can be resolved. Action c) answers the resolved form of the question. We compare the use of LLMs with and without agents that have these components. Our results show the benefits of agents with a transducer: 1) shorter interactions with the human, 2) improved answer quality, and 3) explainable resolution of deficiencies in the question. On the negative side, we find that the approach may result in additional LLM invocations and, in some cases, increased latency. On the tested datasets, however, the benefits outweigh the costs except when questions already have sufficient context, suggesting that the agent-based approach could be a useful mechanism for harnessing the power of LLMs to develop more robust QA systems.
null
https://arxiv.org/abs/2507.03726v1
https://arxiv.org/pdf/2507.03726v1.pdf
null
[ "Riya Naik", "Ashwin Srinivasan", "Swati Agarwal", "Estrid He" ]
[ "Question Answering" ]
2025-07-04T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/data-agent-a-holistic-architecture-for
2507.01599
null
null
Data Agent: A Holistic Architecture for Orchestrating Data+AI Ecosystems
Traditional Data+AI systems utilize data-driven techniques to optimize performance, but they rely heavily on human experts to orchestrate system pipelines, enabling them to adapt to changes in data, queries, tasks, and environments. For instance, while there are numerous data science tools available, developing a pipeline planning system to coordinate these tools remains challenging. This difficulty arises because existing Data+AI systems have limited capabilities in semantic understanding, reasoning, and planning. Fortunately, we have witnessed the success of large language models (LLMs) in enhancing semantic understanding, reasoning, and planning abilities. It is crucial to incorporate LLM techniques to revolutionize data systems for orchestrating Data+AI applications effectively. To achieve this, we propose the concept of a 'Data Agent' - a comprehensive architecture designed to orchestrate Data+AI ecosystems, which focuses on tackling data-related tasks by integrating knowledge comprehension, reasoning, and planning capabilities. We delve into the challenges involved in designing data agents, such as understanding data/queries/environments/tools, orchestrating pipelines/workflows, optimizing and executing pipelines, and fostering pipeline self-reflection. Furthermore, we present examples of data agent systems, including a data science agent, data analytics agents (such as unstructured data analytics agent, semantic structured data analytics agent, data lake analytics agent, and multi-modal data analytics agent), and a database administrator (DBA) agent. We also outline several open challenges associated with designing data agent systems.
null
https://arxiv.org/abs/2507.01599v1
https://arxiv.org/pdf/2507.01599v1.pdf
null
[ "Zhaoyan Sun", "Jiayi Wang", "Xinyang Zhao", "Jiachi Wang", "Guoliang Li" ]
[]
2025-07-02T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/m2-reasoning-empowering-mllms-with-unified
2507.08306
null
null
M2-Reasoning: Empowering MLLMs with Unified General and Spatial Reasoning
Recent advancements in Multimodal Large Language Models (MLLMs), particularly through Reinforcement Learning with Verifiable Rewards (RLVR), have significantly enhanced their reasoning abilities. However, a critical gap persists: these models struggle with dynamic spatial interactions, a capability essential for real-world applications. To bridge this gap, we introduce M2-Reasoning-7B, a model designed to excel in both general and spatial reasoning. Our approach integrates two key innovations: (1) a novel data pipeline that generates 294.2K high-quality data samples (168K for cold-start fine-tuning and 126.2K for RLVR), which feature logically coherent reasoning trajectories and have undergone comprehensive assessment; and (2) a dynamic multi-task training strategy with step-wise optimization to mitigate conflicts between data, and task-specific rewards for delivering tailored incentive signals. This combination of curated data and advanced training allows M2-Reasoning-7B to set a new state-of-the-art (SOTA) across 8 benchmarks, showcasing superior performance in both general and spatial reasoning domains.
null
https://arxiv.org/abs/2507.08306v1
https://arxiv.org/pdf/2507.08306v1.pdf
null
[ "Inclusion AI", ":", "Fudong Wang", "Jiajia Liu", "Jingdong Chen", "Jun Zhou", "Kaixiang Ji", "Lixiang Ru", "Qingpei Guo", "Ruobing Zheng", "Tianqi Li", "Yi Yuan", "Yifan Mao", "Yuting Xiao", "Ziping Ma" ]
[ "Spatial Reasoning" ]
2025-07-11T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/the-synergy-dilemma-of-long-cot-sft-and-rl
2507.07562
null
null
The Synergy Dilemma of Long-CoT SFT and RL: Investigating Post-Training Techniques for Reasoning VLMs
Large vision-language models (VLMs) increasingly adopt post-training techniques such as long chain-of-thought (CoT) supervised fine-tuning (SFT) and reinforcement learning (RL) to elicit sophisticated reasoning. While these methods exhibit synergy in language-only models, their joint effectiveness in VLMs remains uncertain. We present a systematic investigation into the distinct roles and interplay of long-CoT SFT and RL across multiple multimodal reasoning benchmarks. We find that SFT improves performance on difficult questions by in-depth, structured reasoning, but introduces verbosity and degrades performance on simpler ones. In contrast, RL promotes generalization and brevity, yielding consistent improvements across all difficulty levels, though the improvements on the hardest questions are less prominent compared to SFT. Surprisingly, combining them through two-staged, interleaved, or progressive training strategies, as well as data mixing and model merging, all fails to produce additive benefits, instead leading to trade-offs in accuracy, reasoning style, and response length. This ``synergy dilemma'' highlights the need for more seamless and adaptive approaches to unlock the full potential of combined post-training techniques for reasoning VLMs.
null
https://arxiv.org/abs/2507.07562v1
https://arxiv.org/pdf/2507.07562v1.pdf
null
[ "Jierun Chen", "Tiezheng Yu", "Haoli Bai", "Lewei Yao", "Jiannan Wu", "Kaican Li", "Fei Mi", "Chaofan Tao", "Lei Zhu", "Manyi Zhang", "Xiaohui Li", "Lu Hou", "Lifeng Shang", "Qun Liu" ]
[ "Multimodal Reasoning", "Reinforcement Learning (RL)" ]
2025-07-10T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/perception-aware-policy-optimization-for
2507.06448
null
null
Perception-Aware Policy Optimization for Multimodal Reasoning
Reinforcement Learning with Verifiable Rewards (RLVR) has proven to be a highly effective strategy for endowing Large Language Models (LLMs) with robust multi-step reasoning abilities. However, its design and optimizations remain tailored to purely textual domains, resulting in suboptimal performance when applied to multimodal reasoning tasks. In particular, we observe that a major source of error in current multimodal reasoning lies in the perception of visual inputs. To address this bottleneck, we propose Perception-Aware Policy Optimization (PAPO), a simple yet effective extension of GRPO that encourages the model to learn to perceive while learning to reason, entirely from internal supervision signals. Notably, PAPO does not rely on additional data curation, external reward models, or proprietary models. Specifically, we introduce the Implicit Perception Loss in the form of a KL divergence term to the GRPO objective, which, despite its simplicity, yields significant overall improvements (4.4%) on diverse multimodal benchmarks. The improvements are more pronounced, approaching 8.0%, on tasks with high vision dependency. We also observe a substantial reduction (30.5%) in perception errors, indicating improved perceptual capabilities with PAPO. We conduct comprehensive analysis of PAPO and identify a unique loss hacking issue, which we rigorously analyze and mitigate through a Double Entropy Loss. Overall, our work introduces a deeper integration of perception-aware supervision into RLVR learning objectives and lays the groundwork for a new RL framework that encourages visually grounded reasoning. Project page: https://mikewangwzhl.github.io/PAPO.
null
https://arxiv.org/abs/2507.06448v2
https://arxiv.org/pdf/2507.06448v2.pdf
null
[ "Zhenhailong Wang", "Xuehang Guo", "Sofia Stoica", "Haiyang Xu", "Hongru Wang", "Hyeonjeong Ha", "Xiusi Chen", "Yangyi Chen", "Ming Yan", "Fei Huang", "Heng Ji" ]
[ "Multimodal Reasoning" ]
2025-07-08T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/open-vision-reasoner-transferring-linguistic
2507.05255
null
null
Open Vision Reasoner: Transferring Linguistic Cognitive Behavior for Visual Reasoning
The remarkable reasoning capability of large language models (LLMs) stems from cognitive behaviors that emerge through reinforcement with verifiable rewards. This work investigates how to transfer this principle to Multimodal LLMs (MLLMs) to unlock advanced visual reasoning. We introduce a two-stage paradigm built on Qwen2.5-VL-7B: a massive linguistic cold-start fine-tuning, followed by multimodal reinforcement learning (RL) spanning nearly 1,000 steps, surpassing all previous open-source efforts in scale. This pioneering work reveals three fundamental insights: 1) Behavior transfer emerges surprisingly early in cold start due to linguistic mental imagery. 2) Cold start broadly memorizes visual behaviors, while RL critically discerns and scales up effective patterns. 3) Transfer strategically favors high-utility behaviors such as visual reflection. Our resulting model, Open-Vision-Reasoner (OVR), achieves state-of-the-art performance on a suite of reasoning benchmarks, including 95.3% on MATH500, 51.8% on MathVision and 54.6% on MathVerse. We release our model, data, and training dynamics to catalyze the development of more capable, behavior-aligned multimodal reasoners.
null
https://arxiv.org/abs/2507.05255v1
https://arxiv.org/pdf/2507.05255v1.pdf
null
[ "Yana Wei", "Liang Zhao", "Jianjian Sun", "Kangheng Lin", "Jisheng Yin", "Jingcheng Hu", "Yinmin Zhang", "En Yu", "Haoran Lv", "Zejia Weng", "Jia Wang", "Chunrui Han", "Yuang Peng", "Qi Han", "Zheng Ge", "Xiangyu Zhang", "Daxin Jiang", "Vishal M. Patel" ]
[ "Reinforcement Learning (RL)", "Visual Reasoning" ]
2025-07-07T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/mico-multi-image-contrast-for-reinforcement
2506.22434
null
null
MiCo: Multi-image Contrast for Reinforcement Visual Reasoning
This work explores enabling Chain-of-Thought (CoT) reasoning to link visual cues across multiple images. A straightforward solution is to adapt rule-based reinforcement learning for Vision-Language Models (VLMs). However, such methods typically rely on manually curated question-answer pairs, which can be particularly challenging when dealing with fine grained visual details and complex logic across images. Inspired by self-supervised visual representation learning, we observe that images contain inherent constraints that can serve as supervision. Based on this insight, we construct image triplets comprising two augmented views of the same image and a third, similar but distinct image. During training, the model is prompted to generate a reasoning process to compare these images (i.e., determine same or different). Then we optimize the model with rule-based reinforcement learning. Due to the high visual similarity and the presence of augmentations, the model must attend to subtle visual changes and perform logical reasoning to succeed. Experiments show that, although trained solely on visual comparison tasks, the learned reasoning ability generalizes effectively to a wide range of questions. Without relying on any human-annotated question-answer pairs, our method achieves significant improvements on multi-image reasoning benchmarks and shows strong performance on general vision tasks.
null
https://arxiv.org/abs/2506.22434v1
https://arxiv.org/pdf/2506.22434v1.pdf
null
[ "Xi Chen", "Mingkang Zhu", "Shaoteng Liu", "Xiaoyang Wu", "Xiaogang Xu", "Yu Liu", "Xiang Bai", "Hengshuang Zhao" ]
[ "Logical Reasoning", "Representation Learning", "Visual Reasoning" ]
2025-06-27T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/visual-structures-helps-visual-reasoning
2506.22146
null
null
Visual Structures Helps Visual Reasoning: Addressing the Binding Problem in VLMs
Despite progress in Vision-Language Models (VLMs), their capacity for visual reasoning is often limited by the \textit{binding problem}: the failure to reliably associate perceptual features with their correct visual referents. This limitation underlies persistent errors in tasks such as counting, visual search, scene description, and spatial relationship understanding. A key factor is that current VLMs process visual features largely in parallel, lacking mechanisms for spatially grounded, serial attention. This paper introduces a simple yet effective intervention: augmenting visual inputs with low-level spatial structures (e.g., horizontal lines) and pairing this with a textual prompt that encourages sequential, spatially-aware parsing. We empirically demonstrate substantial performance improvements across core visual reasoning tasks. Specifically, our method improves GPT-4o visual search accuracy by 25.00%, increases counting accuracy by 26.83%, reduces edit distance error in scene description by 0.32, and enhances performance on spatial relationship tasks by 9.50% on a 2D synthetic dataset. Furthermore, we find that the visual modification is essential for these gains; purely textual strategies, including Chain-of-Thought prompting, are insufficient and can even degrade performance. Our method enhances binding only with a single-query inference, underscoring the importance of visual input design over purely linguistically-based approaches. These findings suggest that low-level visual structuring is a powerful and underexplored direction for improving compositional visual reasoning and could serve as a general strategy for enhancing VLM performance on spatially grounded tasks.
null
https://arxiv.org/abs/2506.22146v2
https://arxiv.org/pdf/2506.22146v2.pdf
null
[ "Amirmohammad Izadi", "Mohammad Ali Banayeeanzade", "Fatemeh Askari", "Ali Rahimiakbar", "Mohammad Mahdi Vahedi", "Hosein Hasani", "Mahdieh Soleymani Baghshah" ]
[ "Visual Reasoning" ]
2025-06-27T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/apo-enhancing-reasoning-ability-of-mllms-via
2506.21655
null
null
APO: Enhancing Reasoning Ability of MLLMs via Asymmetric Policy Optimization
Multimodal Large Language Models (MLLMs) are powerful at integrating diverse data, but they often struggle with complex reasoning. While reinforcement learning (RL) can boost reasoning in LLMs, applying it to MLLMs is tricky. Common issues include a drop in performance on general tasks and the generation of overly detailed or "overthinking" reasoning. Our work investigates how the KL penalty and overthinking affect RL training in MLLMs. We propose Asymmetric Policy Optimization (APO) to address these issues, which divides the sampled responses into positive and negative groups. For positive samples, Difficulty-Adaptive Divergence Shaping (DADS) is introduced to dynamically adjust the KL divergence weight based on their difficulty. This method prevents policy entropy from dropping sharply, improves training stability, utilizes samples better, and preserves the model's existing knowledge. For negative samples, Suboptimal Trajectory Complexity Regularization (STCR) is proposed to penalize overly long responses. This helps mitigate overthinking and encourages more concise reasoning while preserving the model's explorative capacity. We apply our method to Qwen2.5-VL-3B, creating View-R1-3B. View-R1-3B significantly enhances reasoning capabilities, showing an average 7% gain over the base model and outperforming larger MLLMs (7-11B) on various reasoning benchmarks. Importantly, unlike other reasoning-tuned MLLMs that often degrade on general tasks, View-R1-3B maintains consistent improvement, demonstrating superior generalization. These results highlight the effectiveness and broad applicability of our DADS and STCR techniques for advancing complex multimodal reasoning in MLLMs. The code will be made available at https://github.com/Indolent-Kawhi/View-R1.
null
https://arxiv.org/abs/2506.21655v1
https://arxiv.org/pdf/2506.21655v1.pdf
null
[ "Minjie Hong", "Zirun Guo", "Yan Xia", "Zehan Wang", "Ziang Zhang", "Tao Jin", "Zhou Zhao" ]
[ "Multimodal Reasoning", "Reinforcement Learning (RL)" ]
2025-06-26T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/perl-permutation-enhanced-reinforcement
2506.14907
null
null
PeRL: Permutation-Enhanced Reinforcement Learning for Interleaved Vision-Language Reasoning
Inspired by the impressive reasoning capabilities demonstrated by reinforcement learning approaches like DeepSeek-R1, recent emerging research has begun exploring the use of reinforcement learning (RL) to enhance vision-language models (VLMs) for multimodal reasoning tasks. However, most existing multimodal reinforcement learning approaches remain limited to spatial reasoning within single-image contexts, yet still struggle to generalize to more complex and real-world scenarios involving multi-image positional reasoning, where understanding the relationships across images is crucial. To address this challenge, we propose a general reinforcement learning approach PeRL tailored for interleaved multimodal tasks, and a multi-stage strategy designed to enhance the exploration-exploitation trade-off, thereby improving learning efficiency and task performance. Specifically, we introduce permutation of image sequences to simulate varied positional relationships to explore more spatial and positional diversity. Furthermore, we design a rollout filtering mechanism for resampling to focus on trajectories that contribute most to learning optimal behaviors to exploit learned policies effectively. We evaluate our model on 5 widely-used multi-image benchmarks and 3 single-image benchmarks. Our experiments confirm that PeRL trained model consistently surpasses R1-related and interleaved VLM baselines by a large margin, achieving state-of-the-art performance on multi-image benchmarks, while preserving comparable performance on single-image tasks.
null
https://arxiv.org/abs/2506.14907v1
https://arxiv.org/pdf/2506.14907v1.pdf
null
[ "Yizhen Zhang", "Yang Ding", "Shuoshuo Zhang", "Xinchen Zhang", "Haoling Li", "Zhong-Zhi Li", "Peijie Wang", "Jie Wu", "Lei Ji", "Yelong Shen", "Yujiu Yang", "Yeyun Gong" ]
[ "General Reinforcement Learning", "Multimodal Reasoning", "reinforcement-learning", "Reinforcement Learning", "Reinforcement Learning (RL)", "Spatial Reasoning" ]
2025-06-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/mm-r5-multimodal-reasoning-enhanced-reranker
2506.12364
null
null
MM-R5: MultiModal Reasoning-Enhanced ReRanker via Reinforcement Learning for Document Retrieval
Multimodal document retrieval systems enable information access across text, images, and layouts, benefiting various domains like document-based question answering, report analysis, and interactive content summarization. Rerankers improve retrieval precision by reordering retrieved candidates. However, current multimodal reranking methods remain underexplored, with significant room for improvement in both training strategies and overall effectiveness. Moreover, the lack of explicit reasoning makes it difficult to analyze and optimize these methods further. In this paper, we propose MM-R5, a MultiModal Reasoning-Enhanced ReRanker via Reinforcement Learning for Document Retrieval, aiming to provide a more effective and reliable solution for multimodal reranking tasks. MM-R5 is trained in two stages: supervised fine-tuning (SFT) and reinforcement learning (RL). In the SFT stage, we focus on improving instruction-following and guiding the model to generate complete and high-quality reasoning chains. To support this, we introduce a novel data construction strategy that produces rich, high-quality reasoning data. In the RL stage, we design a task-specific reward framework, including a reranking reward tailored for multimodal candidates and a composite template-based reward to further refine reasoning quality. We conduct extensive experiments on MMDocIR, a challenging public benchmark spanning multiple domains. MM-R5 achieves state-of-the-art performance on most metrics and delivers comparable results to much larger models on the remaining ones. Moreover, compared to the best retrieval-only method, MM-R5 improves recall@1 by over 4%. These results validate the effectiveness of our reasoning-enhanced training pipeline. Our code is available at https://github.com/i2vec/MM-R5.
null
https://arxiv.org/abs/2506.12364v2
https://arxiv.org/pdf/2506.12364v2.pdf
null
[ "Mingjun Xu", "Jinhan Dong", "Jue Hou", "Zehui Wang", "Sihang Li", "Zhifeng Gao", "Renxin Zhong", "Hengxing Cai" ]
[ "Instruction Following", "Multimodal Reasoning", "Question Answering", "Reinforcement Learning (RL)", "Reranking", "Retrieval" ]
2025-06-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/scaling-rl-to-long-videos
2507.07966
null
null
Scaling RL to Long Videos
We introduce a full-stack framework that scales up reasoning in vision-language models (VLMs) to long videos, leveraging reinforcement learning. We address the unique challenges of long video reasoning by integrating three critical components: (1) a large-scale dataset, LongVideo-Reason, comprising 52K long video QA pairs with high-quality reasoning annotations across diverse domains such as sports, games, and vlogs; (2) a two-stage training pipeline that extends VLMs with chain-of-thought supervised fine-tuning (CoT-SFT) and reinforcement learning (RL); and (3) a training infrastructure for long video RL, named Multi-modal Reinforcement Sequence Parallelism (MR-SP), which incorporates sequence parallelism and a vLLM-based engine tailored for long video, using cached video embeddings for efficient rollout and prefilling. In experiments, LongVILA-R1-7B achieves strong performance on long video QA benchmarks such as VideoMME. It also outperforms Video-R1-7B and even matches Gemini-1.5-Pro across temporal reasoning, goal and purpose reasoning, spatial reasoning, and plot reasoning on our LongVideo-Reason-eval benchmark. Notably, our MR-SP system achieves up to 2.1x speedup on long video RL training. LongVILA-R1 demonstrates consistent performance gains as the number of input video frames scales. LongVILA-R1 marks a firm step towards long video reasoning in VLMs. In addition, we release our training system for public availability that supports RL training on various modalities (video, text, and audio), various models (VILA and Qwen series), and even image and video generation models. On a single A100 node (8 GPUs), it supports RL training on hour-long videos (e.g., 3,600 frames / around 256k tokens).
null
https://arxiv.org/abs/2507.07966v1
https://arxiv.org/pdf/2507.07966v1.pdf
null
[ "Yukang Chen", "Wei Huang", "Baifeng Shi", "Qinghao Hu", "Hanrong Ye", "Ligeng Zhu", "Zhijian Liu", "Pavlo Molchanov", "Jan Kautz", "Xiaojuan Qi", "Sifei Liu", "Hongxu Yin", "Yao Lu", "Song Han" ]
[ "Reinforcement Learning (RL)", "Spatial Reasoning", "Video Generation" ]
2025-07-10T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/grpo-care-consistency-aware-reinforcement
2506.16141
null
null
GRPO-CARE: Consistency-Aware Reinforcement Learning for Multimodal Reasoning
Recent reinforcement learning approaches, such as outcome-supervised GRPO, have advanced Chain-of-Thought reasoning in large language models (LLMs), yet their adaptation to multimodal LLMs (MLLMs) is unexplored. To address the lack of rigorous evaluation for MLLM post-training methods, we introduce SEED-Bench-R1, a benchmark with complex real-world videos requiring balanced perception and reasoning. It offers a large training set and evaluates generalization across three escalating challenges: in-distribution, cross-environment, and cross-environment-task scenarios. Using SEED-Bench-R1, we find that standard GRPO, while improving answer accuracy, often reduces logical coherence between reasoning steps and answers, with only a 57.9% consistency rate. This stems from reward signals focusing solely on final answers, encouraging shortcuts, and strict KL penalties limiting exploration. To address this, we propose GRPO-CARE, a consistency-aware RL framework optimizing both answer correctness and reasoning coherence without explicit supervision. GRPO-CARE introduces a two-tiered reward: (1) a base reward for answer correctness, and (2) an adaptive consistency bonus, computed by comparing the model's reasoning-to-answer likelihood (via a slowly-evolving reference model) against group peers. This dual mechanism amplifies rewards for reasoning paths that are both correct and logically consistent. Replacing KL penalties with this adaptive bonus, GRPO-CARE outperforms standard GRPO on SEED-Bench-R1, achieving a 6.7% performance gain on the hardest evaluation level and a 24.5% improvement in consistency. It also shows strong transferability, improving model performance across diverse video understanding benchmarks. Our work contributes a systematically designed benchmark and a generalizable post-training framework, advancing the development of more interpretable and robust MLLMs.
null
https://arxiv.org/abs/2506.16141v1
https://arxiv.org/pdf/2506.16141v1.pdf
null
[ "Yi Chen", "Yuying Ge", "Rui Wang", "Yixiao Ge", "Junhao Cheng", "Ying Shan", "Xihui Liu" ]
[ "Multimodal Reasoning", "reinforcement-learning", "Reinforcement Learning", "Video Understanding" ]
2025-06-19T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/versavid-r1-a-versatile-video-understanding
2506.09079
null
null
VersaVid-R1: A Versatile Video Understanding and Reasoning Model from Question Answering to Captioning Tasks
Recent advancements in multimodal large language models have successfully extended the Reason-Then-Respond paradigm to image-based reasoning, yet video-based reasoning remains an underdeveloped frontier, primarily due to the scarcity of high-quality reasoning-oriented data and effective training methodologies. To bridge this gap, we introduce DarkEventInfer and MixVidQA, two novel datasets specifically designed to stimulate the model's advanced video understanding and reasoning abilities. DarkEventInfer presents videos with masked event segments, requiring models to infer the obscured content based on contextual video cues. MixVidQA, on the other hand, presents interleaved video sequences composed of two distinct clips, challenging models to isolate and reason about one while disregarding the other. Leveraging these carefully curated training samples together with reinforcement learning guided by diverse reward functions, we develop VersaVid-R1, the first versatile video understanding and reasoning model under the Reason-Then-Respond paradigm capable of handling multiple-choice and open-ended question answering, as well as video captioning tasks. Extensive experiments demonstrate that VersaVid-R1 significantly outperforms existing models across a broad spectrum of benchmarks, covering video general understanding, cognitive reasoning, and captioning tasks.
null
https://arxiv.org/abs/2506.09079v1
https://arxiv.org/pdf/2506.09079v1.pdf
null
[ "Xinlong Chen", "Yuanxing Zhang", "Yushuo Guan", "Bohan Zeng", "Yang Shi", "Sihan Yang", "Pengfei Wan", "Qiang Liu", "Liang Wang", "Tieniu Tan" ]
[ "Multiple-choice", "Open-Ended Question Answering", "Question Answering", "Video Captioning", "Video Understanding" ]
2025-06-10T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/reagent-v-a-reward-driven-multi-agent
2506.01300
null
null
ReAgent-V: A Reward-Driven Multi-Agent Framework for Video Understanding
Video understanding is fundamental to tasks such as action recognition, video reasoning, and robotic control. Early video understanding methods based on large vision-language models (LVLMs) typically adopt a single-pass reasoning paradigm without dynamic feedback, limiting the model's capacity to self-correct and adapt in complex scenarios. Recent efforts have attempted to address this limitation by incorporating reward models and reinforcement learning to enhance reasoning, or by employing tool-agent frameworks. However, these approaches face several challenges, including high annotation costs, reward signals that fail to capture real-time reasoning states, and low inference efficiency. To overcome these issues, we propose ReAgent-V, a novel agentic video understanding framework that integrates efficient frame selection with real-time reward generation during inference. These reward signals not only guide iterative answer refinement through a multi-perspective reflection mechanism-adjusting predictions from conservative, neutral, and aggressive viewpoints-but also enable automatic filtering of high-quality data for supervised fine-tuning (SFT), direct preference optimization (DPO), and group relative policy optimization (GRPO). ReAgent-V is lightweight, modular, and extensible, supporting flexible tool integration tailored to diverse tasks. Extensive experiments on 12 datasets across three core applications-video understanding, video reasoning enhancement, and vision-language-action model alignment-demonstrate significant gains in generalization and reasoning, with improvements of up to 6.9%, 2.1%, and 9.8%, respectively, highlighting the effectiveness and versatility of the proposed framework.
null
https://arxiv.org/abs/2506.01300v1
https://arxiv.org/pdf/2506.01300v1.pdf
null
[ "Yiyang Zhou", "Yangfan He", "Yaofeng Su", "Siwei Han", "Joel Jang", "Gedas Bertasius", "Mohit Bansal", "Huaxiu Yao" ]
[ "Action Recognition", "Video Understanding", "Vision-Language-Action" ]
2025-06-02T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/listener-rewarded-thinking-in-vlms-for-image
2506.22832
null
null
Listener-Rewarded Thinking in VLMs for Image Preferences
Training robust and generalizable reward models for human visual preferences is essential for aligning text-to-image and text-to-video generative models with human intent. However, current reward models often fail to generalize, and supervised fine-tuning leads to memorization, demanding complex annotation pipelines. While reinforcement learning (RL), specifically Group Relative Policy Optimization (GRPO), improves generalization, we uncover a key failure mode: a significant drop in reasoning accuracy occurs when a model's reasoning trace contradicts that of an independent, frozen vision-language model ("listener") evaluating the same output. To address this, we introduce a listener-augmented GRPO framework. Here, the listener re-evaluates the reasoner's chain-of-thought to provide a dense, calibrated confidence score, shaping the RL reward signal. This encourages the reasoner not only to answer correctly, but to produce explanations that are persuasive to an independent model. Our listener-shaped reward scheme achieves best accuracy on the ImageReward benchmark (67.4%), significantly improves out-of-distribution (OOD) performance on a large-scale human preference dataset (1.2M votes, up to +6% over naive reasoner), and reduces reasoning contradictions compared to strong GRPO and SFT baselines. These results demonstrate that listener-based rewards provide a scalable, data-efficient path to aligning vision-language models with nuanced human preferences. We will release our reasoning model here: https://huggingface.co/alexgambashidze/qwen2.5vl_image_preference_reasoner.
null
https://arxiv.org/abs/2506.22832v2
https://arxiv.org/pdf/2506.22832v2.pdf
null
[ "Alexander Gambashidze", "Li Pengyi", "Matvey Skripkin", "Andrey Galichin", "Anton Gusarov", "Konstantin Sobolev", "Andrey Kuznetsov", "Ivan Oseledets" ]
[ "Memorization", "Reinforcement Learning (RL)" ]
2025-06-28T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/mmreason-an-open-ended-multi-modal-multi-step
2506.23563
null
null
MMReason: An Open-Ended Multi-Modal Multi-Step Reasoning Benchmark for MLLMs Toward AGI
Reasoning plays a crucial role in advancing Multimodal Large Language Models (MLLMs) toward Artificial General Intelligence. However, existing MLLM benchmarks often fall short in precisely and comprehensively evaluating long-chain reasoning abilities from three key aspects: (1) lack of difficulty and diversity, (2) susceptibility to guessability and memorization, (3) inadequate assessment of intermediate reasoning steps. To fill this gap, we introduce MMReason, a new benchmark designed to precisely and comprehensively evaluate MLLM long-chain reasoning capability with diverse, open-ended, challenging questions. First, we curate challenging questions requiring multi-step reasoning from various fields (i.e., 6 disciplines) and multiple difficulty levels (i.e., from pre-university to university, and from foundational to competition tiers). Second, these questions are reformulated into an open-ended format and filtered using a multi-model voting technique to eliminate shortcut cases related to guessing and memorization, ensuring robust reasoning evaluations. Third, we annotate the questions with detailed step-by-step solutions, and design a reference-based ternary scoring mechanism to reliably assess intermediate reasoning steps. With MMReason, we benchmark popular leading MLLMs and provide an in-depth analysis of their reasoning capabilities. We hope MMReason will serve as a valuable resource for advancing MLLM reasoning research. Code will be available at https://github.com/HJYao00/MMReason.
null
https://arxiv.org/abs/2506.23563v1
https://arxiv.org/pdf/2506.23563v1.pdf
null
[ "Huanjin Yao", "Jiaxing Huang", "Yawen Qiu", "Michael K. Chen", "Wenzheng Liu", "Wei zhang", "Wenjie Zeng", "Xikun Zhang", "Jingyi Zhang", "Yuxin Song", "Wenhao Wu", "DaCheng Tao" ]
[ "Memorization" ]
2025-06-30T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/flexselect-flexible-token-selection-for
2506.00993
null
null
FlexSelect: Flexible Token Selection for Efficient Long Video Understanding
Long-form video understanding poses a significant challenge for video large language models (VideoLLMs) due to prohibitively high computational and memory demands. In this paper, we propose FlexSelect, a flexible and efficient token selection strategy for processing long videos. FlexSelect identifies and retains the most semantically relevant content by leveraging cross-modal attention patterns from a reference transformer layer. It comprises two key components: (1) a training-free token ranking pipeline that leverages faithful cross-modal attention weights to estimate each video token's importance, and (2) a rank-supervised lightweight selector that is trained to replicate these rankings and filter redundant tokens. This generic approach can be seamlessly integrated into various VideoLLM architectures, such as LLaVA-Video, InternVL and Qwen-VL, serving as a plug-and-play module to extend their temporal context length. Empirically, FlexSelect delivers strong gains across multiple long-video benchmarks including VideoMME, MLVU, LongVB, and LVBench. Moreover, it achieves significant speed-ups (for example, up to 9 times on a LLaVA-Video-7B model), highlighting FlexSelect's promise for efficient long-form video understanding. Project page available at: https://yunzhuzhang0918.github.io/flex_select
null
https://arxiv.org/abs/2506.00993v1
https://arxiv.org/pdf/2506.00993v1.pdf
null
[ "Yunzhu Zhang", "Yu Lu", "Tianyi Wang", "Fengyun Rao", "Yi Yang", "Linchao Zhu" ]
[ "Video Understanding" ]
2025-06-01T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/evoagentx-an-automated-framework-for-evolving
2507.03616
null
null
EvoAgentX: An Automated Framework for Evolving Agentic Workflows
Multi-agent systems (MAS) have emerged as a powerful paradigm for orchestrating large language models (LLMs) and specialized tools to collaboratively address complex tasks. However, existing MAS frameworks often require manual workflow configuration and lack native support for dynamic evolution and performance optimization. In addition, many MAS optimization algorithms are not integrated into a unified framework. In this paper, we present EvoAgentX, an open-source platform that automates the generation, execution, and evolutionary optimization of multi-agent workflows. EvoAgentX employs a modular architecture consisting of five core layers: the basic components, agent, workflow, evolving, and evaluation layers. Specifically, within the evolving layer, EvoAgentX integrates three MAS optimization algorithms, TextGrad, AFlow, and MIPRO, to iteratively refine agent prompts, tool configurations, and workflow topologies. We evaluate EvoAgentX on HotPotQA, MBPP, and MATH for multi-hop reasoning, code generation, and mathematical problem solving, respectively, and further assess it on real-world tasks using GAIA. Experimental results show that EvoAgentX consistently achieves significant performance improvements, including a 7.44% increase in HotPotQA F1, a 10.00% improvement in MBPP pass@1, a 10.00% gain in MATH solve accuracy, and an overall accuracy improvement of up to 20.00% on GAIA. The source code is available at: https://github.com/EvoAgentX/EvoAgentX
null
https://arxiv.org/abs/2507.03616v1
https://arxiv.org/pdf/2507.03616v1.pdf
null
[ "Yingxu Wang", "Siwei Liu", "Jinyuan Fang", "Zaiqiao Meng" ]
[ "Code Generation", "Math", "Mathematical Problem-Solving", "mbpp" ]
2025-07-04T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/skip-a-layer-or-loop-it-test-time-depth
2507.07996
null
null
Skip a Layer or Loop it? Test-Time Depth Adaptation of Pretrained LLMs
Can a pretrained neural network adapt its architecture to different inputs without any finetuning? Do we need all layers for simple tasks, and are they adequate for challenging tasks? We found that the layers of a pretrained large language model (LLM) can be manipulated as separate modules to build a better and even shallower model customized for each test sample. In particular, each layer from the pretrained model can be skipped/pruned or repeated multiple times as recurrent neural networks (RNN), and stacked with others in arbitrary orders, yielding a chain-of-layers (CoLa) per sample. This compositional space greatly expands the scope of existing works on looped/recurrent pretrained modules, layer pruning, or early-exit networks. We develop a Monte Carlo Tree Search (MCTS) protocol to explore and identify the optimal CoLa for each sample from math and commonsense reasoning benchmarks. Compared to a static model of a fixed depth, CoLa allows shortcut paths (fast thinking), recurrence of the same layer(s) (slow thinking), and combining both, offering more flexible, dynamic architectures for different inputs. We conduct an extensive analysis of the MCTS-optimized CoLa, which leads to two key findings: (1) For >75% of samples with correct predictions by the original LLM, we can find shorter CoLa, suggesting a large space for improving inference efficiency; (2) For >60% of samples with originally incorrect predictions, we can identify CoLa achieving correct predictions, suggesting a large space of performance enhancement. Our results highlight the shortcomings of using a fixed architecture of pre-trained LLMs for inference on different samples and pave the way to unlock the generalization power of test-time depth adaptation.
null
https://arxiv.org/abs/2507.07996v1
https://arxiv.org/pdf/2507.07996v1.pdf
null
[ "Ziyue Li", "Yang Li", "Tianyi Zhou" ]
[ "CoLA", "Large Language Model", "Math" ]
2025-07-10T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/mixture-of-recursions-learning-dynamic
2507.10524
null
null
Mixture-of-Recursions: Learning Dynamic Recursive Depths for Adaptive Token-Level Computation
Scaling language models unlocks impressive capabilities, but the accompanying computational and memory demands make both training and deployment expensive. Existing efficiency efforts typically target either parameter sharing or adaptive computation, leaving open the question of how to attain both simultaneously. We introduce Mixture-of-Recursions (MoR), a unified framework that combines the two axes of efficiency inside a single Recursive Transformer. MoR reuses a shared stack of layers across recursion steps to achieve parameter efficiency, while lightweight routers enable adaptive token-level thinking by dynamically assigning different recursion depths to individual tokens. This allows MoR to focus quadratic attention computation only among tokens still active at a given recursion depth, further improving memory access efficiency by selectively caching only their key-value pairs. Beyond these core mechanisms, we also propose a KV sharing variant that reuses KV pairs from the first recursion, specifically designed to decrease prefill latency and memory footprint. Across model scales ranging from 135M to 1.7B parameters, MoR forms a new Pareto frontier: at equal training FLOPs and smaller model sizes, it significantly lowers validation perplexity and improves few-shot accuracy, while delivering higher throughput compared with vanilla and existing recursive baselines. These gains demonstrate that MoR is an effective path towards large-model quality without incurring large-model cost.
null
https://arxiv.org/abs/2507.10524v1
https://arxiv.org/pdf/2507.10524v1.pdf
null
[ "Sangmin Bae", "Yujin Kim", "Reza Bayat", "Sungnyun Kim", "Jiyoun Ha", "Tal Schuster", "Adam Fisch", "Hrayr Harutyunyan", "Ziwei Ji", "Aaron Courville", "Se-Young Yun" ]
[]
2025-07-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/dual-dimensions-geometric-representation
2507.08492
null
null
Dual Dimensions Geometric Representation Learning Based Document Dewarping
Document image dewarping remains a challenging task in the deep learning era. While existing methods have improved by leveraging text line awareness, they typically focus only on a single horizontal dimension. In this paper, we propose a fine-grained deformation perception model that focuses on Dual Dimensions of document horizontal-vertical-lines to improve document Dewarping called D2Dewarp. It can perceive distortion trends in different directions across document details. To combine the horizontal and vertical granularity features, an effective fusion module based on X and Y coordinate is designed to facilitate interaction and constraint between the two dimensions for feature complementarity. Due to the lack of annotated line features in current public dewarping datasets, we also propose an automatic fine-grained annotation method using public document texture images and an automatic rendering engine to build a new large-scale distortion training dataset. The code and dataset will be publicly released. On public Chinese and English benchmarks, both quantitative and qualitative results show that our method achieves better rectification results compared with the state-of-the-art methods. The dataset will be publicly available at https://github.com/xiaomore/DocDewarpHV
null
https://arxiv.org/abs/2507.08492v2
https://arxiv.org/pdf/2507.08492v2.pdf
null
[ "Heng Li", "Qingcai Chen", "XiangPing Wu" ]
[ "Representation Learning" ]
2025-07-11T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/video-rts-rethinking-reinforcement-learning
2507.06485
null
null
Video-RTS: Rethinking Reinforcement Learning and Test-Time Scaling for Efficient and Enhanced Video Reasoning
Despite advances in reinforcement learning (RL)-based video reasoning with large language models (LLMs), data collection and finetuning remain significant challenges. These methods often rely on large-scale supervised fine-tuning (SFT) with extensive video data and long Chain-of-Thought (CoT) annotations, making them costly and hard to scale. To address this, we present Video-RTS, a new approach to improve video reasoning capability with drastically improved data efficiency by combining data-efficient RL with a video-adaptive test-time scaling (TTS) strategy. Based on observations about the data scaling of RL samples, we skip the resource-intensive SFT step and employ efficient pure-RL training with output-based rewards, requiring no additional annotations or extensive fine-tuning. Furthermore, to utilize computational resources more efficiently, we introduce a sparse-to-dense video TTS strategy that improves inference by iteratively adding frames based on output consistency. We validate our approach on multiple video reasoning benchmarks, showing that Video-RTS surpasses existing video reasoning models by an average of 2.4% in accuracy using only 3.6% training samples. For example, Video-RTS achieves a 4.2% improvement on Video-Holmes, a recent and challenging video reasoning benchmark, and a 2.6% improvement on MMVU. Notably, our pure RL training and adaptive video TTS offer complementary strengths, enabling Video-RTS's strong reasoning performance.
null
https://arxiv.org/abs/2507.06485v1
https://arxiv.org/pdf/2507.06485v1.pdf
null
[ "Ziyang Wang", "Jaehong Yoon", "Shoubin Yu", "Md Mohaiminul Islam", "Gedas Bertasius", "Mohit Bansal" ]
[ "Reinforcement Learning (RL)" ]
2025-07-09T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/voyaging-into-unbounded-dynamic-scenes-from-a
2507.04183
null
null
Voyaging into Unbounded Dynamic Scenes from a Single View
This paper studies the problem of generating an unbounded dynamic scene from a single view, which has wide applications in augmented/virtual reality and robotics. Since the scene is changing over time, different generated views need to be consistent with the underlying 3D motions. While previous works learn such consistency by training from multiple views, the generated scene regions are bounded to be close to the training views with limited camera movements. To address this issue, we propose DynamicVoyager that reformulates the dynamic scene generation as a scene outpainting process for new dynamic content. As 2D outpainting models can hardly generate 3D consistent motions from only 2D pixels at a single view, we consider pixels as rays to enrich the pixel input with the ray context, so that the 3D motion consistency can be learned from the ray information. More specifically, we first map the single-view video input to a dynamic point cloud with the estimated video depths. Then we render the partial video at a novel view and outpaint the video with ray contexts from the point cloud to generate 3D consistent motions. We employ the outpainted video to update the point cloud, which is used for scene outpainting from future novel views. Experiments show that our model is able to generate unbounded scenes with consistent motions along fly-through cameras, and the generated contents can be controlled with scene prompts.
null
https://arxiv.org/abs/2507.04183v1
https://arxiv.org/pdf/2507.04183v1.pdf
null
[ "Fengrui Tian", "Tianjiao Ding", "Jinqi Luo", "Hancheng Min", "René Vidal" ]
[ "Scene Generation" ]
2025-07-05T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/densereviewer-a-screening-prioritisation-tool
2502.03400
null
null
DenseReviewer: A Screening Prioritisation Tool for Systematic Review based on Dense Retrieval
Screening is a time-consuming and labour-intensive yet required task for medical systematic reviews, as tens of thousands of studies often need to be screened. Prioritising relevant studies to be screened allows downstream systematic review creation tasks to start earlier and save time. In previous work, we developed a dense retrieval method to prioritise relevant studies with reviewer feedback during the title and abstract screening stage. Our method outperforms previous active learning methods in both effectiveness and efficiency. In this demo, we extend this prior work by creating (1) a web-based screening tool that enables end-users to screen studies exploiting state-of-the-art methods and (2) a Python library that integrates models and feedback mechanisms and allows researchers to develop and demonstrate new active learning methods. We describe the tool's design and showcase how it can aid screening. The tool is available at https://densereviewer.ielab.io. The source code is also open sourced at https://github.com/ielab/densereviewer.
null
https://arxiv.org/abs/2502.03400v1
https://arxiv.org/pdf/2502.03400v1.pdf
null
[ "Xinyu Mao", "Teerapong Leelanupab", "Harrisen Scells", "Guido Zuccon" ]
[ "Active Learning" ]
2025-02-05T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/activation-steering-for-chain-of-thought
2507.04742
null
null
Activation Steering for Chain-of-Thought Compression
Large language models (LLMs) excel at complex reasoning when they include intermediate steps, known as "chains of thought" (CoTs). However, these rationales are often overly verbose, even for simple problems, leading to wasted context, increased latency, and higher energy consumption. We observe that verbose, English-heavy CoTs and concise, math-centric CoTs occupy distinct regions in the model's residual-stream activation space. By extracting and injecting a "steering vector" to transition between these modes, we can reliably shift generation toward more concise reasoning, effectively compressing CoTs without retraining. We formalize this approach as Activation-Steered Compression (ASC), an inference-time technique that shortens reasoning traces by directly modifying hidden representations. In addition, we provide a theoretical analysis of the impact of ASC on the output distribution, derived from a closed-form KL-divergence-bounded constraint to regulate steering strength. Using only 100 paired verbose and concise examples, ASC achieves up to 67.43% reduction in CoT length on MATH500 and GSM8K datasets, while maintaining accuracy across 7B, 8B, and 32B parameter models. As a training-free method, ASC introduces negligible runtime overhead and, on MATH500, delivers an average 2.73x speedup in end-to-end reasoning wall-clock time on an 8B model. This makes ASC a practical and efficient tool for streamlining the deployment of reasoning-capable LLMs in latency- or cost-sensitive settings. The code is available at: https://github.com/ArminAzizi98/ASC
null
https://arxiv.org/abs/2507.04742v2
https://arxiv.org/pdf/2507.04742v2.pdf
null
[ "Seyedarmin Azizi", "Erfan Baghaei Potraghloo", "Massoud Pedram" ]
[ "GSM8K", "Math" ]
2025-07-07T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/squeeze-the-soaked-sponge-efficient-off
2507.06892
null
null
Squeeze the Soaked Sponge: Efficient Off-policy Reinforcement Finetuning for Large Language Model
Reinforcement Learning (RL) has demonstrated its potential to improve the reasoning ability of Large Language Models (LLMs). One major limitation of most existing Reinforcement Finetuning (RFT) methods is that they are on-policy RL in nature, i.e., data generated during the past learning process is not fully utilized. This inevitably comes at a significant cost of compute and time, posing a stringent bottleneck on continuing economic and efficient scaling. To this end, we launch the renaissance of off-policy RL and propose Reincarnating Mix-policy Proximal Policy Gradient (ReMix), a general approach to enable on-policy RFT methods like PPO and GRPO to leverage off-policy data. ReMix consists of three major components: (1) Mix-policy proximal policy gradient with an increased Update-To-Data (UTD) ratio for efficient training; (2) KL-Convex policy constraint to balance the trade-off between stability and flexibility; (3) Policy reincarnation to achieve a seamless transition from efficient early-stage learning to steady asymptotic improvement. In our experiments, we train a series of ReMix models upon PPO, GRPO and 1.5B, 7B base models. ReMix shows an average Pass@1 accuracy of 52.10% (for 1.5B model) with 0.079M response rollouts, 350 training steps and achieves 63.27%/64.39% (for 7B model) with 0.007M/0.011M response rollouts, 50/75 training steps, on five math reasoning benchmarks (i.e., AIME'24, AMC'23, Minerva, OlympiadBench, and MATH500). Compared with 15 recent advanced models, ReMix shows SOTA-level performance with an over 30x to 450x reduction in training cost in terms of rollout data volume. In addition, we reveal insightful findings via multifaceted analysis, including the implicit preference for shorter responses due to the Whipping Effect of off-policy discrepancy, the collapse mode of self-reflection behavior under the presence of severe off-policyness, etc.
null
https://arxiv.org/abs/2507.06892v3
https://arxiv.org/pdf/2507.06892v3.pdf
null
[ "Jing Liang", "Hongyao Tang", "Yi Ma", "Jinyi Liu", "Yan Zheng", "Shuyue Hu", "Lei Bai", "Jianye Hao" ]
[ "Language Modeling", "Language Modelling", "Large Language Model", "Math", "Reinforcement Learning (RL)" ]
2025-07-09T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/movies-motion-aware-4d-dynamic-view-synthesis
2507.10065
null
null
MoVieS: Motion-Aware 4D Dynamic View Synthesis in One Second
We present MoVieS, a novel feed-forward model that synthesizes 4D dynamic novel views from monocular videos in one second. MoVieS represents dynamic 3D scenes using pixel-aligned grids of Gaussian primitives, explicitly supervising their time-varying motion. This allows, for the first time, the unified modeling of appearance, geometry and motion, and enables view synthesis, reconstruction and 3D point tracking within a single learning-based framework. By bridging novel view synthesis with dynamic geometry reconstruction, MoVieS enables large-scale training on diverse datasets with minimal dependence on task-specific supervision. As a result, it also naturally supports a wide range of zero-shot applications, such as scene flow estimation and moving object segmentation. Extensive experiments validate the effectiveness and efficiency of MoVieS across multiple tasks, achieving competitive performance while offering several orders of magnitude speedups.
null
https://arxiv.org/abs/2507.10065v1
https://arxiv.org/pdf/2507.10065v1.pdf
null
[ "Chenguo Lin", "YuChen Lin", "Panwang Pan", "Yifan Yu", "Honglei Yan", "Katerina Fragkiadaki", "Yadong Mu" ]
[ "Novel View Synthesis", "Point Tracking", "Scene Flow Estimation", "Semantic Segmentation" ]
2025-07-14T00:00:00
null
null
null
null
[]