Dataset columns (name, dtype, observed range):

  paper_url            string          length 35 to 81
  arxiv_id             string          length 6 to 35
  nips_id              float64
  openreview_id        string          length 9 to 93
  title                string          length 1 to 1.02k
  abstract             string          length 0 to 56.5k
  short_abstract       string          length 0 to 1.95k
  url_abs              string          length 16 to 996
  url_pdf              string          length 16 to 996
  proceeding           string          length 7 to 1.03k
  authors              list            length 0 to 3.31k
  tasks                list            length 0 to 147
  date                 timestamp[ns]   1951-09-01 00:00:00 to 2222-12-22 00:00:00
  conference_url_abs   string          length 16 to 199
  conference_url_pdf   string          length 21 to 200
  conference           string          length 2 to 47
  reproduces_paper     string          22 classes
  methods              list            length 0 to 7.5k

Each record below lists these fields in this order, one value per line.
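Since the schema above describes how each record is laid out, here is a minimal Python sketch of loading and filtering such records; it assumes the rows are available locally as JSON Lines with these column names and ISO-format date strings, and the file name papers.jsonl is hypothetical.

```python
import json
from datetime import datetime

# Minimal sketch: load rows that follow the schema above from a local
# JSON Lines file ("papers.jsonl" is a hypothetical file name) and list
# adversarial-ML papers from July 2025 onward.
with open("papers.jsonl", "r", encoding="utf-8") as fh:
    rows = [json.loads(line) for line in fh]

cutoff = datetime(2025, 7, 1)
for row in rows:
    date = datetime.fromisoformat(row["date"])  # assumes e.g. "2025-07-11T00:00:00"
    tasks = row.get("tasks") or []              # list column, may be empty or null
    if date >= cutoff and "Adversarial Attack" in tasks:
        print(row["arxiv_id"], "-", row["title"])
```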
https://paperswithcode.com/paper/exploiting-leaderboards-for-large-scale
2507.08983
null
null
Exploiting Leaderboards for Large-Scale Distribution of Malicious Models
While poisoning attacks on machine learning models have been extensively studied, the mechanisms by which adversaries can distribute poisoned models at scale remain largely unexplored. In this paper, we shed light on how model leaderboards -- ranked platforms for model discovery and evaluation -- can serve as a powerful channel for adversaries for stealthy large-scale distribution of poisoned models. We present TrojanClimb, a general framework that enables injection of malicious behaviors while maintaining competitive leaderboard performance. We demonstrate its effectiveness across four diverse modalities: text-embedding, text-generation, text-to-speech and text-to-image, showing that adversaries can successfully achieve high leaderboard rankings while embedding arbitrary harmful functionalities, from backdoors to bias injection. Our findings reveal a significant vulnerability in the machine learning ecosystem, highlighting the urgent need to redesign leaderboard evaluation mechanisms to detect and filter malicious (e.g., poisoned) models, while exposing broader security implications for the machine learning community regarding the risks of adopting models from unverified sources.
null
https://arxiv.org/abs/2507.08983v1
https://arxiv.org/pdf/2507.08983v1.pdf
null
[ "Anshuman Suri", "Harsh Chaudhari", "Yuefeng Peng", "Ali Naseh", "Amir Houmansadr", "Alina Oprea" ]
[ "Model Discovery", "Text Generation", "text-to-speech", "Text to Speech" ]
2025-07-11T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/vip-visual-information-protection-through
2507.08982
null
null
VIP: Visual Information Protection through Adversarial Attacks on Vision-Language Models
Recent years have witnessed remarkable progress in developing Vision-Language Models (VLMs) capable of processing both textual and visual inputs. These models have demonstrated impressive performance, leading to their widespread adoption in various applications. However, this widespread adoption raises serious concerns regarding user privacy, particularly when models inadvertently process or expose private visual information. In this work, we frame the preservation of privacy in VLMs as an adversarial attack problem. We propose a novel attack strategy that selectively conceals information within designated Regions of Interest (ROIs) in an image, effectively preventing VLMs from accessing sensitive content while preserving the semantic integrity of the remaining image. Unlike conventional adversarial attacks that often disrupt the entire image, our method maintains high coherence in unmasked areas. Experimental results across three state-of-the-art VLMs, namely LLaVA, Instruct-BLIP, and BLIP2-T5, demonstrate up to a 98% reduction in detecting targeted ROIs, while maintaining global image semantics intact, as confirmed by high similarity scores between clean and adversarial outputs. We believe that this work contributes to a more privacy-conscious use of multimodal models and offers a practical tool for further research, with the source code publicly available at: https://github.com/hbrachemi/Vlm_defense-attack.
null
https://arxiv.org/abs/2507.08982v1
https://arxiv.org/pdf/2507.08982v1.pdf
null
[ "Hanene F. Z. Brachemi Meftah", "Wassim Hamidouche", "Sid Ahmed Fezza", "Olivier Déforges" ]
[ "Adversarial Attack" ]
2025-07-11T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/when-and-where-do-data-poisons-attack-textual
2507.10578
null
null
When and Where do Data Poisons Attack Textual Inversion?
Poisoning attacks pose significant challenges to the robustness of diffusion models (DMs). In this paper, we systematically analyze when and where poisoning attacks textual inversion (TI), a widely used personalization technique for DMs. We first introduce Semantic Sensitivity Maps, a novel method for visualizing the influence of poisoning on text embeddings. Second, we identify and experimentally verify that DMs exhibit non-uniform learning behavior across timesteps, focusing on lower-noise samples. Poisoning attacks inherit this bias and inject adversarial signals predominantly at lower timesteps. Lastly, we observe that adversarial signals distract learning away from relevant concept regions within training data, corrupting the TI process. Based on these insights, we propose Safe-Zone Training (SZT), a novel defense mechanism comprised of 3 key components: (1) JPEG compression to weaken high-frequency poison signals, (2) restriction to high timesteps during TI training to avoid adversarial signals at lower timesteps, and (3) loss masking to constrain learning to relevant regions. Extensive experiments across multiple poisoning methods demonstrate that SZT greatly enhances the robustness of TI against all poisoning attacks, improving generative quality beyond prior published defenses. Code: www.github.com/JStyborski/Diff_Lab Data: www.github.com/JStyborski/NC10
null
https://arxiv.org/abs/2507.10578v2
https://arxiv.org/pdf/2507.10578v2.pdf
null
[ "Jeremy Styborski", "Mingzhi Lyu", "Jiayou Lu", "Nupur Kapur", "Adams Kong" ]
[]
2025-07-11T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
https://paperswithcode.com/paper/the-dark-side-of-llms-agent-based-attacks-for
2507.06850
null
null
The Dark Side of LLMs: Agent-based Attacks for Complete Computer Takeover
The rapid adoption of Large Language Model (LLM) agents and multi-agent systems enables unprecedented capabilities in natural language processing and generation. However, these systems have introduced unprecedented security vulnerabilities that extend beyond traditional prompt injection attacks. This paper presents the first comprehensive evaluation of LLM agents as attack vectors capable of achieving complete computer takeover through the exploitation of trust boundaries within agentic AI systems where autonomous entities interact and influence each other. We demonstrate that adversaries can leverage three distinct attack surfaces - direct prompt injection, RAG backdoor attacks, and inter-agent trust exploitation - to coerce popular LLMs (including GPT-4o, Claude-4 and Gemini-2.5) into autonomously installing and executing malware on victim machines. Our evaluation of 17 state-of-the-art LLMs reveals an alarming vulnerability hierarchy: while 41.2% of models succumb to direct prompt injection, 52.9% are vulnerable to RAG backdoor attacks, and a critical 82.4% can be compromised through inter-agent trust exploitation. Notably, we discovered that LLMs which successfully resist direct malicious commands will execute identical payloads when requested by peer agents, revealing a fundamental flaw in current multi-agent security models. Our findings demonstrate that only 5.9% of tested models (1/17) proved resistant to all attack vectors, with the majority exhibiting context-dependent security behaviors that create exploitable blind spots. Our findings also highlight the need to increase awareness and research on the security risks of LLMs, showing a paradigm shift in cybersecurity threats, where AI tools themselves become sophisticated attack vectors.
null
https://arxiv.org/abs/2507.06850v3
https://arxiv.org/pdf/2507.06850v3.pdf
null
[ "Matteo Lupinacci", "Francesco Aurelio Pironti", "Francesco Blefari", "Francesco Romeo", "Luigi Arena", "Angelo Furfaro" ]
[ "Large Language Model", "RAG" ]
2025-07-09T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275", "description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.", "full_name": "Dropout", "introduced_year": 2000, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Dropout", "source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "source_url": "http://jmlr.org/papers/v15/srivastava14a.html" }, { "code_snippet_url": "https://github.com/google-research/bert", "description": "**BERT**, or Bidirectional Encoder Representations from Transformers, improves upon standard [Transformers](http://paperswithcode.com/method/transformer) by removing the unidirectionality constraint by using a *masked language model* (MLM) pre-training objective. The masked language model randomly masks some of the tokens from the input, and the objective is to predict the original vocabulary id of the masked word based only on its context. Unlike left-to-right language model pre-training, the MLM objective enables the representation to fuse the left and the right context, which allows us to pre-train a deep bidirectional Transformer. In addition to the masked language model, BERT uses a *next sentence prediction* task that jointly pre-trains text-pair representations. \r\n\r\nThere are two steps in BERT: *pre-training* and *fine-tuning*. During pre-training, the model is trained on unlabeled data over different pre-training tasks. For fine-tuning, the BERT model is first initialized with the pre-trained parameters, and all of the parameters are fine-tuned using labeled data from the downstream tasks. Each downstream task has separate fine-tuned models, even though they\r\nare initialized with the same pre-trained parameters.", "full_name": "BERT", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Language Models** are models for predicting the next word or character in a document. Below you can find a continuously updating list of language models.\r\n\r\n", "name": "Language Models", "parent": null }, "name": "BERT", "source_title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "source_url": "https://arxiv.org/abs/1810.04805v2" }, { "code_snippet_url": null, "description": "**BART** is a [denoising autoencoder](https://paperswithcode.com/method/denoising-autoencoder) for pretraining sequence-to-sequence models. It is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. 
It uses a standard [Transformer](https://paperswithcode.com/method/transformer)-based neural machine translation architecture. It uses a standard seq2seq/NMT architecture with a bidirectional encoder (like [BERT](https://paperswithcode.com/method/bert)) and a left-to-right decoder (like [GPT](https://paperswithcode.com/method/gpt)). This means the encoder's attention mask is fully visible, like BERT, and the decoder's attention mask is causal, like [GPT2](https://paperswithcode.com/method/gpt-2).", "full_name": "BART", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "BART", "source_title": "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension", "source_url": "https://arxiv.org/abs/1910.13461v1" }, { "code_snippet_url": "", "description": "**Retriever-Augmented Generation**, or **RAG**, is a type of language generation model that combines pre-trained parametric and non-parametric memory for language generation. Specifically, the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. For query $x$, Maximum Inner Product Search (MIPS) is used to find the top-K documents $z\\_{i}$. For final prediction $y$, we treat $z$ as a latent variable and marginalize over seq2seq predictions given different documents.", "full_name": "RAG", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "RAG", "source_title": "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks", "source_url": "https://arxiv.org/abs/2005.11401v4" } ]
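The Dropout entry above describes dropping units with probability p during training and rescaling at test time; below is a minimal sketch of the common inverted-dropout variant, which rescales the kept activations by 1/(1-p) during training instead of scaling weights by p at test time (not the linked framework implementation).

```python
import torch

def dropout(x, p=0.5, training=True):
    # Inverted dropout: zero each unit with probability p at training time and
    # scale the survivors by 1/(1-p); at test time the input passes through unchanged.
    if not training or p == 0.0:
        return x
    mask = (torch.rand_like(x) >= p).float()
    return x * mask / (1.0 - p)
```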
https://paperswithcode.com/paper/towards-imperceptible-jpeg-image-hiding-multi
2507.08343
null
null
Towards Imperceptible JPEG Image Hiding: Multi-range Representations-driven Adversarial Stego Generation
Deep hiding has been exploring the hiding capability of deep learning-based models, aiming to conceal image-level messages into cover images and reveal them from generated stego images. Existing schemes are easily detected by steganalyzers due to their large payloads and their limitation to feature extraction based solely on either pure convolution or pure transformer operators within a single range, as well as pixel-level loss constraints. To address the issue, in this paper, we introduce generation-based adversarial attacks into color JPEG image deep hiding and propose a multi-range representations-driven adversarial stego generation framework called MRAG from a steganalysis perspective. Specifically, we integrate the local-range neighbor reception characteristic of the convolution and the global-range dependency modeling of the transformer to construct MRAG. Meanwhile, we use the transformed images obtained through coarse-grained and fine-grained frequency decomposition as inputs, introducing multi-grained information. Furthermore, a features angle-norm disentanglement loss is designed to constrain the generated stegos closer to covers in the angle and norm space of the steganalyzer's classified features. Consequently, small yet effective adversarial perturbations can be injected into the process of generating stegos, ensuring that stegos maintain favorable secret restorability and imperceptibility. Extensive experiments demonstrate that MRAG can achieve state-of-the-art performance.
null
https://arxiv.org/abs/2507.08343v1
https://arxiv.org/pdf/2507.08343v1.pdf
null
[ "Junxue Yang", "Xin Liao", "Weixuan Tang", "Jianhua Yang", "Zheng Qin" ]
[ "Disentanglement", "Steganalysis" ]
2025-07-11T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)", "full_name": "Convolution", "introduced_year": 1980, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "Convolution", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/lightweight-safety-guardrails-via-synthetic
2507.08284
null
null
Lightweight Safety Guardrails via Synthetic Data and RL-guided Adversarial Training
We introduce a lightweight yet highly effective safety guardrail framework for language models, demonstrating that small-scale language models can achieve, and even surpass, the performance of larger counterparts in content moderation tasks. This is accomplished through high-fidelity synthetic data generation and adversarial training. The synthetic data generation process begins with human-curated seed data, which undergoes query augmentation and paraphrasing to create diverse and contextually rich examples. This augmented data is then subjected to multiple rounds of curation, ensuring high fidelity and relevance. Inspired by recent advances in the Generative Adversarial Network (GAN) architecture, our adversarial training employs reinforcement learning to guide a generator that produces challenging synthetic examples. These examples are used to fine-tune the safety classifier, enhancing its ability to detect and mitigate harmful content. Additionally, we incorporate strategies from recent research on efficient LLM training, leveraging the capabilities of smaller models to improve the performance of larger generative models. With iterative adversarial training and the generation of diverse, high-quality synthetic data, our framework enables small language models (SLMs) to serve as robust safety guardrails. This approach not only reduces computational overhead but also enhances resilience against adversarial attacks, offering a scalable and efficient solution for content moderation in AI systems.
null
https://arxiv.org/abs/2507.08284v1
https://arxiv.org/pdf/2507.08284v1.pdf
null
[ "Aleksei Ilin", "Gor Matevosyan", "Xueying Ma", "Vladimir Eremin", "Suhaa Dada", "Muqun Li", "Riyaaz Shaik", "Haluk Noyan Tokgozoglu" ]
[ "Generative Adversarial Network", "Synthetic Data Generation" ]
2025-07-11T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/admissibility-of-stein-shrinkage-for-batch
2507.08261
null
null
Admissibility of Stein Shrinkage for Batch Normalization in the Presence of Adversarial Attacks
Batch normalization (BN) is a ubiquitous operation in deep neural networks used primarily to achieve stability and regularization during network training. BN involves feature map centering and scaling using sample means and variances, respectively. Since these statistics are being estimated across the feature maps within a batch, this problem is ideally suited for the application of Stein's shrinkage estimation, which leads to a better, in the mean-squared-error sense, estimate of the mean and variance of the batch. In this paper, we prove that the Stein shrinkage estimator for the mean and variance dominates over the sample mean and variance estimators in the presence of adversarial attacks when modeling these attacks using sub-Gaussian distributions. This facilitates and justifies the application of Stein shrinkage to estimate the mean and variance parameters in BN and use it in image classification (segmentation) tasks with and without adversarial attacks. We present SOTA performance results using this Stein corrected batch norm in a standard ResNet architecture applied to the task of image classification using CIFAR-10 data, 3D CNN on PPMI (neuroimaging) data and image segmentation using HRNet on Cityscape data with and without adversarial attacks.
null
https://arxiv.org/abs/2507.08261v1
https://arxiv.org/pdf/2507.08261v1.pdf
null
[ "Sofia Ivolgina", "P. Thomas Fletcher", "Baba C. Vemuri" ]
[ "image-classification", "Image Classification", "Image Segmentation", "Semantic Segmentation" ]
2025-07-11T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!", "full_name": "*Communicated@Fast*How Do I Communicate to Expedia?", "introduced_year": 2000, "main_collection": { "area": "General", "description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.", "name": "Activation Functions", "parent": null }, "name": "ReLU", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)", "full_name": "Convolution", "introduced_year": 1980, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. 
They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "Convolution", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "", "full_name": "3 Dimensional Convolutional Neural Network", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "If you have questions or want to make special travel arrangements, you can make them online or call ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. For hearing or speech impaired assistance dial 711 to be connected through the National Relay Service.", "name": "Convolutional Neural Networks", "parent": "Image Models" }, "name": "3D CNN", "source_title": "Uniformizing Techniques to Process CT scans with 3D CNNs for Tuberculosis Prediction", "source_url": "https://arxiv.org/abs/2007.13224v1" }, { "code_snippet_url": "https://github.com/HRNet/HRNet-Image-Classification/blob/8f158719e821836e21e6cba99a3241a12a13bc41/lib/models/cls_hrnet.py#L254", "description": "**HRNet**, or **High-Resolution Net**, is a general purpose convolutional neural network for tasks like semantic segmentation, object detection and image classification. It is able to maintain high resolution representations through the whole process. We start from a high-resolution [convolution](https://paperswithcode.com/method/convolution) stream, gradually add high-to-low resolution convolution streams one by one, and connect the multi-resolution streams in parallel. The resulting network consists of several ($4$ in the paper) stages and\r\nthe $n$th stage contains $n$ streams corresponding to $n$ resolutions. The authors conduct repeated multi-resolution fusions by exchanging the information across the parallel streams over and over.", "full_name": "HRNet", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "If you have questions or want to make special travel arrangements, you can make them online or call ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. For hearing or speech impaired assistance dial 711 to be connected through the National Relay Service.", "name": "Convolutional Neural Networks", "parent": "Image Models" }, "name": "HRNet", "source_title": "Deep High-Resolution Representation Learning for Visual Recognition", "source_url": "https://arxiv.org/abs/1908.07919v2" } ]
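The abstract above characterizes batch normalization as centering and scaling feature maps with sample means and variances; below is a minimal NumPy sketch of that standard operation, with the paper's Stein-shrinkage correction of the statistics deliberately left out.

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Standard BN: normalize each feature with the sample mean and variance
    # computed over the batch axis, then apply a learnable scale and shift.
    # (The paper replaces these sample statistics with Stein-shrunk estimates.)
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```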
https://paperswithcode.com/paper/a-dynamic-stackelberg-game-framework-for
2507.08207
null
null
A Dynamic Stackelberg Game Framework for Agentic AI Defense Against LLM Jailbreaking
As large language models (LLMs) are increasingly deployed in critical applications, the challenge of jailbreaking, where adversaries manipulate the models to bypass safety mechanisms, has become a significant concern. This paper presents a dynamic Stackelberg game framework to model the interactions between attackers and defenders in the context of LLM jailbreaking. The framework treats the prompt-response dynamics as a sequential extensive-form game, where the defender, as the leader, commits to a strategy while anticipating the attacker's optimal responses. We propose a novel agentic AI solution, the "Purple Agent," which integrates adversarial exploration and defensive strategies using Rapidly-exploring Random Trees (RRT). The Purple Agent actively simulates potential attack trajectories and intervenes proactively to prevent harmful outputs. This approach offers a principled method for analyzing adversarial dynamics and provides a foundation for mitigating the risk of jailbreaking.
null
https://arxiv.org/abs/2507.08207v1
https://arxiv.org/pdf/2507.08207v1.pdf
null
[ "Zhengye Han", "Quanyan Zhu" ]
[]
2025-07-10T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/identifying-the-smallest-adversarial-load
2507.07850
null
null
Identifying the Smallest Adversarial Load Perturbations that Render DC-OPF Infeasible
What is the globally smallest load perturbation that renders DC-OPF infeasible? Reliably identifying such "adversarial attack" perturbations has useful applications in a variety of emerging grid-related contexts, including machine learning performance verification, cybersecurity, and operational robustness of power systems dominated by stochastic renewable energy resources. In this paper, we formulate the inherently nonconvex adversarial attack problem by applying a parameterized version of Farkas' lemma to a perturbed set of DC-OPF equations. Since the resulting formulation is very hard to globally optimize, we also propose a parameterized generation control policy which, when applied to the primal DC-OPF problem, provides solvability guarantees. Together, these nonconvex problems provide guaranteed upper and lower bounds on adversarial attack size; by combining them into a single optimization problem, we can efficiently "squeeze" these bounds towards a common global solution. We apply these methods on a range of small- to medium-sized test cases from PGLib, benchmarking our results against the best adversarial attack lower bounds provided by Gurobi 12.0's spatial Branch and Bound solver.
null
https://arxiv.org/abs/2507.07850v1
https://arxiv.org/pdf/2507.07850v1.pdf
null
[ "Samuel Chevalier", "William A. Wheeler" ]
[ "Adversarial Attack", "Benchmarking", "LEMMA" ]
2025-07-10T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/recursive-bound-constrained-adagrad-with
2507.11513
null
null
Recursive Bound-Constrained AdaGrad with Applications to Multilevel and Domain Decomposition Minimization
Two OFFO (Objective-Function Free Optimization) noise-tolerant algorithms are presented that handle bound constraints, inexact gradients and use second-order information when available. The first is a multi-level method exploiting a hierarchical description of the problem and the second is a domain-decomposition method covering the standard additive Schwarz decompositions. Both are generalizations of the first-order AdaGrad algorithm for unconstrained optimization. Because these algorithms share a common theoretical framework, a single convergence/complexity theory is provided which covers them both. Its main result is that, with high probability, both methods need at most $O(\epsilon^{-2})$ iterations and noisy gradient evaluations to compute an $\epsilon$-approximate first-order critical point of the bound-constrained problem. Extensive numerical experiments are discussed on applications ranging from PDE-based problems to deep neural network training, illustrating their remarkable computational efficiency.
null
https://arxiv.org/abs/2507.11513v1
https://arxiv.org/pdf/2507.11513v1.pdf
null
[ "Serge Gratton", "Alena Kopaničáková", "Philippe Toint" ]
[ "Computational Efficiency" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/Dawn-Of-Eve/nadir/blob/main/src/nadir/adagrad.py", "description": "**AdaGrad** is a stochastic optimization method that adapts the learning rate to the parameters. It performs smaller updates for parameters associated with frequently occurring features, and larger updates for parameters associated with infrequently occurring features. In its update rule, Adagrad modifies the general learning rate $\\eta$ at each time step $t$ for every parameter $\\theta\\_{i}$ based on the past gradients for $\\theta\\_{i}$: \r\n\r\n$$ \\theta\\_{t+1, i} = \\theta\\_{t, i} - \\frac{\\eta}{\\sqrt{G\\_{t, ii} + \\epsilon}}g\\_{t, i} $$\r\n\r\nThe benefit of AdaGrad is that it eliminates the need to manually tune the learning rate; most leave it at a default value of $0.01$. Its main weakness is the accumulation of the squared gradients in the denominator. Since every added term is positive, the accumulated sum keeps growing during training, causing the learning rate to shrink and becoming infinitesimally small.\r\n\r\nImage: [Alec Radford](https://twitter.com/alecrad)", "full_name": "AdaGrad", "introduced_year": 2011, "main_collection": { "area": "General", "description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.", "name": "Stochastic Optimization", "parent": "Optimization" }, "name": "AdaGrad", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/illuminating-the-three-dogmas-of
2507.11482
null
null
Illuminating the Three Dogmas of Reinforcement Learning under Evolutionary Light
Three core tenets of reinforcement learning (RL)--concerning the definition of agency, the objective of learning, and the scope of the reward hypothesis--have been highlighted as key targets for conceptual revision, with major implications for theory and application. We propose a framework, inspired by open-ended evolutionary theory, to reconsider these three "dogmas." We revisit each assumption and address related concerns raised alongside them. To make our arguments relevant to RL as a model of biological learning, we first establish that evolutionary dynamics can plausibly operate within living brains over an individual's lifetime, and are not confined to cross-generational processes. We begin by revisiting the second dogma, drawing on evolutionary insights to enrich the "adaptation-rather-than-search" view of learning. We then address the third dogma regarding the limits of the reward hypothesis, using analogies from evolutionary fitness to illuminate the scalar reward vs. multi-objective debate. After discussing practical implications for exploration in RL, we turn to the first--and arguably most fundamental--issue: the absence of a formal account of agency. We argue that unlike the other two problems, the evolutionary paradigm alone cannot resolve the agency question, though it gestures in a productive direction. We advocate integrating ideas from origins-of-life theory, where the thermodynamics of sustenance and replication offer promising foundations for understanding agency and resource-constrained reinforcement learning in biological systems.
null
https://arxiv.org/abs/2507.11482v1
https://arxiv.org/pdf/2507.11482v1.pdf
null
[ "Mani Hamidi", "Terrence W. Deacon" ]
[ "Reinforcement Learning (RL)" ]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/coli-a-hierarchical-efficient-compressor-for
2507.11443
null
null
COLI: A Hierarchical Efficient Compressor for Large Images
The escalating adoption of high-resolution, large-field-of-view imagery amplifies the need for efficient compression methodologies. Conventional techniques frequently fail to preserve critical image details, while data-driven approaches exhibit limited generalizability. Implicit Neural Representations (INRs) present a promising alternative by learning continuous mappings from spatial coordinates to pixel intensities for individual images, thereby storing network weights rather than raw pixels and avoiding the generalization problem. However, INR-based compression of large images faces challenges including slow compression speed and suboptimal compression ratios. To address these limitations, we introduce COLI (Compressor for Large Images), a novel framework leveraging Neural Representations for Videos (NeRV). First, recognizing that INR-based compression constitutes a training process, we accelerate its convergence through a pretraining-finetuning paradigm, mixed-precision training, and reformulation of the sequential loss into a parallelizable objective. Second, capitalizing on INRs' transformation of image storage constraints into weight storage, we implement Hyper-Compression, a novel post-training technique to substantially enhance compression ratios while maintaining minimal output distortion. Evaluations across two medical imaging datasets demonstrate that COLI consistently achieves competitive or superior PSNR and SSIM metrics at significantly reduced bits per pixel (bpp), while accelerating NeRV training by up to 4 times.
null
https://arxiv.org/abs/2507.11443v1
https://arxiv.org/pdf/2507.11443v1.pdf
null
[ "Haoran Wang", "Hanyu Pei", "Yang Lyu", "Kai Zhang", "Li Li", "Feng-Lei Fan" ]
[ "SSIM" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/lorenzopapa5/SPEED", "description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.", "full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings", "introduced_year": 2000, "main_collection": null, "name": "SPEED", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/towards-reliable-objective-evaluation-metrics
2507.11427
null
null
Towards Reliable Objective Evaluation Metrics for Generative Singing Voice Separation Models
Traditional Blind Source Separation Evaluation (BSS-Eval) metrics were originally designed to evaluate linear audio source separation models based on methods such as time-frequency masking. However, recent generative models may introduce nonlinear relationships between the separated and reference signals, limiting the reliability of these metrics for objective evaluation. To address this issue, we conduct a Degradation Category Rating listening test and analyze correlations between the obtained degradation mean opinion scores (DMOS) and a set of objective audio quality metrics for the task of singing voice separation. We evaluate three state-of-the-art discriminative models and two new competitive generative models. For both discriminative and generative models, intrusive embedding-based metrics show higher correlations with DMOS than conventional intrusive metrics such as BSS-Eval. For discriminative models, the highest correlation is achieved by the MSE computed on Music2Latent embeddings. When it comes to the evaluation of generative models, the strongest correlations are evident for the multi-resolution STFT loss and the MSE calculated on MERT-L12 embeddings, with the latter also providing the most balanced correlation across both model types. Our results highlight the limitations of BSS-Eval metrics for evaluating generative singing voice separation models and emphasize the need for careful selection and validation of alternative evaluation metrics for the task of singing voice separation.
null
https://arxiv.org/abs/2507.11427v1
https://arxiv.org/pdf/2507.11427v1.pdf
null
[ "Paul A. Bereuter", "Benjamin Stahl", "Mark D. Plumbley", "Alois Sontacchi" ]
[ "Audio Source Separation", "blind source separation" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/seq-vs-seq-an-open-suite-of-paired-encoders
2507.11412
null
null
Seq vs Seq: An Open Suite of Paired Encoders and Decoders
The large language model (LLM) community focuses almost exclusively on decoder-only language models, since they are easier to use for text generation. However, a large subset of the community still uses encoder-only models for tasks such as classification or retrieval. Previous work has attempted to compare these architectures, but is forced to make comparisons with models that have different numbers of parameters, training techniques, and datasets. We introduce the SOTA open-data Ettin suite of models: paired encoder-only and decoder-only models ranging from 17 million parameters to 1 billion, trained on up to 2 trillion tokens. Using the same recipe for both encoder-only and decoder-only models produces SOTA recipes in both categories for their respective sizes, beating ModernBERT as an encoder and Llama 3.2 and SmolLM2 as decoders. Like previous work, we find that encoder-only models excel at classification and retrieval tasks while decoders excel at generative tasks. However, we show that adapting a decoder model to encoder tasks (and vice versa) through continued training is subpar compared to using only the reverse objective (i.e. a 400M encoder outperforms a 1B decoder on MNLI, and vice versa for generative tasks). We open-source all artifacts of this study including training data, training order segmented by checkpoint, and 200+ checkpoints to allow future work to analyze or extend all aspects of training.
null
https://arxiv.org/abs/2507.11412v1
https://arxiv.org/pdf/2507.11412v1.pdf
null
[ "Orion Weller", "Kathryn Ricci", "Marc Marone", "Antoine Chaffin", "Dawn Lawrie", "Benjamin Van Durme" ]
[ "Decoder", "Large Language Model", "Retrieval", "Text Generation" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "**LLaMA** is a collection of foundation language models ranging from 7B to 65B parameters. It is based on the transformer architecture with various improvements that were subsequently proposed. The main difference with the original architecture are listed below.\r\n\r\n- RMSNorm normalizing function is used to improve the training stability, by normalizing the input of each transformer sub-layer, instead of normalizing the output.\r\n- The ReLU non-linearity is replaced by the SwiGLU activation function to improve performance.\r\n- Absolute positional embeddings are removed and instead rotary positional embeddings (RoPE) are added at each layer of the network.", "full_name": "LLaMA", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Language Models** are models for predicting the next word or character in a document. Below you can find a continuously updating list of language models.\r\n\r\n", "name": "Language Models", "parent": null }, "name": "LLaMA", "source_title": "LLaMA: Open and Efficient Foundation Language Models", "source_url": "https://arxiv.org/abs/2302.13971v1" } ]
https://paperswithcode.com/paper/inverse-optimal-control-with-constraint
2507.11392
null
null
Inverse Optimal Control with Constraint Relaxation
Inverse optimal control (IOC) is a promising paradigm for learning and mimicking optimal control strategies from capable demonstrators, or gaining a deeper understanding of their intentions, by estimating an unknown objective function from one or more corresponding optimal control sequences. When computing estimates from demonstrations in environments with safety-preserving inequality constraints, acknowledging their presence in the chosen IOC method is crucial given their strong influence on the final control strategy. However, solution strategies capable of considering inequality constraints, such as the inverse Karush-Kuhn-Tucker approach, rely on their correct activation and fulfillment; a restrictive assumption when dealing with noisy demonstrations. To overcome this problem, we leverage the concept of exact penalty functions for IOC and show preservation of estimation accuracy. Considering noisy demonstrations, we then illustrate how the usage of penalty functions reduces the number of unknown variables and how their approximations enhance the estimation method's capacity to account for wrong constraint activations within a polytopic-constrained environment. The proposed method is evaluated for three systems in simulation, outperforming traditional relaxation approaches for noisy demonstrations.
null
https://arxiv.org/abs/2507.11392v1
https://arxiv.org/pdf/2507.11392v1.pdf
null
[ "Rahel Rickenbach", "Amon Lahr", "Melanie N. Zeilinger" ]
[]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/sparse-regression-codes-exploit-multi-user
2507.11383
null
null
Sparse Regression Codes exploit Multi-User Diversity without CSI
We study sparse regression codes (SPARC) for multiple access channels with multiple receive antennas, in non-coherent flat fading channels. We propose a novel practical decoder, referred to as maximum likelihood matching pursuit (MLMP), which greedily finds the support of the codewords of users with partial maximum likelihood metrics. As opposed to the conventional successive-cancellation based greedy algorithms, MLMP works as a successive-combining energy detector. We also propose MLMP modifications to improve the performance at high code rates. Our studies in short block lengths show that, even without any channel state information, SPARC with MLMP decoder achieves multi-user diversity in some scenarios, giving better error performance with multiple users than that of the corresponding single-user case. We also show that SPARC with MLMP performs better than conventional sparse recovery algorithms and pilot-aided transmissions with polar codes.
null
https://arxiv.org/abs/2507.11383v1
https://arxiv.org/pdf/2507.11383v1.pdf
null
[ "V S V Sandeep", "Sai Dinesh Kancharana", "Arun Pachai Kannu" ]
[ "Decoder", "Diversity", "regression" ]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/acting-and-planning-with-hierarchical
2507.11345
null
null
Acting and Planning with Hierarchical Operational Models on a Mobile Robot: A Study with RAE+UPOM
Robotic task execution faces challenges due to the inconsistency between symbolic planner models and the rich control structures actually running on the robot. In this paper, we present the first physical deployment of an integrated actor-planner system that shares hierarchical operational models for both acting and planning, interleaving the Reactive Acting Engine (RAE) with an anytime UCT-like Monte Carlo planner (UPOM). We implement RAE+UPOM on a mobile manipulator in a real-world deployment for an object collection task. Our experiments demonstrate robust task execution under action failures and sensor noise, and provide empirical insights into the interleaved acting-and-planning decision making process.
null
https://arxiv.org/abs/2507.11345v1
https://arxiv.org/pdf/2507.11345v1.pdf
null
[ "Oscar Lima", "Marc Vinci", "Sunandita Patra", "Sebastian Stock", "Joachim Hertzberg", "Martin Atzmueller", "Malik Ghallab", "Dana Nau", "Paolo Traverso" ]
[ "Decision Making" ]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/cogddn-a-cognitive-demand-driven-navigation
2507.11334
null
null
CogDDN: A Cognitive Demand-Driven Navigation with Decision Optimization and Dual-Process Thinking
Mobile robots are increasingly required to navigate and interact within unknown and unstructured environments to meet human demands. Demand-driven navigation (DDN) enables robots to identify and locate objects based on implicit human intent, even when object locations are unknown. However, traditional data-driven DDN methods rely on pre-collected data for model training and decision-making, limiting their generalization capability in unseen scenarios. In this paper, we propose CogDDN, a VLM-based framework that emulates the human cognitive and learning mechanisms by integrating fast and slow thinking systems and selectively identifying key objects essential to fulfilling user demands. CogDDN identifies appropriate target objects by semantically aligning detected objects with the given instructions. Furthermore, it incorporates a dual-process decision-making module, comprising a Heuristic Process for rapid, efficient decisions and an Analytic Process that analyzes past errors, accumulates them in a knowledge base, and continuously improves performance. Chain of Thought (CoT) reasoning strengthens the decision-making process. Extensive closed-loop evaluations on the AI2Thor simulator with the ProcThor dataset show that CogDDN outperforms single-view camera-only methods by 15%, demonstrating significant improvements in navigation accuracy and adaptability. The project page is available at https://yuehaohuang.github.io/CogDDN/.
null
https://arxiv.org/abs/2507.11334v1
https://arxiv.org/pdf/2507.11334v1.pdf
null
[ "Yuehao Huang", "Liang Liu", "Shuangming Lei", "Yukai Ma", "Hao Su", "Jianbiao Mei", "Pengxiang Zhao", "Yaqing Gu", "Yong liu", "Jiajun Lv" ]
[ "Decision Making", "Navigate" ]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/a-mixed-primitive-based-gaussian-splatting
2507.11321
null
null
A Mixed-Primitive-based Gaussian Splatting Method for Surface Reconstruction
Recently, Gaussian Splatting (GS) has received a lot of attention in surface reconstruction. However, while 3D objects can be of complex and diverse shapes in the real world, existing GS-based methods only limitedly use a single type of splatting primitive (Gaussian ellipse or Gaussian ellipsoid) to represent object surfaces during their reconstruction. In this paper, we highlight that this can be insufficient for object surfaces to be represented in high quality. Thus, we propose a novel framework that, for the first time, enables Gaussian Splatting to incorporate multiple types of (geometrical) primitives during its surface reconstruction process. Specifically, in our framework, we first propose a compositional splatting strategy, enabling the splatting and rendering of different types of primitives in the Gaussian Splatting pipeline. In addition, we also design our framework with a mixed-primitive-based initialization strategy and a vertex pruning mechanism to further promote its surface representation learning process to be well executed leveraging different types of primitives. Extensive experiments show the efficacy of our framework and its accurate surface reconstruction performance.
null
https://arxiv.org/abs/2507.11321v1
https://arxiv.org/pdf/2507.11321v1.pdf
null
[ "Haoxuan Qu", "Yujun Cai", "Hossein Rahmani", "Ajay Kumar", "Junsong Yuan", "Jun Liu" ]
[ "Representation Learning", "Surface Reconstruction" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Pruning", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Model Compression", "parent": null }, "name": "Pruning", "source_title": "Pruning Filters for Efficient ConvNets", "source_url": "http://arxiv.org/abs/1608.08710v3" } ]
https://paperswithcode.com/paper/p-808-multilingual-speech-enhancement-testing
2507.11306
null
null
P.808 Multilingual Speech Enhancement Testing: Approach and Results of URGENT 2025 Challenge
In speech quality estimation for speech enhancement (SE) systems, subjective listening tests so far are considered as the gold standard. This should be even more true considering the large influx of new generative or hybrid methods into the field, revealing issues of some objective metrics. Efforts such as the Interspeech 2025 URGENT Speech Enhancement Challenge also involving non-English datasets add the aspect of multilinguality to the testing procedure. In this paper, we provide a brief recap of the ITU-T P.808 crowdsourced subjective listening test method. A first novel contribution is our proposed process of localizing both text and audio components of Naderi and Cutler's implementation of crowdsourced subjective absolute category rating (ACR) listening tests involving text-to-speech (TTS). Further, we provide surprising analyses of and insights into URGENT Challenge results, tackling the reliability of (P.808) ACR subjective testing as gold standard in the age of generative AI. Particularly, it seems that for generative SE methods, subjective (ACR MOS) and objective (DNSMOS, NISQA) reference-free metrics should be accompanied by objective phone fidelity metrics to reliably detect hallucinations. Finally, in the accepted version, we will release our localization scripts and methods for easy deployment for new multilingual speech enhancement subjective evaluations according to ITU-T P.808.
null
https://arxiv.org/abs/2507.11306v1
https://arxiv.org/pdf/2507.11306v1.pdf
null
[ "Marvin Sach", "Yihui Fu", "Kohei Saijo", "Wangyou Zhang", "Samuele Cornell", "Robin Scheibler", "Chenda Li", "Anurag Kumar", "Wei Wang", "Yanmin Qian", "Shinji Watanabe", "Tim Fingscheidt" ]
[ "Speech Enhancement", "text-to-speech", "Text to Speech" ]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/opus-a-prompt-intention-framework-for-complex
2507.11288
null
null
Opus: A Prompt Intention Framework for Complex Workflow Generation
This paper introduces the Opus Prompt Intention Framework, designed to improve complex Workflow Generation with instruction-tuned Large Language Models (LLMs). We propose an intermediate Intention Capture layer between user queries and Workflow Generation, implementing the Opus Workflow Intention Framework, which consists of extracting Workflow Signals from user queries, interpreting them into structured Workflow Intention objects, and generating Workflows based on these Intentions. Our results show that this layer enables LLMs to produce logical and meaningful outputs that scale reliably as query complexity increases. On a synthetic benchmark of 1,000 multi-intent query-Workflow(s) pairs, applying the Opus Prompt Intention Framework to Workflow Generation yields consistent improvements in semantic Workflow similarity metrics. In this paper, we introduce the Opus Prompt Intention Framework by applying the concepts of Workflow Signal and Workflow Intention to LLM-driven Workflow Generation. We present a reproducible, customizable LLM-based Intention Capture system to extract Workflow Signals and Workflow Intentions from user queries. Finally, we provide empirical evidence that the proposed system significantly improves Workflow Generation quality compared to direct generation from user queries, particularly in cases of Mixed Intention Elicitation.
null
https://arxiv.org/abs/2507.11288v1
https://arxiv.org/pdf/2507.11288v1.pdf
null
[ "Théo Fagnoni", "Mahsun Altın", "Chia En Chung", "Phillip Kingston", "Alan Tuning", "Dana O. Mohamed", "Inès Adnani" ]
[]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/task-oriented-human-grasp-synthesis-via
2507.11287
null
null
Task-Oriented Human Grasp Synthesis via Context- and Task-Aware Diffusers
In this paper, we study task-oriented human grasp synthesis, a new grasp synthesis task that demands both task and context awareness. At the core of our method is the task-aware contact maps. Unlike traditional contact maps that only reason about the manipulated object and its relation with the hand, our enhanced maps take into account scene and task information. This comprehensive map is critical for hand-object interaction, enabling accurate grasping poses that align with the task. We propose a two-stage pipeline that first constructs a task-aware contact map informed by the scene and task. In the subsequent stage, we use this contact map to synthesize task-oriented human grasps. We introduce a new dataset and a metric for the proposed task to evaluate our approach. Our experiments validate the importance of modeling both scene and task, demonstrating significant improvements over existing methods in both grasp quality and task performance. See our project page for more details: https://hcis-lab.github.io/TOHGS/
null
https://arxiv.org/abs/2507.11287v1
https://arxiv.org/pdf/2507.11287v1.pdf
null
[ "An-Lun Liu", "Yu-Wei Chao", "Yi-Ting Chen" ]
[]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "In the ALIGN method, visual and language representations are jointly trained from noisy image alt-text data. The image and text encoders are learned via contrastive loss (formulated as normalized softmax) that pushes the embeddings of the matched image-text pair together and pushing those of non-matched image-text pair apart. The model learns to align visual and language representations of the image and text pairs using the contrastive loss. The representations can be used for vision-only or vision-language task transfer. Without any fine-tuning, ALIGN powers zero-shot visual classification and cross-modal search including image-to-text search, text-to image search and even search with joint image+text queries.", "full_name": "ALIGN", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "Involves models that adapt pre-training to the field of Vision-and-Language (V-L) learning and improve the performance on downstream tasks like visual question answering and visual captioning.\r\n\r\nAccording to [Du et al. (2022)](https://arxiv.org/pdf/2202.10936.pdf), information coming from the different modalities can be encoded in three ways: fusion encoder, dual encoder, and a combination of both. \r\n\r\nReferences:\r\n\r\n- [A Survey of Vision-Language Pre-Trained Models](https://arxiv.org/pdf/2202.10936.pdf)\r\n- [Vision Language models: towards multi-modal deep learning](https://theaisummer.com/vision-language-models/)", "name": "Vision and Language Pre-Trained Models", "parent": null }, "name": "ALIGN", "source_title": "Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision", "source_url": "https://arxiv.org/abs/2102.05918v2" } ]
https://paperswithcode.com/paper/tomato-multi-angle-multi-pose-dataset-for
2507.11279
null
null
Tomato Multi-Angle Multi-Pose Dataset for Fine-Grained Phenotyping
Observer bias and inconsistencies in traditional plant phenotyping methods limit the accuracy and reproducibility of fine-grained plant analysis. To overcome these challenges, we developed TomatoMAP, a comprehensive dataset for Solanum lycopersicum using an Internet of Things (IoT) based imaging system with standardized data acquisition protocols. Our dataset contains 64,464 RGB images that capture 12 different plant poses from four camera elevation angles. Each image includes manually annotated bounding boxes for seven regions of interest (ROIs), including leaves, panicle, batch of flowers, batch of fruits, axillary shoot, shoot and whole plant area, along with 50 fine-grained growth stage classifications based on the BBCH scale. Additionally, we provide 3,616 high-resolution image subset with pixel-wise semantic and instance segmentation annotations for fine-grained phenotyping. We validated our dataset using a cascading model deep learning framework combining MobileNetv3 for classification, YOLOv11 for object detection, and MaskRCNN for segmentation. Through AI vs. Human analysis involving five domain experts, we demonstrate that the models trained on our dataset achieve accuracy and speed comparable to the experts. Cohen's Kappa and inter-rater agreement heatmap confirm the reliability of automated fine-grained phenotyping using our approach.
null
https://arxiv.org/abs/2507.11279v1
https://arxiv.org/pdf/2507.11279v1.pdf
null
[ "Yujie Zhang", "Sabine Struckmeyer", "Andreas Kolb", "Sven Reichardt" ]
[ "Instance Segmentation", "object-detection", "Object Detection", "Plant Phenotyping", "Semantic Segmentation" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/lorenzopapa5/SPEED", "description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.", "full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings", "introduced_year": 2000, "main_collection": null, "name": "SPEED", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "", "full_name": "Heatmap", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.", "name": "Output Functions", "parent": null }, "name": "Heatmap", "source_title": "Joint Training of a Convolutional Network and a Graphical Model for Human Pose Estimation", "source_url": "http://arxiv.org/abs/1406.2984v2" } ]
https://paperswithcode.com/paper/fast-last-iterate-convergence-of-sgd-in-the
2507.11274
null
null
Fast Last-Iterate Convergence of SGD in the Smooth Interpolation Regime
We study population convergence guarantees of stochastic gradient descent (SGD) for smooth convex objectives in the interpolation regime, where the noise at optimum is zero or near zero. The behavior of the last iterate of SGD in this setting -- particularly with large (constant) stepsizes -- has received growing attention in recent years due to implications for the training of over-parameterized models, as well as to analyzing forgetting in continual learning and to understanding the convergence of the randomized Kaczmarz method for solving linear systems. We establish that after $T$ steps of SGD on $\beta$-smooth convex loss functions with stepsize $\eta \leq 1/\beta$, the last iterate exhibits expected excess risk $\widetilde{O}(1/(\eta T^{1-\beta\eta/2}) + \eta T^{\beta\eta/2} \sigma_\star^2)$, where $\sigma_\star^2$ denotes the variance of the stochastic gradients at the optimum. In particular, for a well-tuned stepsize we obtain a near optimal $\widetilde{O}(1/T + \sigma_\star/\sqrt{T})$ rate for the last iterate, extending the results of Varre et al. (2021) beyond least squares regression; and when $\sigma_\star=0$ we obtain a rate of $O(1/\sqrt{T})$ with $\eta=1/\beta$, improving upon the best-known $O(T^{-1/4})$ rate recently established by Evron et al. (2025) in the special case of realizable linear regression.
null
https://arxiv.org/abs/2507.11274v1
https://arxiv.org/pdf/2507.11274v1.pdf
null
[ "Amit Attia", "Matan Schliserman", "Uri Sherman", "Tomer Koren" ]
[ "Continual Learning" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/pytorch/pytorch/blob/4e0ac120e9a8b096069c2f892488d630a5c8f358/torch/optim/sgd.py#L97-L112", "description": "**Stochastic Gradient Descent** is an iterative optimization technique that uses minibatches of data to form an expectation of the gradient, rather than the full gradient using all available data. That is for weights $w$ and a loss function $L$ we have:\r\n\r\n$$ w\\_{t+1} = w\\_{t} - \\eta\\hat{\\nabla}\\_{w}{L(w\\_{t})} $$\r\n\r\nWhere $\\eta$ is a learning rate. SGD reduces redundancy compared to batch gradient descent - which recomputes gradients for similar examples before each parameter update - so it is usually much faster.\r\n\r\n(Image Source: [here](http://rasbt.github.io/mlxtend/user_guide/general_concepts/gradient-optimization/))", "full_name": "Stochastic Gradient Descent", "introduced_year": 1951, "main_collection": { "area": "General", "description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.", "name": "Stochastic Optimization", "parent": "Optimization" }, "name": "SGD", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/characonsist-fine-grained-consistent
2507.11533
null
null
CharaConsist: Fine-Grained Consistent Character Generation
In text-to-image generation, producing a series of consistent contents that preserve the same identity is highly valuable for real-world applications. Although a few works have explored training-free methods to enhance the consistency of generated subjects, we observe that they suffer from the following problems. First, they fail to maintain consistent background details, which limits their applicability. Furthermore, when the foreground character undergoes large motion variations, inconsistencies in identity and clothing details become evident. To address these problems, we propose CharaConsist, which employs point-tracking attention and adaptive token merge along with decoupled control of the foreground and background. CharaConsist enables fine-grained consistency for both foreground and background, supporting the generation of one character in continuous shots within a fixed scene or in discrete shots across different scenes. Moreover, CharaConsist is the first consistent generation method tailored for the text-to-image DiT model. Its ability to maintain fine-grained consistency, combined with the larger capacity of the latest base model, enables it to produce high-quality visual outputs, broadening its applicability to a wider range of real-world scenarios. The source code has been released at https://github.com/Murray-Wang/CharaConsist
null
https://arxiv.org/abs/2507.11533v1
https://arxiv.org/pdf/2507.11533v1.pdf
null
[ "Mengyu Wang", "Henghui Ding", "Jianing Peng", "Yao Zhao", "Yunpeng Chen", "Yunchao Wei" ]
[ "Consistent Character Generation", "Image Generation", "Point Tracking", "Text to Image Generation", "Text-to-Image Generation" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Balanced Selection", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Active Learning", "parent": null }, "name": "BASE", "source_title": "Active Learning at the ImageNet Scale", "source_url": "https://arxiv.org/abs/2111.12880v1" } ]
https://paperswithcode.com/paper/catvis-context-aware-thought-visualization
2507.11522
null
null
CATVis: Context-Aware Thought Visualization
EEG-based brain-computer interfaces (BCIs) have shown promise in various applications, such as motor imagery and cognitive state monitoring. However, decoding visual representations from EEG signals remains a significant challenge due to their complex and noisy nature. We thus propose a novel 5-stage framework for decoding visual representations from EEG signals: (1) an EEG encoder for concept classification, (2) cross-modal alignment of EEG and text embeddings in CLIP feature space, (3) caption refinement via re-ranking, (4) weighted interpolation of concept and caption embeddings for richer semantics, and (5) image generation using a pre-trained Stable Diffusion model. We enable context-aware EEG-to-image generation through cross-modal alignment and re-ranking. Experimental results demonstrate that our method generates high-quality images aligned with visual stimuli, outperforming SOTA approaches by 13.43% in Classification Accuracy, 15.21% in Generation Accuracy and reducing Fréchet Inception Distance by 36.61%, indicating superior semantic alignment and image quality.
null
https://arxiv.org/abs/2507.11522v1
https://arxiv.org/pdf/2507.11522v1.pdf
null
[ "Tariq Mehmood", "Hamza Ahmad", "Muhammad Haroon Shakeel", "Murtaza Taj" ]
[ "cross-modal alignment", "EEG", "Image Generation", "Motor Imagery", "Re-Ranking" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/OpenAI/CLIP", "description": "**Contrastive Language-Image Pre-training** (**CLIP**), consisting of a simplified version of ConVIRT trained from scratch, is an efficient method of image representation learning from natural language supervision. , CLIP jointly trains an image encoder and a text encoder to predict the correct pairings of a batch of (image, text) training examples. At test time the learned text encoder synthesizes a zero-shot linear classifier by embedding the names or descriptions of the target dataset’s classes. \r\n\r\nFor pre-training, CLIP is trained to predict which of the $N X N$ possible (image, text) pairings across a batch actually occurred. CLIP learns a multi-modal embedding space by jointly training an image encoder and text encoder to maximize the cosine similarity of the image and text embeddings of the $N$ real pairs in the batch while minimizing the cosine similarity of the embeddings of the $N^2 - N$ incorrect pairings. A symmetric cross entropy loss is optimized over these similarity scores. \r\n\r\nImage credit: [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/pdf/2103.00020.pdf)", "full_name": "Contrastive Language-Image Pre-training", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Representations", "parent": null }, "name": "CLIP", "source_title": "Learning Transferable Visual Models From Natural Language Supervision", "source_url": "https://arxiv.org/abs/2103.00020v1" }, { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
https://paperswithcode.com/paper/exploring-the-robustness-of-tractoracle
2507.11486
null
null
Exploring the robustness of TractOracle methods in RL-based tractography
Tractography algorithms leverage diffusion MRI to reconstruct the fibrous architecture of the brain's white matter. Among machine learning approaches, reinforcement learning (RL) has emerged as a promising framework for tractography, outperforming traditional methods in several key aspects. TractOracle-RL, a recent RL-based approach, reduces false positives by incorporating anatomical priors into the training process via a reward-based mechanism. In this paper, we investigate four extensions of the original TractOracle-RL framework by integrating recent advances in RL, and we evaluate their performance across five diverse diffusion MRI datasets. Results demonstrate that combining an oracle with the RL framework consistently leads to robust and reliable tractography, regardless of the specific method or dataset used. We also introduce a novel RL training scheme called Iterative Reward Training (IRT), inspired by the Reinforcement Learning from Human Feedback (RLHF) paradigm. Instead of relying on human input, IRT leverages bundle filtering methods to iteratively refine the oracle's guidance throughout training. Experimental results show that RL methods trained with oracle feedback significantly outperform widely used tractography techniques in terms of accuracy and anatomical validity.
null
https://arxiv.org/abs/2507.11486v1
https://arxiv.org/pdf/2507.11486v1.pdf
null
[ "Jeremi Levesque", "Antoine Théberge", "Maxime Descoteaux", "Pierre-Marc Jodoin" ]
[ "Diffusion MRI", "reinforcement-learning", "Reinforcement Learning", "Reinforcement Learning (RL)" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
https://paperswithcode.com/paper/3c-fbi-a-combinatorial-method-using
2507.11476
null
null
3C-FBI: A Combinatorial method using Convolutions for Circle Fitting in Blurry Images
This paper addresses the fundamental computer vision challenge of robust circle detection and fitting in degraded imaging conditions. We present Combinatorial Convolution-based Circle Fitting for Blurry Images (3C-FBI), an algorithm that bridges the gap between circle detection and precise parametric fitting by combining (1) efficient combinatorial edge pixel (edgel) sampling and (2) convolution-based density estimation in parameter space. We evaluate 3C-FBI across three experimental frameworks: (1) real-world medical data from Parkinson's disease assessments (144 frames from 36 videos), (2) controlled synthetic data following established circle-fitting benchmarks, and (3) systematic analysis across varying spatial resolutions and outlier contamination levels. Results show that 3C-FBI achieves state-of-the-art accuracy (Jaccard index 0.896) while maintaining real-time performance (40.3 fps), significantly outperforming classical methods like RCD (6.8 fps) on a standard CPU (i7-10875H). It maintains near-perfect accuracy (Jaccard almost 1.0) at high resolutions (480x480) and reliable performance (Jaccard higher than 0.95) down to 160x160 with up to 20% outliers. In extensive synthetic testing, 3C-FBI achieves a mean Jaccard Index of 0.989 across contamination levels, comparable to modern methods like Qi et al. (2024, 0.991), and surpassing RHT (0.964). This combination of accuracy, speed, and robustness makes 3C-FBI ideal for medical imaging, robotics, and industrial inspection under challenging conditions.
null
https://arxiv.org/abs/2507.11476v1
https://arxiv.org/pdf/2507.11476v1.pdf
null
[ "Esteban Román Catafau", "Torbjörn E. M. Nordling" ]
[ "CPU", "Density Estimation" ]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/hug-vas-a-hierarchical-nurbs-based-generative
2507.11474
null
null
HUG-VAS: A Hierarchical NURBS-Based Generative Model for Aortic Geometry Synthesis and Controllable Editing
Accurate characterization of vascular geometry is essential for cardiovascular diagnosis and treatment planning. Traditional statistical shape modeling (SSM) methods rely on linear assumptions, limiting their expressivity and scalability to complex topologies such as multi-branch vascular structures. We introduce HUG-VAS, a Hierarchical NURBS Generative model for Vascular geometry Synthesis, which integrates NURBS surface parameterization with diffusion-based generative modeling to synthesize realistic, fine-grained aortic geometries. Trained with 21 patient-specific samples, HUG-VAS generates anatomically faithful aortas with supra-aortic branches, yielding biomarker distributions that closely match those of the original dataset. HUG-VAS adopts a hierarchical architecture comprising a denoising diffusion model that generates centerlines and a guided diffusion model that synthesizes radial profiles conditioned on those centerlines, thereby capturing two layers of anatomical variability. Critically, the framework supports zero-shot conditional generation from image-derived priors, enabling practical applications such as interactive semi-automatic segmentation, robust reconstruction under degraded imaging conditions, and implantable device optimization. To our knowledge, HUG-VAS is the first SSM framework to bridge image-derived priors with generative shape modeling via a unified integration of NURBS parameterization and hierarchical diffusion processes.
null
https://arxiv.org/abs/2507.11474v1
https://arxiv.org/pdf/2507.11474v1.pdf
null
[ "Pan Du", "Mingqi Xu", "Xiaozhi Zhu", "Jian-Xun Wang" ]
[ "Denoising" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
https://paperswithcode.com/paper/elevating-3d-models-high-quality-texture-and
2507.11465
null
null
Elevating 3D Models: High-Quality Texture and Geometry Refinement from a Low-Quality Model
High-quality 3D assets are essential for various applications in computer graphics and 3D vision but remain scarce due to significant acquisition costs. To address this shortage, we introduce Elevate3D, a novel framework that transforms readily accessible low-quality 3D assets into higher quality. At the core of Elevate3D is HFS-SDEdit, a specialized texture enhancement method that significantly improves texture quality while preserving the original appearance and geometry and fixing its degradations. Furthermore, Elevate3D operates in a view-by-view manner, alternating between texture and geometry refinement. Unlike previous methods that have largely overlooked geometry refinement, our framework leverages geometric cues from images refined with HFS-SDEdit by employing state-of-the-art monocular geometry predictors. This approach ensures detailed and accurate geometry that aligns seamlessly with the enhanced texture. Elevate3D outperforms recent competitors by achieving state-of-the-art quality in 3D model refinement, effectively addressing the scarcity of high-quality open-source 3D assets.
null
https://arxiv.org/abs/2507.11465v1
https://arxiv.org/pdf/2507.11465v1.pdf
null
[ "Nuri Ryu", "Jiyun Won", "Jooeun Son", "Minsu Gong", "Joo-Haeng Lee", "Sunghyun Cho" ]
[]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/deep-equilibrium-models-for-poisson-imaging
2507.11461
null
null
Deep Equilibrium models for Poisson Imaging Inverse problems via Mirror Descent
Deep Equilibrium Models (DEQs) are implicit neural networks with fixed points, which have recently gained attention for learning image regularization functionals, particularly in settings involving Gaussian fidelities, where assumptions on the forward operator ensure contractiveness of standard (proximal) Gradient Descent operators. In this work, we extend the application of DEQs to Poisson inverse problems, where the data fidelity term is more appropriately modeled by the Kullback-Leibler divergence. To this end, we introduce a novel DEQ formulation based on Mirror Descent defined in terms of a tailored non-Euclidean geometry that naturally adapts with the structure of the data term. This enables the learning of neural regularizers within a principled training framework. We derive sufficient conditions to guarantee the convergence of the learned reconstruction scheme and propose computational strategies that enable both efficient training and fully parameter-free inference. Numerical experiments show that our method outperforms traditional model-based approaches and it is comparable to the performance of Bregman Plug-and-Play methods, while mitigating their typical drawbacks - namely, sensitivity to initialization and careful tuning of hyperparameters. The code is publicly available at https://github.com/christiandaniele/DEQ-MD.
null
https://arxiv.org/abs/2507.11461v1
https://arxiv.org/pdf/2507.11461v1.pdf
null
[ "Christian Daniele", "Silvia Villa", "Samuel Vaiter", "Luca Calatroni" ]
[]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "A new kind of implicit models, where the output of the network is defined as the solution to an \"infinite-level\" fixed point equation. Thanks to this we can compute the gradient of the output without activations and therefore with a significantly reduced memory footprint.", "full_name": "Deep Equilibrium Models", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Robust Training", "parent": null }, "name": "DEQ", "source_title": "Deep Equilibrium Models", "source_url": "https://arxiv.org/abs/1909.01377v2" } ]
https://paperswithcode.com/paper/lrmr-llm-driven-relational-multi-node-ranking
2507.11457
null
null
LRMR: LLM-Driven Relational Multi-node Ranking for Lymph Node Metastasis Assessment in Rectal Cancer
Accurate preoperative assessment of lymph node (LN) metastasis in rectal cancer guides treatment decisions, yet conventional MRI evaluation based on morphological criteria shows limited diagnostic performance. While some artificial intelligence models have been developed, they often operate as black boxes, lacking the interpretability needed for clinical trust. Moreover, these models typically evaluate nodes in isolation, overlooking the patient-level context. To address these limitations, we introduce LRMR, an LLM-Driven Relational Multi-node Ranking framework. This approach reframes the diagnostic task from a direct classification problem into a structured reasoning and ranking process. The LRMR framework operates in two stages. First, a multimodal large language model (LLM) analyzes a composite montage image of all LNs from a patient, generating a structured report that details ten distinct radiological features. Second, a text-based LLM performs pairwise comparisons of these reports between different patients, establishing a relative risk ranking based on the severity and number of adverse features. We evaluated our method on a retrospective cohort of 117 rectal cancer patients. LRMR achieved an area under the curve (AUC) of 0.7917 and an F1-score of 0.7200, outperforming a range of deep learning baselines, including ResNet50 (AUC 0.7708). Ablation studies confirmed the value of our two main contributions: removing the relational ranking stage or the structured prompting stage led to a significant performance drop, with AUCs falling to 0.6875 and 0.6458, respectively. Our work demonstrates that decoupling visual perception from cognitive reasoning through a two-stage LLM framework offers a powerful, interpretable, and effective new paradigm for assessing lymph node metastasis in rectal cancer.
null
https://arxiv.org/abs/2507.11457v1
https://arxiv.org/pdf/2507.11457v1.pdf
null
[ "Yaoxian Dong", "Yifan Gao", "Haoyue Li", "Yanfen Cui", "Xin Gao" ]
[ "Diagnostic", "Large Language Model", "Multimodal Large Language Model" ]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/implementing-adaptations-for-vision
2507.11441
null
null
Implementing Adaptations for Vision AutoRegressive Model
Vision AutoRegressive model (VAR) was recently introduced as an alternative to Diffusion Models (DMs) in image generation domain. In this work we focus on its adaptations, which aim to fine-tune pre-trained models to perform specific downstream tasks, like medical data generation. While for DMs there exist many techniques, adaptations for VAR remain underexplored. Similarly, differentially private (DP) adaptations-ones that aim to preserve privacy of the adaptation data-have been extensively studied for DMs, while VAR lacks such solutions. In our work, we implement and benchmark many strategies for VAR, and compare them to state-of-the-art DM adaptation strategies. We observe that VAR outperforms DMs for non-DP adaptations, however, the performance of DP suffers, which necessitates further research in private adaptations for VAR. Code is available at https://github.com/sprintml/finetuning_var_dp.
null
https://arxiv.org/abs/2507.11441v1
https://arxiv.org/pdf/2507.11441v1.pdf
null
[ "Kaif Shaikh", "Antoni Kowalczuk", "Franziska Boenisch", "Adam Dziedzic" ]
[ "Image Generation", "model" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" }, { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/toward-improving-fnirs-classification-a-study
2507.11436
null
null
Toward Improving fNIRS Classification: A Study on Activation Functions in Deep Neural Architectures
Activation functions are critical to the performance of deep neural networks, particularly in domains such as functional near-infrared spectroscopy (fNIRS), where nonlinearity, low signal-to-noise ratio (SNR), and signal variability pose significant challenges to model accuracy. However, the impact of activation functions on deep learning (DL) performance in the fNIRS domain remains underexplored and lacks systematic investigation in the current literature. This study evaluates a range of conventional and field-specific activation functions for fNIRS classification tasks using multiple deep learning architectures, including the domain-specific fNIRSNet, AbsoluteNet, MDNN, and shallowConvNet (as the baseline), all tested on a single dataset recorded during an auditory task. To ensure a fair comparison, all networks were trained and tested using standardized preprocessing and consistent training parameters. The results show that symmetrical activation functions such as Tanh and the absolute value function Abs(x) can outperform commonly used functions like the Rectified Linear Unit (ReLU), depending on the architecture. Additionally, a focused analysis of the role of symmetry was conducted using a Modified Absolute Function (MAF), with results further supporting the effectiveness of symmetrical activation functions on performance gains. These findings underscore the importance of selecting proper activation functions that align with the signal characteristics of fNIRS data.
null
https://arxiv.org/abs/2507.11436v1
https://arxiv.org/pdf/2507.11436v1.pdf
null
[ "Behtom Adeli", "John Mclinden", "Pankaj Pandey", "Ming Shao", "Yalda Shahriari" ]
[]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "In the ALIGN method, visual and language representations are jointly trained from noisy image alt-text data. The image and text encoders are learned via contrastive loss (formulated as normalized softmax) that pushes the embeddings of the matched image-text pair together and pushing those of non-matched image-text pair apart. The model learns to align visual and language representations of the image and text pairs using the contrastive loss. The representations can be used for vision-only or vision-language task transfer. Without any fine-tuning, ALIGN powers zero-shot visual classification and cross-modal search including image-to-text search, text-to image search and even search with joint image+text queries.", "full_name": "ALIGN", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "Involves models that adapt pre-training to the field of Vision-and-Language (V-L) learning and improve the performance on downstream tasks like visual question answering and visual captioning.\r\n\r\nAccording to [Du et al. (2022)](https://arxiv.org/pdf/2202.10936.pdf), information coming from the different modalities can be encoded in three ways: fusion encoder, dual encoder, and a combination of both. \r\n\r\nReferences:\r\n\r\n- [A Survey of Vision-Language Pre-Trained Models](https://arxiv.org/pdf/2202.10936.pdf)\r\n- [Vision Language models: towards multi-modal deep learning](https://theaisummer.com/vision-language-models/)", "name": "Vision and Language Pre-Trained Models", "parent": null }, "name": "ALIGN", "source_title": "Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision", "source_url": "https://arxiv.org/abs/2102.05918v2" } ]
https://paperswithcode.com/paper/u-rwkv-lightweight-medical-image-segmentation
2507.11415
null
null
U-RWKV: Lightweight medical image segmentation with direction-adaptive RWKV
Achieving equity in healthcare accessibility requires lightweight yet high-performance solutions for medical image segmentation, particularly in resource-limited settings. Existing methods like U-Net and its variants often suffer from limited global Effective Receptive Fields (ERFs), hindering their ability to capture long-range dependencies. To address this, we propose U-RWKV, a novel framework leveraging the Recurrent Weighted Key-Value (RWKV) architecture, which achieves efficient long-range modeling at O(N) computational cost. The framework introduces two key innovations: the Direction-Adaptive RWKV Module (DARM) and the Stage-Adaptive Squeeze-and-Excitation Module (SASE). DARM employs Dual-RWKV and QuadScan mechanisms to aggregate contextual cues across images, mitigating directional bias while preserving global context and maintaining high computational efficiency. SASE dynamically adapts its architecture to different feature extraction stages, balancing high-resolution detail preservation and semantic relationship capture. Experiments demonstrate that U-RWKV achieves state-of-the-art segmentation performance with high computational efficiency, offering a practical solution for democratizing advanced medical imaging technologies in resource-constrained environments. The code is available at https://github.com/hbyecoding/U-RWKV.
null
https://arxiv.org/abs/2507.11415v1
https://arxiv.org/pdf/2507.11415v1.pdf
null
[ "Hongbo Ye", "Fenghe Tang", "Peiang Zhao", "Zhen Huang", "Dexin Zhao", "Minghao Bian", "S. Kevin Zhou" ]
[ "Computational Efficiency", "Image Segmentation", "Long-range modeling", "Medical Image Segmentation", "Semantic Segmentation" ]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/stochastic-entanglement-configuration-for
2507.11401
null
null
Stochastic Entanglement Configuration for Constructive Entanglement Topologies in Quantum Machine Learning with Application to Cardiac MRI
Efficient entanglement strategies are essential for advancing variational quantum circuits (VQCs) for quantum machine learning (QML). However, most current approaches use fixed entanglement topologies that are not adaptive to task requirements, limiting potential gains over classical models. We introduce a novel stochastic entanglement configuration method that systematically generates diverse entanglement topologies to identify a subspace of constructive entanglement configurations, defined as entanglement topologies that boost hybrid model performance (e.g., classification accuracy) beyond classical baselines. Each configuration is encoded as a stochastic binary matrix, denoting directed entanglement between qubits. This enables scalable exploration of the hyperspace of candidate entanglement topologies using entanglement density and per-qubit constraints as key metrics. We define unconstrained and constrained sampling modes, controlling entanglement per qubit. Using our method, 400 stochastic configurations were generated and evaluated in a hybrid QML for cardiac MRI disease classification. We identified 64 (16%) novel constructive entanglement configurations that consistently outperformed the classical baseline. Ensemble aggregation of top-performing configurations achieved ~0.92 classification accuracy, exceeding the classical model (~0.87) by over 5%. Compared to four conventional topologies (ring, nearest neighbor, no entanglement, fully entangled), none surpassed the classical baseline (maximum accuracy ~0.82), while our configurations delivered up to ~20% higher accuracy. Thus, highlighting the robustness and generalizability of the identified constructive entanglements.
null
https://arxiv.org/abs/2507.11401v1
https://arxiv.org/pdf/2507.11401v1.pdf
null
[ "Mehri Mehrnia", "Mohammed S. M. Elbaz" ]
[ "Quantum Machine Learning" ]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/the-model-is-the-message-lightweight
2507.11400
null
null
The model is the message: Lightweight convolutional autoencoders applied to noisy imaging data for planetary science and astrobiology
The application of convolutional autoencoder deep learning to imaging data for planetary science and astrobiological use is briefly reviewed and explored with a focus on the need to understand algorithmic rationale, process, and results when machine learning is utilized. Successful autoencoders train to build a model that captures the features of data in a dimensionally reduced form (the latent representation) that can then be used to recreate the original input. One application is the reconstruction of incomplete or noisy data. Here a baseline, lightweight convolutional autoencoder is used to examine the utility for planetary image reconstruction or inpainting in situations where there is destructive random noise (i.e., either luminance noise with zero returned data in some image pixels, or color noise with random additive levels across pixel channels). It is shown that, in certain use cases, multi-color image reconstruction can be usefully applied even with extensive random destructive noise with 90% areal coverage and higher. This capability is discussed in the context of intentional masking to reduce data bandwidth, or situations with low-illumination levels and other factors that obscure image data (e.g., sensor degradation or atmospheric conditions). It is further suggested that for some scientific use cases the model latent space and representations have more utility than large raw imaging datasets.
null
https://arxiv.org/abs/2507.11400v1
https://arxiv.org/pdf/2507.11400v1.pdf
null
[ "Caleb Scharf" ]
[ "Image Reconstruction" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "Train a convolutional neural network to generate the contents of an arbitrary image region conditioned on its surroundings.", "full_name": "Inpainting", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Self-Supervised Learning** refers to a category of methods where we learn representations in a self-supervised way (i.e without labels). These methods generally involve a pretext task that is solved to learn a good representation and a loss function to learn with. Below you can find a continuously updating list of self-supervised methods.", "name": "Self-Supervised Learning", "parent": null }, "name": "Inpainting", "source_title": "Context Encoders: Feature Learning by Inpainting", "source_url": "http://arxiv.org/abs/1604.07379v2" }, { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/attributes-shape-the-embedding-space-of-face
2507.11372
null
null
Attributes Shape the Embedding Space of Face Recognition Models
Face Recognition (FR) tasks have made significant progress with the advent of Deep Neural Networks, particularly through margin-based triplet losses that embed facial images into high-dimensional feature spaces. During training, these contrastive losses focus exclusively on identity information as labels. However, we observe a multiscale geometric structure emerging in the embedding space, influenced by interpretable facial (e.g., hair color) and image attributes (e.g., contrast). We propose a geometric approach to describe the dependence or invariance of FR models to these attributes and introduce a physics-inspired alignment metric. We evaluate the proposed metric on controlled, simplified models and widely used FR models fine-tuned with synthetic data for targeted attribute augmentation. Our findings reveal that the models exhibit varying degrees of invariance across different attributes, providing insight into their strengths and weaknesses and enabling deeper interpretability. Code available here: https://github.com/mantonios107/attrs-fr-embs
null
https://arxiv.org/abs/2507.11372v1
https://arxiv.org/pdf/2507.11372v1.pdf
null
[ "Pierrick Leroy", "Antonio Mastropietro", "Marco Nurisso", "Francesco Vaccarino" ]
[ "Attribute", "Face Recognition", "Triplet" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/monomvsnet-monocular-priors-guided-multi-view
2507.11333
null
null
MonoMVSNet: Monocular Priors Guided Multi-View Stereo Network
Learning-based Multi-View Stereo (MVS) methods aim to predict depth maps for a sequence of calibrated images to recover dense point clouds. However, existing MVS methods often struggle with challenging regions, such as textureless regions and reflective surfaces, where feature matching fails. In contrast, monocular depth estimation inherently does not require feature matching, allowing it to achieve robust relative depth estimation in these regions. To bridge this gap, we propose MonoMVSNet, a novel monocular feature and depth guided MVS network that integrates powerful priors from a monocular foundation model into multi-view geometry. Firstly, the monocular feature of the reference view is integrated into source view features by the attention mechanism with a newly designed cross-view position encoding. Then, the monocular depth of the reference view is aligned to dynamically update the depth candidates for edge regions during the sampling procedure. Finally, a relative consistency loss is further designed based on the monocular depth to supervise the depth prediction. Extensive experiments demonstrate that MonoMVSNet achieves state-of-the-art performance on the DTU and Tanks-and-Temples datasets, ranking first on the Tanks-and-Temples Intermediate and Advanced benchmarks. The source code is available at https://github.com/JianfeiJ/MonoMVSNet.
null
https://arxiv.org/abs/2507.11333v1
https://arxiv.org/pdf/2507.11333v1.pdf
null
[ "Jianfei Jiang", "Qiankun Liu", "Haochen Yu", "Hongyuan Liu", "Liyong Wang", "Jiansheng Chen", "Huimin Ma" ]
[ "Depth Estimation", "Depth Prediction", "Monocular Depth Estimation" ]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/quantitative-multi-metabolite-imaging-of
2507.11329
null
null
Quantitative multi-metabolite imaging of Parkinson's disease using AI boosted molecular MRI
Traditional approaches for molecular imaging of Parkinson's disease (PD) in vivo require radioactive isotopes, lengthy scan times, or deliver only low spatial resolution. Recent advances in saturation transfer-based PD magnetic resonance imaging (MRI) have provided biochemical insights, although the image contrast is semi-quantitative and nonspecific. Here, we combined a rapid molecular MRI acquisition paradigm with deep learning based reconstruction for multi-metabolite quantification of glutamate, mobile proteins, semisolid, and mobile macromolecules in an acute MPTP (1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine) mouse model. The quantitative parameter maps are in general agreement with the histology and MR spectroscopy, and demonstrate that semisolid magnetization transfer (MT), amide, and aliphatic relayed nuclear Overhauser effect (rNOE) proton volume fractions may serve as PD biomarkers.
null
https://arxiv.org/abs/2507.11329v1
https://arxiv.org/pdf/2507.11329v1.pdf
null
[ "Hagar Shmuely", "Michal Rivlin", "Or Perlman" ]
[]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/hans-net-hyperbolic-convolution-and-adaptive
2507.11325
null
null
HANS-Net: Hyperbolic Convolution and Adaptive Temporal Attention for Accurate and Generalizable Liver and Tumor Segmentation in CT Imaging
Accurate liver and tumor segmentation on abdominal CT images is critical for reliable diagnosis and treatment planning, but remains challenging due to complex anatomical structures, variability in tumor appearance, and limited annotated data. To address these issues, we introduce Hyperbolic-convolutions Adaptive-temporal-attention with Neural-representation and Synaptic-plasticity Network (HANS-Net), a novel segmentation framework that synergistically combines hyperbolic convolutions for hierarchical geometric representation, a wavelet-inspired decomposition module for multi-scale texture learning, a biologically motivated synaptic plasticity mechanism for adaptive feature enhancement, and an implicit neural representation branch to model fine-grained and continuous anatomical boundaries. Additionally, we incorporate uncertainty-aware Monte Carlo dropout to quantify prediction confidence and lightweight temporal attention to improve inter-slice consistency without sacrificing efficiency. Extensive evaluations on the LiTS dataset demonstrate that HANS-Net achieves a mean Dice score of 93.26%, an IoU of 88.09%, an average symmetric surface distance (ASSD) of 0.72 mm, and a volume overlap error (VOE) of 11.91%. Furthermore, cross-dataset validation on the 3D-IRCADb-01 dataset obtains an average Dice of 87.45%, IoU of 80.30%, ASSD of 1.525 mm, and VOE of 19.71%, indicating strong generalization across different datasets. These results confirm the effectiveness and robustness of HANS-Net in providing anatomically consistent, accurate, and confident liver and tumor segmentation.
null
https://arxiv.org/abs/2507.11325v1
https://arxiv.org/pdf/2507.11325v1.pdf
null
[ "Arefin Ittesafun Abian", "Ripon Kumar Debnath", "Md. Abdur Rahman", "Mohaimenul Azam Khan Raiaan", "Md Rafiqul Islam", "Asif Karim", "Reem E. Mohamed", "Sami Azam" ]
[ "Tumor Segmentation" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275", "description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.", "full_name": "Dropout", "introduced_year": 2000, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Dropout", "source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "source_url": "http://jmlr.org/papers/v15/srivastava14a.html" }, { "code_snippet_url": null, "description": "", "full_name": "Monte Carlo Dropout", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Interpretability Methods** seek to explain the predictions made by neural networks by introducing mechanisms to enduce or enforce interpretability. For example, LIME approximates the neural network with a locally interpretable model. Below you can find a continuously updating list of interpretability methods.", "name": "Interpretability", "parent": null }, "name": "Monte Carlo Dropout", "source_title": "Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning", "source_url": "http://arxiv.org/abs/1506.02142v6" } ]
https://paperswithcode.com/paper/viewsrd-3d-visual-grounding-via-structured
2507.11261
null
null
ViewSRD: 3D Visual Grounding via Structured Multi-View Decomposition
3D visual grounding aims to identify and localize objects in a 3D space based on textual descriptions. However, existing methods struggle with disentangling targets from anchors in complex multi-anchor queries and resolving inconsistencies in spatial descriptions caused by perspective variations. To tackle these challenges, we propose ViewSRD, a framework that formulates 3D visual grounding as a structured multi-view decomposition process. First, the Simple Relation Decoupling (SRD) module restructures complex multi-anchor queries into a set of targeted single-anchor statements, generating a structured set of perspective-aware descriptions that clarify positional relationships. These decomposed representations serve as the foundation for the Multi-view Textual-Scene Interaction (Multi-TSI) module, which integrates textual and scene features across multiple viewpoints using shared, Cross-modal Consistent View Tokens (CCVTs) to preserve spatial correlations. Finally, a Textual-Scene Reasoning module synthesizes multi-view predictions into a unified and robust 3D visual grounding. Experiments on 3D visual grounding datasets show that ViewSRD significantly outperforms state-of-the-art methods, particularly in complex queries requiring precise spatial differentiation.
null
https://arxiv.org/abs/2507.11261v1
https://arxiv.org/pdf/2507.11261v1.pdf
null
[ "Ronggang Huang", "Haoxin Yang", "Yan Cai", "Xuemiao Xu", "Huaidong Zhang", "Shengfeng He" ]
[ "3D visual grounding", "Visual Grounding" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/streaming-4d-visual-geometry-transformer
2507.11539
null
null
Streaming 4D Visual Geometry Transformer
Perceiving and reconstructing 4D spatial-temporal geometry from videos is a fundamental yet challenging computer vision task. To facilitate interactive and real-time applications, we propose a streaming 4D visual geometry transformer that shares a similar philosophy with autoregressive large language models. We explore a simple and efficient design and employ a causal transformer architecture to process the input sequence in an online manner. We use temporal causal attention and cache the historical keys and values as implicit memory to enable efficient streaming long-term 4D reconstruction. This design can handle real-time 4D reconstruction by incrementally integrating historical information while maintaining high-quality spatial consistency. For efficient training, we propose to distill knowledge from the dense bidirectional visual geometry grounded transformer (VGGT) to our causal model. For inference, our model supports the migration of optimized efficient attention operator (e.g., FlashAttention) from the field of large language models. Extensive experiments on various 4D geometry perception benchmarks demonstrate that our model increases the inference speed in online scenarios while maintaining competitive performance, paving the way for scalable and interactive 4D vision systems. Code is available at: https://github.com/wzzheng/StreamVGGT.
null
https://arxiv.org/abs/2507.11539v1
https://arxiv.org/pdf/2507.11539v1.pdf
null
[ "Dong Zhuo", "Wenzhao Zheng", "Jiahe Guo", "Yuqi Wu", "Jie zhou", "Jiwen Lu" ]
[ "4D reconstruction", "Philosophy" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/lorenzopapa5/SPEED", "description": "The monocular depth estimation (MDE) is the task of estimating depth from a single frame. This information is an essential knowledge in many computer vision tasks such as scene understanding and visual odometry, which are key components in autonomous and robotic systems. \r\nApproaches based on the state of the art vision transformer architectures are extremely deep and complex not suitable for real-time inference operations on edge and autonomous systems equipped with low resources (i.e. robot indoor navigation and surveillance). This paper presents SPEED, a Separable Pyramidal pooling EncodEr-Decoder architecture designed to achieve real-time frequency performances on multiple hardware platforms. The proposed model is a fast-throughput deep architecture for MDE able to obtain depth estimations with high accuracy from low resolution images using minimum hardware resources (i.e. edge devices). Our encoder-decoder model exploits two depthwise separable pyramidal pooling layers, which allow to increase the inference frequency while reducing the overall computational complexity. The proposed method performs better than other fast-throughput architectures in terms of both accuracy and frame rates, achieving real-time performances over cloud CPU, TPU and the NVIDIA Jetson TX1 on two indoor benchmarks: the NYU Depth v2 and the DIML Kinect v2 datasets.", "full_name": "SPEED: Separable Pyramidal Pooling EncodEr-Decoder for Real-Time Monocular Depth Estimation on Low-Resource Settings", "introduced_year": 2000, "main_collection": null, "name": "SPEED", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/colibri-fuzzy-model-color-linguistic-based
2507.11488
null
null
COLIBRI Fuzzy Model: Color Linguistic-Based Representation and Interpretation
Colors are omnipresent in today's world and play a vital role in how humans perceive and interact with their surroundings. However, it is challenging for computers to imitate human color perception. This paper introduces the Human Perception-Based Fuzzy Color Model, COLIBRI (Color Linguistic-Based Representation and Interpretation), designed to bridge the gap between computational color representations and human visual perception. The proposed model uses fuzzy sets and logic to create a framework for color categorization. Using a three-phase experimental approach, the study first identifies distinguishable color stimuli for hue, saturation, and intensity through preliminary experiments, followed by a large-scale human categorization survey involving more than 1000 human subjects. The resulting data are used to extract fuzzy partitions and generate membership functions that reflect real-world perceptual uncertainty. The model incorporates a mechanism for adaptation that allows refinement based on feedback and contextual changes. Comparative evaluations demonstrate that the model aligns more closely with human perception than traditional color models such as RGB, HSV, and LAB. To the best of our knowledge, no previous research has documented the construction of a model for color attribute specification based on a sample of this size or a comparable sample of the human population (n = 2496). Our findings are significant for fields such as design, artificial intelligence, marketing, and human-computer interaction, where perceptually relevant color representation is critical.
null
https://arxiv.org/abs/2507.11488v1
https://arxiv.org/pdf/2507.11488v1.pdf
null
[ "Pakizar Shamoi", "Nuray Toganas", "Muragul Muratbekova", "Elnara Kadyrgali", "Adilet Yerkin", "Ayan Igali", "Malika Ziyada", "Ayana Adilova", "Aron Karatayev", "Yerdauit Torekhan" ]
[ "Attribute", "Marketing" ]
2025-07-15T00:00:00
null
null
null
null
[]
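The COLIBRI abstract above describes hue, saturation, and intensity categories modeled with fuzzy sets whose membership functions are fitted from survey data. The sketch below shows what a triangular hue membership function of that kind can look like; the linguistic terms and breakpoints are illustrative assumptions, not the partitions extracted in the paper.

```python
# Minimal sketch of fuzzy color membership in the spirit of COLIBRI.
# The category names and breakpoints are illustrative assumptions, not the
# membership functions fitted from the paper's survey data.
def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical hue partitions on the 0-360 degree wheel.
HUE_TERMS = {
    "red": (-30, 0, 30),      # wraps around 0 degrees
    "yellow": (10, 60, 90),
    "green": (90, 120, 170),
    "blue": (170, 230, 290),
}

def hue_memberships(hue_deg):
    """Return the membership degree of a hue value in each linguistic term."""
    h = hue_deg % 360
    out = {}
    for term, (a, b, c) in HUE_TERMS.items():
        # Evaluate at h and at h - 360 to handle the wrap-around near red.
        out[term] = max(triangular(h, a, b, c), triangular(h - 360, a, b, c))
    return out

print(hue_memberships(20))  # partial membership in both "red" and "yellow"
```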
https://paperswithcode.com/paper/ugc-videocaptioner-an-omni-ugc-video-detail
2507.11336
null
null
UGC-VideoCaptioner: An Omni UGC Video Detail Caption Model and New Benchmarks
Real-world user-generated videos, especially on platforms like TikTok, often feature rich and intertwined audio visual content. However, existing video captioning benchmarks and models remain predominantly visual-centric, overlooking the crucial role of audio in conveying scene dynamics, speaker intent, and narrative context. This lack of omni datasets and lightweight, capable models hampers progress in fine-grained, multimodal video understanding. To address these challenges, we introduce UGC-VideoCap, a new benchmark and model framework specifically designed for detailed omnimodal captioning of short-form user-generated videos. Unlike prior datasets, UGC-VideoCap emphasizes balanced integration of audio and visual modalities, featuring 1000 TikTok videos annotated through a structured three-stage human-in-the-loop pipeline covering audio only, visual only, and joint audio visual semantics. The benchmark also includes 4000 carefully crafted QA pairs probing both unimodal and cross-modal understanding. Alongside the dataset, we propose UGC-VideoCaptioner(3B), a 3B parameter captioning model distilled from Gemini 2.5 Flash. Using a novel two-stage training strategy, supervised fine-tuning followed by Group Relative Policy Optimization (GRPO), our approach enables efficient adaptation from limited data while maintaining competitive performance. Together, our benchmark and model offer a high-quality foundation and a data-efficient solution for advancing omnimodal video captioning in unconstrained real-world UGC settings.
null
https://arxiv.org/abs/2507.11336v1
https://arxiv.org/pdf/2507.11336v1.pdf
null
[ "Peiran Wu", "Yunze Liu", "Zhengdong Zhu", "Enmin Zhou", "Shawn Shen" ]
[ "Video Captioning", "Video Understanding" ]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/dcr-quantifying-data-contamination-in-llms
2507.11405
null
null
DCR: Quantifying Data Contamination in LLMs Evaluation
The rapid advancement of large language models (LLMs) has heightened concerns about benchmark data contamination (BDC), where models inadvertently memorize evaluation data, inflating performance metrics and undermining genuine generalization assessment. This paper introduces the Data Contamination Risk (DCR) framework, a lightweight, interpretable pipeline designed to detect and quantify BDC across four granular levels: semantic, informational, data, and label. By synthesizing contamination scores via a fuzzy inference system, DCR produces a unified DCR Factor that adjusts raw accuracy to reflect contamination-aware performance. Validated on 9 LLMs (0.5B-72B) across sentiment analysis, fake news detection, and arithmetic reasoning tasks, the DCR framework reliably diagnoses contamination severity, and accuracy adjusted with the DCR Factor stays within 4% average error of the uncontaminated baseline across the three benchmarks. Emphasizing computational efficiency and transparency, DCR provides a practical tool for integrating contamination assessment into routine evaluations, fostering fairer comparisons and enhancing the credibility of LLM benchmarking practices.
The rapid advancement of large language models (LLMs) has heightened concerns about benchmark data contamination (BDC), where models inadvertently memorize evaluation data, inflating performance metrics and undermining genuine generalization assessment.
https://arxiv.org/abs/2507.11405v1
https://arxiv.org/pdf/2507.11405v1.pdf
null
[ "Cheng Xu", "Nan Yan", "Shuhao Guan", "Changhong Jin", "Yuke Mei", "Yibing Guo", "M-Tahar Kechadi" ]
[ "Arithmetic Reasoning", "Benchmarking", "Computational Efficiency", "Fake News Detection", "Sentiment Analysis" ]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/deteccion-y-cuantificacion-de-erosion-fluvial
2507.11301
null
null
Detección y Cuantificación de Erosión Fluvial con Visión Artificial (Detection and Quantification of Fluvial Erosion with Computer Vision)
Fluvial erosion is a natural process that can generate significant impacts on soil stability and strategic infrastructures. The detection and monitoring of this phenomenon are traditionally addressed by photogrammetric methods and analysis in geographic information systems. These tasks require specific knowledge and intensive manual processing. This study proposes an artificial intelligence-based approach for automatic identification of eroded zones and estimation of their area. The state-of-the-art computer vision model YOLOv11, adjusted by fine-tuning and trained with photographs and LiDAR images, is used. This combined dataset was segmented and labeled using the Roboflow platform. Experimental results indicate efficient detection of erosion patterns with an accuracy of 70%, precise identification of eroded areas, and reliable calculation of their extent in pixels and square meters. As a final product, the EROSCAN system has been developed, an interactive web application that allows users to upload images and obtain automatic segmentations of fluvial erosion, together with the estimated area. This tool optimizes the detection and quantification of the phenomenon, facilitating decision making in risk management and territorial planning.
null
https://arxiv.org/abs/2507.11301v1
https://arxiv.org/pdf/2507.11301v1.pdf
null
[ "Paúl Maji", "Marlon Túquerres", "Stalin Valencia", "Marcela Valenzuela", "Christian Mejia-Escobar" ]
[ "Decision Making" ]
2025-07-15T00:00:00
null
null
null
null
[]
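The EROSCAN abstract above reports a fine-tuned YOLOv11 segmentation model whose masks are converted into eroded area in pixels and square meters. A minimal sketch of that conversion with the ultralytics API follows; the weights file name, the test image, and the ground sampling distance are placeholder assumptions rather than artifacts released with the paper.

```python
# Hedged sketch: estimate eroded area from a fine-tuned YOLO segmentation model.
# "erosion_yolo11.pt", "riverbank.jpg", and the ground sampling distance are
# placeholder assumptions for illustration only.
import numpy as np
from ultralytics import YOLO

GSD_M_PER_PX = 0.05                        # assumed ground sampling distance: 5 cm per pixel
model = YOLO("erosion_yolo11.pt")

results = model("riverbank.jpg")           # run segmentation on one image
masks = results[0].masks                   # None if nothing was detected
if masks is not None:
    # Each mask is an HxW binary array; summing pixels approximates the eroded area.
    area_px = float(np.sum(masks.data.cpu().numpy() > 0.5))
    area_m2 = area_px * GSD_M_PER_PX ** 2
    print(f"eroded area: {area_px:.0f} px, roughly {area_m2:.2f} square meters")
```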
https://paperswithcode.com/paper/3d-magnetic-inverse-routine-for-single
2507.11293
null
null
3D Magnetic Inverse Routine for Single-Segment Magnetic Field Images
In semiconductor packaging, accurately recovering 3D information is crucial for non-destructive testing (NDT) to localize circuit defects. This paper presents a novel approach called the 3D Magnetic Inverse Routine (3D MIR), which leverages Magnetic Field Images (MFI) to retrieve the parameters for the 3D current flow of a single-segment. The 3D MIR integrates a deep learning (DL)-based Convolutional Neural Network (CNN), spatial-physics-based constraints, and optimization techniques. The method operates in three stages: i) The CNN model processes the MFI data to predict ($\ell/z_o$), where $\ell$ is the wire length and $z_o$ is the wire's vertical depth beneath the magnetic sensors and classify segment type ($c$). ii) By leveraging spatial-physics-based constraints, the routine provides initial estimates for the position ($x_o$, $y_o$, $z_o$), length ($\ell$), current ($I$), and current flow direction (positive or negative) of the current segment. iii) An optimizer then adjusts these five parameters ($x_o$, $y_o$, $z_o$, $\ell$, $I$) to minimize the difference between the reconstructed MFI and the actual MFI. The results demonstrate that the 3D MIR method accurately recovers 3D information with high precision, setting a new benchmark for magnetic image reconstruction in semiconductor packaging. This method highlights the potential of combining DL and physics-driven optimization in practical applications.
null
https://arxiv.org/abs/2507.11293v1
https://arxiv.org/pdf/2507.11293v1.pdf
null
[ "J. Senthilnath", "Chen Hao", "F. C. Wellstood" ]
[ "Image Reconstruction" ]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/fine-grained-chinese-hate-speech
2507.11292
null
null
Fine-Grained Chinese Hate Speech Understanding: Span-Level Resources, Coded Term Lexicon, and Enhanced Detection Frameworks
The proliferation of hate speech has inflicted significant societal harm, with its intensity and directionality closely tied to specific targets and arguments. In recent years, numerous machine learning-based methods have been developed to detect hateful comments on online platforms automatically. However, research on Chinese hate speech detection lags behind, and interpretability studies face two major challenges: first, the scarcity of span-level fine-grained annotated datasets limits models' deep semantic understanding of hate speech; second, insufficient research on identifying and interpreting coded hate speech restricts model explainability in complex real-world scenarios. To address these, we make the following contributions: (1) We introduce the Span-level Target-Aware Toxicity Extraction dataset (STATE ToxiCN), the first span-level Chinese hate speech dataset, and evaluate the hate semantic understanding of existing models using it. (2) We conduct the first comprehensive study on Chinese coded hate terms and on LLMs' ability to interpret their hate semantics. (3) We propose a method to integrate an annotated lexicon into models, significantly enhancing hate speech detection performance. Our work provides valuable resources and insights to advance the interpretability of Chinese hate speech detection research.
null
https://arxiv.org/abs/2507.11292v1
https://arxiv.org/pdf/2507.11292v1.pdf
null
[ "Zewen Bai", "Liang Yang", "Shengdi Yin", "Yuanyuan Sun", "Hongfei Lin" ]
[ "Hate Speech Detection" ]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/mfgdiffusion-mask-guided-smoke-synthesis-for
2507.11252
null
null
MFGDiffusion: Mask-Guided Smoke Synthesis for Enhanced Forest Fire Detection
Smoke is the first visible indicator of a wildfire. With the advancement of deep learning, image-based smoke detection has become a crucial method for detecting and preventing forest fires. However, the scarcity of smoke image data from forest fires is one of the significant factors hindering the detection of forest fire smoke. Image generation models offer a promising solution for synthesizing realistic smoke images. However, current inpainting models exhibit limitations in generating high-quality smoke representations, particularly manifesting as inconsistencies between synthesized smoke and background contexts. To solve these problems, we proposed a comprehensive framework for generating forest fire smoke images. Firstly, we employed the pre-trained segmentation model and the multimodal model to obtain smoke masks and image captions. Then, to address the insufficient utilization of masks and masked images by inpainting models, we introduced a network architecture guided by mask and masked image features. We also proposed a new loss function, the mask random difference loss, which enhances the consistency of the generated effects around the mask by randomly expanding and eroding the mask edges. Finally, to generate a smoke image dataset using random masks for subsequent detection tasks, we incorporated smoke characteristics and used a multimodal large language model as a filtering tool to select diverse and reasonable smoke images, thereby improving the quality of the synthetic dataset. Experiments showed that our generated smoke images are realistic and diverse, and effectively enhance the performance of forest fire smoke detection models. Code is available at https://github.com/wghr123/MFGDiffusion.
null
https://arxiv.org/abs/2507.11252v1
https://arxiv.org/pdf/2507.11252v1.pdf
null
[ "Guanghao Wu", "Chen Xu", "Hai Song", "Chong Wang", "Qixing Zhang" ]
[ "Fire Detection", "Image Generation", "Large Language Model", "Multimodal Large Language Model" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "Train a convolutional neural network to generate the contents of an arbitrary image region conditioned on its surroundings.", "full_name": "Inpainting", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Self-Supervised Learning** refers to a category of methods where we learn representations in a self-supervised way (i.e without labels). These methods generally involve a pretext task that is solved to learn a good representation and a loss function to learn with. Below you can find a continuously updating list of self-supervised methods.", "name": "Self-Supervised Learning", "parent": null }, "name": "Inpainting", "source_title": "Context Encoders: Feature Learning by Inpainting", "source_url": "http://arxiv.org/abs/1604.07379v2" } ]
https://paperswithcode.com/paper/fairness-aware-grouping-for-continuous
2507.11247
null
null
Fairness-Aware Grouping for Continuous Sensitive Variables: Application for Debiasing Face Analysis with respect to Skin Tone
Within a legal framework, fairness in datasets and models is typically assessed by dividing observations into predefined groups and then computing fairness measures (e.g., Disparate Impact or Equality of Odds with respect to gender). However, when sensitive attributes such as skin color are continuous, dividing into default groups may overlook or obscure the discrimination experienced by certain minority subpopulations. To address this limitation, we propose a fairness-based grouping approach for continuous (possibly multidimensional) sensitive attributes. By grouping data according to observed levels of discrimination, our method identifies the partition that maximizes a novel criterion based on inter-group variance in discrimination, thereby isolating the most critical subgroups. We validate the proposed approach using multiple synthetic datasets and demonstrate its robustness under changing population distributions - revealing how discrimination is manifested within the space of sensitive attributes. Furthermore, we examine a specialized setting of monotonic fairness for the case of skin color. Our empirical results on both CelebA and FFHQ, leveraging the skin tone as predicted by an industrial proprietary algorithm, show that the proposed segmentation uncovers more nuanced patterns of discrimination than previously reported, and that these findings remain stable across datasets for a given model. Finally, we leverage our grouping model for debiasing purpose, aiming at predicting fair scores with group-by-group post-processing. The results demonstrate that our approach improves fairness while having minimal impact on accuracy, thus confirming our partition method and opening the door for industrial deployment.
null
https://arxiv.org/abs/2507.11247v1
https://arxiv.org/pdf/2507.11247v1.pdf
null
[ "Veronika Shilova", "Emmanuel Malherbe", "Giovanni Palma", "Laurent Risser", "Jean-Michel Loubes" ]
[ "Fairness" ]
2025-07-15T00:00:00
null
null
null
null
[]
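The fairness-aware grouping abstract above scores a partition of a continuous sensitive attribute by the inter-group variance of observed discrimination. The toy sketch below illustrates that idea on synthetic data; the positive-rate-gap proxy for discrimination and the equal-width binning are simplifying assumptions, whereas the paper searches for the partition that maximizes the criterion.

```python
# Toy sketch of grouping a continuous sensitive attribute (e.g., skin tone) by
# observed discrimination and scoring a partition by inter-group variance.
# The per-bin "discrimination" proxy (positive-rate gap vs. the overall rate)
# and the equal-width binning are simplifying assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
skin_tone = rng.uniform(0, 1, 5000)                    # continuous sensitive attribute
y_hat = rng.random(5000) < (0.6 - 0.3 * skin_tone)     # synthetic biased decisions

def partition_score(attr, decisions, bin_edges):
    overall = decisions.mean()
    group_disc, group_size = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (attr >= lo) & (attr < hi)
        if mask.sum() == 0:
            continue
        group_disc.append(decisions[mask].mean() - overall)
        group_size.append(mask.sum())
    w = np.array(group_size) / sum(group_size)
    # Size-weighted inter-group variance of the discrimination proxy.
    return float(np.sum(w * (np.array(group_disc) ** 2)))

for k in (2, 4, 8):
    edges = np.linspace(0, 1, k + 1)
    print(k, "groups ->", round(partition_score(skin_tone, y_hat, edges), 5))
```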
https://paperswithcode.com/paper/temperature-and-persona-shape-llm-agent
2507.11198
null
null
Temperature and Persona Shape LLM Agent Consensus With Minimal Accuracy Gains in Qualitative Coding
Large Language Models (LLMs) enable new possibilities for qualitative research at scale, including coding and data annotation. While multi-agent systems (MAS) can emulate human coding workflows, their benefits over single-agent coding remain poorly understood. We conducted an experimental study of how agent persona and temperature shape consensus-building and coding accuracy of dialog segments based on a codebook with 8 codes. Our open-source MAS mirrors deductive human coding through structured agent discussion and consensus arbitration. Using six open-source LLMs (with 3 to 32 billion parameters) and 18 experimental configurations, we analyze over 77,000 coding decisions against a gold-standard dataset of human-annotated transcripts from online math tutoring sessions. Temperature significantly impacted whether and when consensus was reached across all six LLMs. MAS with multiple personas (including neutral, assertive, or empathetic) significantly delayed consensus in four out of six LLMs compared to uniform personas. In three of those LLMs, higher temperatures significantly diminished the effects of multiple personas on consensus. However, neither temperature nor persona pairing led to robust improvements in coding accuracy. Single agents matched or outperformed MAS consensus in most conditions. Only one model (OpenHermesV2:7B) and code category showed above-chance gains from MAS deliberation when temperature was 0.5 or lower and especially when the agents included at least one assertive persona. Qualitative analysis of MAS collaboration for these configurations suggests that MAS may nonetheless aid in narrowing ambiguous code applications that could improve codebooks and human-AI coding. We contribute new insight into the limits of LLM-based qualitative methods, challenging the notion that diverse MAS personas lead to better outcomes. We open-source our MAS and experimentation code.
null
https://arxiv.org/abs/2507.11198v1
https://arxiv.org/pdf/2507.11198v1.pdf
null
[ "Conrad Borchers", "Bahar Shahrokhian", "Francesco Balzan", "Elham Tajik", "Sreecharan Sankaranarayanan", "Sebastian Simon" ]
[ "Math" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "This optimizer mix [ADAM](https://paperswithcode.com/method/adam) and [SGD](https://paperswithcode.com/method/sgd) creating the MAS optimizer.", "full_name": "Mixing Adam and SGD", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Stochastic Optimization** methods are used to optimize neural networks. We typically take a mini-batch of data, hence 'stochastic', and perform a type of gradient descent with this minibatch. Below you can find a continuously updating list of stochastic optimization algorithms.", "name": "Stochastic Optimization", "parent": "Optimization" }, "name": "MAS", "source_title": "Mixing ADAM and SGD: a Combined Optimization Method", "source_url": "https://arxiv.org/abs/2011.08042v1" } ]
https://paperswithcode.com/paper/rmau-net-a-residual-multihead-attention-u-net
2507.11143
null
null
RMAU-NET: A Residual-Multihead-Attention U-Net Architecture for Landslide Segmentation and Detection from Remote Sensing Images
In recent years, landslide disasters have been reported frequently due to extreme weather events such as droughts, floods, and storms, or as a consequence of human activities such as deforestation and excessive exploitation of natural resources. However, automatically observing landslides is challenging due to the extremely large observation area and rugged topography such as mountains or highlands. This motivates us to propose an end-to-end deep-learning-based model which explores remote sensing images for automatically observing landslide events. By considering remote sensing images as the input data, we can rely on a freely available resource and observe large, rough terrains over time. To explore the remote sensing images, we propose a novel neural network architecture for the two tasks of landslide detection and landslide segmentation. We evaluated our proposed model on three different benchmark datasets of LandSlide4Sense, Bijie, and Nepal. By conducting extensive experiments, we achieve F1 scores of 98.23 and 93.83 for the landslide detection task on the LandSlide4Sense and Bijie datasets, and mIoU scores of 63.74 and 76.88 on the segmentation task for the LandSlide4Sense and Nepal datasets. These experimental results demonstrate the potential of integrating our proposed model into real-life landslide observation systems.
null
https://arxiv.org/abs/2507.11143v1
https://arxiv.org/pdf/2507.11143v1.pdf
null
[ "Lam Pham", "Cam Le", "Hieu Tang", "Khang Truong", "Truong Nguyen", "Jasmin Lampert", "Alexander Schindler", "Martin Boyer", "Son Phan" ]
[ "Landslide segmentation" ]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/try-harder-hard-sample-generation-and
2507.11119
null
null
Try Harder: Hard Sample Generation and Learning for Clothes-Changing Person Re-ID
Hard samples pose a significant challenge in person re-identification (ReID) tasks, particularly in clothing-changing person Re-ID (CC-ReID). Their inherent ambiguity or similarity, coupled with the lack of explicit definitions, makes them a fundamental bottleneck. These issues not only limit the design of targeted learning strategies but also diminish the model's robustness under clothing or viewpoint changes. In this paper, we propose a novel multimodal-guided Hard Sample Generation and Learning (HSGL) framework, which is the first effort to unify textual and visual modalities to explicitly define, generate, and optimize hard samples within a unified paradigm. HSGL comprises two core components: (1) Dual-Granularity Hard Sample Generation (DGHSG), which leverages multimodal cues to synthesize semantically consistent samples, including both coarse- and fine-grained hard positives and negatives for effectively increasing the hardness and diversity of the training data. (2) Hard Sample Adaptive Learning (HSAL), which introduces a hardness-aware optimization strategy that adjusts feature distances based on textual semantic labels, encouraging the separation of hard positives and drawing hard negatives closer in the embedding space to enhance the model's discriminative capability and robustness to hard samples. Extensive experiments on multiple CC-ReID benchmarks demonstrate the effectiveness of our approach and highlight the potential of multimodal-guided hard sample generation and learning for robust CC-ReID. Notably, HSAL significantly accelerates the convergence of the targeted learning procedure and achieves state-of-the-art performance on both PRCC and LTCC datasets. The code is available at https://github.com/undooo/TryHarder-ACMMM25.
null
https://arxiv.org/abs/2507.11119v1
https://arxiv.org/pdf/2507.11119v1.pdf
null
[ "Hankun Liu", "Yujian Zhao", "Guanglin Niu" ]
[ "Person Re-Identification" ]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/kptllm-towards-generic-keypoint-comprehension
2507.11102
null
null
KptLLM++: Towards Generic Keypoint Comprehension with Large Language Model
The emergence of Multimodal Large Language Models (MLLMs) has revolutionized image understanding by bridging textual and visual modalities. However, these models often struggle with capturing fine-grained semantic information, such as the precise identification and analysis of object keypoints. Keypoints, as structure-aware, pixel-level, and compact representations of objects, particularly articulated ones, play a crucial role in applications such as fine-grained image analysis, object retrieval, and behavior recognition. In this paper, we propose KptLLM++, a novel multimodal large language model that is specifically designed for generic keypoint comprehension through the integration of diverse input modalities guided by user-defined instructions. By unifying keypoint detection across varied contexts, KptLLM++ establishes itself as an advanced interface, fostering more effective human-AI collaboration. The model is built upon a novel identify-then-detect paradigm, which first interprets keypoint semantics and subsequently localizes their precise positions through a structured chain-of-thought reasoning mechanism. To push the boundaries of performance, we have scaled up the training dataset to over 500K samples, encompassing diverse objects, keypoint categories, image styles, and scenarios with complex occlusions. This extensive scaling enables KptLLM++ to unlock its potential, achieving remarkable accuracy and generalization. Comprehensive experiments on multiple keypoint detection benchmarks demonstrate its state-of-the-art performance, underscoring its potential as a unified solution for fine-grained image understanding and its transformative implications for human-AI interaction.
null
https://arxiv.org/abs/2507.11102v1
https://arxiv.org/pdf/2507.11102v1.pdf
null
[ "Jie Yang", "Wang Zeng", "Sheng Jin", "Lumin Xu", "Wentao Liu", "Chen Qian", "Zhen Li", "Ruimao Zhang" ]
[ "Keypoint Detection", "Language Modeling", "Language Modelling", "Large Language Model", "Multimodal Large Language Model" ]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/beyond-traditional-algorithms-leveraging-llms
2507.11086
null
null
Beyond Traditional Algorithms: Leveraging LLMs for Accurate Cross-Border Entity Identification
The growing prevalence of cross-border financial activities in global markets has underscored the necessity of accurately identifying and classifying foreign entities. This practice is essential within the Spanish financial system for ensuring robust risk management, regulatory adherence, and the prevention of financial misconduct. This process involves a labor-intensive entity-matching task, where entities need to be validated against available reference sources. Challenges arise from linguistic variations, special characters, outdated names, and changes in legal forms, complicating traditional matching algorithms like Jaccard, cosine, and Levenshtein distances. These methods struggle with contextual nuances and semantic relationships, leading to mismatches. To address these limitations, we explore Large Language Models (LLMs) as a flexible alternative. LLMs leverage extensive training to interpret context, handle abbreviations, and adapt to legal transitions. We evaluate traditional methods, Hugging Face-based LLMs, and interface-based LLMs (e.g., Microsoft Copilot, Alibaba's Qwen 2.5) using a dataset of 65 Portuguese company cases. Results show traditional methods achieve accuracies over 92% but suffer high false positive rates (20-40%). Interface-based LLMs outperform, achieving accuracies above 93%, F1 scores exceeding 96%, and lower false positives (40-80%).
null
https://arxiv.org/abs/2507.11086v1
https://arxiv.org/pdf/2507.11086v1.pdf
null
[ "Andres Azqueta-Gavaldón", "Joaquin Ramos Cosgrove" ]
[]
2025-07-15T00:00:00
null
null
null
null
[]
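The entity-identification abstract above benchmarks LLMs against traditional string-similarity baselines such as Jaccard and Levenshtein distances. A self-contained sketch of those two baselines follows; the example company names and any decision threshold are illustrative assumptions.

```python
# Minimal sketch of the traditional string-similarity baselines named in the
# abstract: Jaccard similarity over token sets and normalized Levenshtein
# similarity. The example names are made up for illustration.
def jaccard(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance (insert, delete, substitute).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def levenshtein_sim(a: str, b: str) -> float:
    m = max(len(a), len(b)) or 1
    return 1.0 - levenshtein(a.lower(), b.lower()) / m

pair = ("Comercio Atlântico, S.A.", "Comercio Atlantico SA")
print("jaccard:", round(jaccard(*pair), 2), "levenshtein:", round(levenshtein_sim(*pair), 2))
# Such baselines miss renamings and legal-form changes, which is where the paper evaluates LLMs.
```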
https://paperswithcode.com/paper/tactical-decision-for-multi-ugv-confrontation
2507.11079
null
null
Tactical Decision for Multi-UGV Confrontation with a Vision-Language Model-Based Commander
In multiple unmanned ground vehicle confrontations, autonomously evolving multi-agent tactical decisions from situational awareness remains a significant challenge. Traditional handcrafted rule-based methods become vulnerable in the complicated and transient battlefield environment, and current reinforcement learning methods mainly focus on action manipulation instead of strategic decisions due to a lack of interpretability. Here, we propose a vision-language model-based commander to address the issue of intelligent perception-to-decision reasoning in autonomous confrontations. Our method integrates a vision language model for scene understanding and a lightweight large language model for strategic reasoning, achieving unified perception and decision within a shared semantic space, with strong adaptability and interpretability. Unlike rule-based search and reinforcement learning methods, the combination of the two modules establishes a full-chain process, reflecting the cognitive process of human commanders. Simulation and ablation experiments validate that the proposed approach achieves a win rate of over 80% compared with baseline models.
null
https://arxiv.org/abs/2507.11079v1
https://arxiv.org/pdf/2507.11079v1.pdf
null
[ "Li Wang", "Qizhen Wu", "Lei Chen" ]
[ "Language Modeling", "Language Modelling", "Large Language Model", "reinforcement-learning", "Reinforcement Learning", "Scene Understanding" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/robust-3d-masked-part-level-editing-in-3d
2507.11061
null
null
Robust 3D-Masked Part-level Editing in 3D Gaussian Splatting with Regularized Score Distillation Sampling
Recent advances in 3D neural representations and instance-level editing models have enabled the efficient creation of high-quality 3D content. However, achieving precise local 3D edits remains challenging, especially for Gaussian Splatting, due to inconsistent multi-view 2D part segmentations and inherently ambiguous nature of Score Distillation Sampling (SDS) loss. To address these limitations, we propose RoMaP, a novel local 3D Gaussian editing framework that enables precise and drastic part-level modifications. First, we introduce a robust 3D mask generation module with our 3D-Geometry Aware Label Prediction (3D-GALP), which uses spherical harmonics (SH) coefficients to model view-dependent label variations and soft-label property, yielding accurate and consistent part segmentations across viewpoints. Second, we propose a regularized SDS loss that combines the standard SDS loss with additional regularizers. In particular, an L1 anchor loss is introduced via our Scheduled Latent Mixing and Part (SLaMP) editing method, which generates high-quality part-edited 2D images and confines modifications only to the target region while preserving contextual coherence. Additional regularizers, such as Gaussian prior removal, further improve flexibility by allowing changes beyond the existing context, and robust 3D masking prevents unintended edits. Experimental results demonstrate that our RoMaP achieves state-of-the-art local 3D editing on both reconstructed and generated Gaussian scenes and objects qualitatively and quantitatively, making it possible for more robust and flexible part-level 3D Gaussian editing.
null
https://arxiv.org/abs/2507.11061v1
https://arxiv.org/pdf/2507.11061v1.pdf
null
[ "Hayeon Kim", "Ji Ha Jang", "Se Young Chun" ]
[ "3D geometry" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "We propose to theoretically and empirically examine the effect of incorporating weighting schemes into walk-aggregating GNNs. To this end, we propose a simple, interpretable, and end-to-end supervised GNN model, called AWARE (Attentive Walk-Aggregating GRaph Neural NEtwork), for graph-level prediction. AWARE aggregates the walk information by means of weighting schemes at distinct levels (vertex-, walk-, and graph-level) in a principled manner. By virtue of the incorporated weighting schemes at these different levels, AWARE can emphasize the information important for prediction while diminishing the irrelevant ones—leading to representations that can improve learning performance.", "full_name": "Attentive Walk-Aggregating Graph Neural Network", "introduced_year": 2000, "main_collection": { "area": "Graphs", "description": "", "name": "Graph Representation Learning", "parent": null }, "name": "AWARE", "source_title": null, "source_url": null } ]
https://paperswithcode.com/paper/personalized-exercise-recommendation-with
2507.11060
null
null
Personalized Exercise Recommendation with Semantically-Grounded Knowledge Tracing
We introduce ExRec, a general framework for personalized exercise recommendation with semantically-grounded knowledge tracing. Our method builds on the observation that existing exercise recommendation approaches simulate student performance via knowledge tracing (KT) but they often overlook two key aspects: (a) the semantic content of questions and (b) the sequential, structured progression of student learning. To address this, our ExRec presents an end-to-end pipeline, from annotating the KCs of questions and learning their semantic representations to training KT models and optimizing several reinforcement learning (RL) methods. Moreover, we improve standard Q-learning-based continuous RL methods via a tailored model-based value estimation (MVE) approach that directly leverages the components of KT model in estimating cumulative knowledge improvement. We validate the effectiveness of our ExRec using various RL methods across four real-world tasks with different educational goals in online math learning. We further show that ExRec generalizes robustly to new, unseen questions and that it produces interpretable student learning trajectories. Together, our findings highlight the promise of KT-guided RL for effective personalization in education.
null
https://arxiv.org/abs/2507.11060v1
https://arxiv.org/pdf/2507.11060v1.pdf
null
[ "Yilmazcan Ozyurt", "Tunaberk Almaci", "Stefan Feuerriegel", "Mrinmaya Sachan" ]
[ "Knowledge Tracing", "Math", "Q-Learning", "Reinforcement Learning (RL)" ]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/robust-multi-task-gradient-boosting
2507.11411
null
null
Robust-Multi-Task Gradient Boosting
Multi-task learning (MTL) has shown effectiveness in exploiting shared information across tasks to improve generalization. MTL assumes tasks share similarities that can improve performance. In addition, boosting algorithms have demonstrated exceptional performance across diverse learning problems, primarily due to their ability to focus on hard-to-learn instances and iteratively reduce residual errors. This makes them a promising approach for learning multi-task problems. However, real-world MTL scenarios often involve tasks that are not well-aligned (known as outlier or adversarial tasks), which do not share beneficial similarities with others and can, in fact, deteriorate the performance of the overall model. To overcome this challenge, we propose Robust-Multi-Task Gradient Boosting (R-MTGB), a novel boosting framework that explicitly models and adapts to task heterogeneity during training. R-MTGB structures the learning process into three sequential blocks: (1) learning shared patterns, (2) partitioning tasks into outliers and non-outliers with regularized parameters, and (3) fine-tuning task-specific predictors. This architecture enables R-MTGB to automatically detect and penalize outlier tasks while promoting effective knowledge transfer among related tasks. Our method integrates these mechanisms seamlessly within gradient boosting, allowing robust handling of noisy or adversarial tasks without sacrificing accuracy. Extensive experiments on both synthetic benchmarks and real-world datasets demonstrate that our approach successfully isolates outliers, transfers knowledge, and consistently reduces prediction errors for each task individually, and achieves overall performance gains across all tasks. These results highlight robustness, adaptability, and reliable convergence of R-MTGB in challenging MTL environments.
null
https://arxiv.org/abs/2507.11411v1
https://arxiv.org/pdf/2507.11411v1.pdf
null
[ "Seyedsaman Emami", "Gonzalo Martínez-Muñoz", "Daniel Hernández-Lobato" ]
[ "Multi-Task Learning", "Transfer Learning" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/optimal-sensor-scheduling-and-selection-for
2507.11240
null
null
Optimal Sensor Scheduling and Selection for Continuous-Discrete Kalman Filtering with Auxiliary Dynamics
We study the Continuous-Discrete Kalman Filter (CD-KF) for State-Space Models (SSMs) where continuous-time dynamics are observed via multiple sensors with discrete, irregularly timed measurements. Our focus extends to scenarios in which the measurement process is coupled with the states of an auxiliary SSM. For instance, higher measurement rates may increase energy consumption or heat generation, while a sensor's accuracy can depend on its own spatial trajectory or that of the measured target. Each sensor thus carries distinct costs and constraints associated with its measurement rate and additional constraints and costs on the auxiliary state. We model measurement occurrences as independent Poisson processes with sensor-specific rates and derive an upper bound on the mean posterior covariance matrix of the CD-KF along the mean auxiliary state. The bound is continuously differentiable with respect to the measurement rates, which enables efficient gradient-based optimization. Exploiting this bound, we propose a finite-horizon optimal control framework to optimize measurement rates and auxiliary-state dynamics jointly. We further introduce a deterministic method for scheduling measurement times from the optimized rates. Empirical results in state-space filtering and dynamic temporal Gaussian process regression demonstrate that our approach achieves improved trade-offs between resource usage and estimation accuracy.
null
https://arxiv.org/abs/2507.11240v1
https://arxiv.org/pdf/2507.11240v1.pdf
null
[ "Mohamad Al Ahdab", "John Leth", "Zheng-Hua Tan" ]
[ "Scheduling", "State Space Models" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "**Gaussian Processes** are non-parametric models for approximating functions. They rely upon a measure of similarity between points (the kernel function) to predict the value for an unseen point from training data. The models are fully probabilistic so uncertainty bounds are baked in with the model.\r\n\r\nImage Source: Gaussian Processes for Machine Learning, C. E. Rasmussen & C. K. I. Williams", "full_name": "Gaussian Process", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Non-Parametric Classification** methods perform classification where we use non-parametric methods to approximate the functional form of the relationship. Below you can find a continuously updating list of non-parametric classification methods.", "name": "Non-Parametric Classification", "parent": null }, "name": "Gaussian Process", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "", "full_name": "Focus", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Focus", "source_title": "Focus Your Attention (with Adaptive IIR Filters)", "source_url": "https://arxiv.org/abs/2305.14952v2" } ]
https://paperswithcode.com/paper/a-distance-metric-for-mixed-integer
2507.11063
null
null
A Distance Metric for Mixed Integer Programming Instances
Mixed-integer linear programming (MILP) is a powerful tool for addressing a wide range of real-world problems, but it lacks a clear structure for comparing instances. A reliable similarity metric could establish meaningful relationships between instances, enabling more effective evaluation of instance set heterogeneity and providing better guidance to solvers, particularly when machine learning is involved. Existing similarity metrics often lack precision in identifying instance classes or rely heavily on labeled data, which limits their applicability and generalization. To bridge this gap, this paper introduces the first mathematical distance metric for MILP instances, derived directly from their mathematical formulations. By discretizing right-hand sides, weights, and variables into classes, the proposed metric draws inspiration from the Earth mover's distance to quantify mismatches in weight-variable distributions for constraint comparisons. This approach naturally extends to enable instance-level comparisons. We evaluate both an exact and a greedy variant of our metric under various parameter settings, using the StrIPLIB dataset. Results show that all components of the metric contribute to class identification, and that the greedy version achieves accuracy nearly identical to the exact formulation while being nearly 200 times faster. Compared to state-of-the-art baselines, including feature-based, image-based, and neural network models, our unsupervised method consistently outperforms all non-learned approaches and rivals the performance of a supervised classifier on class and subclass grouping tasks.
null
https://arxiv.org/abs/2507.11063v1
https://arxiv.org/pdf/2507.11063v1.pdf
null
[ "Gwen Maudet", "Grégoire Danoy" ]
[]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/alleviating-textual-reliance-in-medical
2507.11055
null
null
Alleviating Textual Reliance in Medical Language-guided Segmentation via Prototype-driven Semantic Approximation
Medical language-guided segmentation, integrating textual clinical reports as auxiliary guidance to enhance image segmentation, has demonstrated significant improvements over unimodal approaches. However, its inherent reliance on paired image-text input, which we refer to as ``textual reliance", presents two fundamental limitations: 1) many medical segmentation datasets lack paired reports, leaving a substantial portion of image-only data underutilized for training; and 2) inference is limited to retrospective analysis of cases with paired reports, limiting its applicability in most clinical scenarios where segmentation typically precedes reporting. To address these limitations, we propose ProLearn, the first Prototype-driven Learning framework for language-guided segmentation that fundamentally alleviates textual reliance. At its core, we introduce a novel Prototype-driven Semantic Approximation (PSA) module to enable approximation of semantic guidance from textual input. PSA initializes a discrete and compact prototype space by distilling segmentation-relevant semantics from textual reports. Once initialized, it supports a query-and-respond mechanism which approximates semantic guidance for images without textual input, thereby alleviating textual reliance. Extensive experiments on QaTa-COV19, MosMedData+ and Kvasir-SEG demonstrate that ProLearn outperforms state-of-the-art language-guided methods when limited text is available.
null
https://arxiv.org/abs/2507.11055v2
https://arxiv.org/pdf/2507.11055v2.pdf
null
[ "Shuchang Ye", "Usman Naseem", "Mingyuan Meng", "Jinman Kim" ]
[ "Image Segmentation", "Segmentation", "Semantic Segmentation" ]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/journalism-guided-agentic-in-context-learning
2507.11049
null
null
Journalism-Guided Agentic In-Context Learning for News Stance Detection
As online news consumption grows, personalized recommendation systems have become integral to digital journalism. However, these systems risk reinforcing filter bubbles and political polarization by failing to incorporate diverse perspectives. Stance detection -- identifying a text's position on a target -- can help mitigate this by enabling viewpoint-aware recommendations and data-driven analyses of media bias. Yet, existing stance detection research remains largely limited to short texts and high-resource languages. To address these gaps, we introduce \textsc{K-News-Stance}, the first Korean dataset for article-level stance detection, comprising 2,000 news articles with article-level and 19,650 segment-level stance annotations across 47 societal issues. We also propose \textsc{JoA-ICL}, a \textbf{Jo}urnalism-guided \textbf{A}gentic \textbf{I}n-\textbf{C}ontext \textbf{L}earning framework that employs a language model agent to predict the stances of key structural segments (e.g., leads, quotes), which are then aggregated to infer the overall article stance. Experiments show that \textsc{JoA-ICL} outperforms existing stance detection methods, highlighting the benefits of segment-level agency in capturing the overall position of long-form news articles. Two case studies further demonstrate its broader utility in promoting viewpoint diversity in news recommendations and uncovering patterns of media bias.
null
https://arxiv.org/abs/2507.11049v2
https://arxiv.org/pdf/2507.11049v2.pdf
null
[ "Dahyun Lee", "Jonghyeon Choi", "Jiyoung Han", "Kunwoo Park" ]
[ "Articles", "In-Context Learning", "Position", "Recommendation Systems", "Stance Detection" ]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/a-multi-view-high-resolution-foot-ankle
2507.11037
null
null
A Multi-View High-Resolution Foot-Ankle Complex Point Cloud Dataset During Gait for Occlusion-Robust 3D Completion
The kinematic analysis of the foot-ankle complex during gait is essential for advancing biomechanical research and clinical assessment. Collecting accurate surface geometry data from the foot and ankle during dynamic gait conditions is inherently challenging due to swing foot occlusions and viewing limitations. Thus, this paper introduces FootGait3D, a novel multi-view dataset of high-resolution ankle-foot surface point clouds captured during natural gait. Unlike existing gait datasets that typically target whole-body or lower-limb motion, FootGait3D focuses specifically on the detailed modeling of the ankle-foot region, offering a finer granularity of motion data. To this end, FootGait3D consists of 8,403 point cloud frames collected from 46 subjects using a custom five-camera depth sensing system. Each frame includes a complete 5-view reconstruction of the foot and ankle (serving as ground truth) along with partial point clouds obtained from only four, three, or two views. This structured variation enables rigorous evaluation of 3D point cloud completion methods under varying occlusion levels and viewpoints. Our dataset is designed for shape completion tasks, facilitating the benchmarking of state-of-the-art single-modal (e.g., PointTr, SnowflakeNet, Anchorformer) and multi-modal (e.g., SVDFormer, PointSea, CSDN) completion networks on the challenge of recovering the full foot geometry from occluded inputs. FootGait3D has significant potential to advance research in biomechanics and multi-segment foot modeling, offering a valuable testbed for clinical gait analysis, prosthetic design, and robotics applications requiring detailed 3D models of the foot during motion. The dataset is now available at https://huggingface.co/datasets/ljw285/FootGait3D.
null
https://arxiv.org/abs/2507.11037v1
https://arxiv.org/pdf/2507.11037v1.pdf
null
[ "Jie-Wen Li", "Zi-Han Ye", "Qingyuan Zhou", "Jiayi Song", "Ying He", "Ben Fei", "Wen-Ming Chen" ]
[ "Benchmarking", "Point Cloud Completion" ]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/personalized-ovss-understanding-personal
2507.11030
null
null
Personalized OVSS: Understanding Personal Concept in Open-Vocabulary Semantic Segmentation
While open-vocabulary semantic segmentation (OVSS) can segment an image into semantic regions based on arbitrarily given text descriptions even for classes unseen during training, it fails to understand personal texts (e.g., `my mug cup') for segmenting regions of specific interest to users. This paper addresses challenges like recognizing `my mug cup' among `multiple mug cups'. To overcome this challenge, we introduce a novel task termed \textit{personalized open-vocabulary semantic segmentation} and propose a text prompt tuning-based plug-in method designed to recognize personal visual concepts using a few pairs of images and masks, while maintaining the performance of the original OVSS. Based on the observation that reducing false predictions is essential when applying text prompt tuning to this task, our proposed method employs `negative mask proposal' that captures visual concepts other than the personalized concept. We further improve the performance by enriching the representation of text prompts by injecting visual embeddings of the personal concept into them. This approach enhances personalized OVSS without compromising the original OVSS performance. We demonstrate the superiority of our method on our newly established benchmarks for this task, including FSS$^\text{per}$, CUB$^\text{per}$, and ADE$^\text{per}$.
null
https://arxiv.org/abs/2507.11030v1
https://arxiv.org/pdf/2507.11030v1.pdf
null
[ "Sunghyun Park", "Jungsoo Lee", "Shubhankar Borse", "Munawar Hayat", "Sungha Choi", "Kyuwoong Hwang", "Fatih Porikli" ]
[ "Open Vocabulary Semantic Segmentation", "Open-Vocabulary Semantic Segmentation", "Semantic Segmentation" ]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/bridge-feature-matching-and-cross-modal
2507.11003
null
null
Bridge Feature Matching and Cross-Modal Alignment with Mutual-filtering for Zero-shot Anomaly Detection
With the advent of vision-language models (e.g., CLIP) in zero- and few-shot settings, CLIP has been widely applied to zero-shot anomaly detection (ZSAD) in recent research, where the rare classes are essential and expected in many applications. This study introduces \textbf{FiSeCLIP} for ZSAD with training-free \textbf{CLIP}, combining feature matching with cross-modal alignment. Testing with the entire dataset is impractical, while batch-based testing better aligns with real industrial needs, and images within a batch can serve as mutual reference points. Accordingly, FiSeCLIP utilizes other images in the same batch as reference information for the current image. However, the lack of labels for these references can introduce ambiguity, so we apply text information to \textbf{fi}lter out noisy features. In addition, we further explore CLIP's inherent potential to restore its local \textbf{se}mantic correlation, adapting it for fine-grained anomaly detection tasks to enable a more accurate filtering process. Our approach exhibits superior performance for both anomaly classification and segmentation on anomaly detection benchmarks, building a stronger baseline for this direction, e.g., on MVTec-AD, FiSeCLIP outperforms the SOTA AdaCLIP by +4.6\%$\uparrow$/+5.7\%$\uparrow$ in segmentation metrics AU-ROC/$F_1$-max.
null
https://arxiv.org/abs/2507.11003v1
https://arxiv.org/pdf/2507.11003v1.pdf
null
[ "Yuhu Bai", "Jiangning Zhang", "Yunkang Cao", "Guangyuan Lu", "Qingdong He", "Xiangtai Li", "Guanzhong Tian" ]
[ "Anomaly Classification", "Anomaly Detection", "cross-modal alignment", "zero-shot anomaly detection" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/OpenAI/CLIP", "description": "**Contrastive Language-Image Pre-training** (**CLIP**), consisting of a simplified version of ConVIRT trained from scratch, is an efficient method of image representation learning from natural language supervision. , CLIP jointly trains an image encoder and a text encoder to predict the correct pairings of a batch of (image, text) training examples. At test time the learned text encoder synthesizes a zero-shot linear classifier by embedding the names or descriptions of the target dataset’s classes. \r\n\r\nFor pre-training, CLIP is trained to predict which of the $N X N$ possible (image, text) pairings across a batch actually occurred. CLIP learns a multi-modal embedding space by jointly training an image encoder and text encoder to maximize the cosine similarity of the image and text embeddings of the $N$ real pairs in the batch while minimizing the cosine similarity of the embeddings of the $N^2 - N$ incorrect pairings. A symmetric cross entropy loss is optimized over these similarity scores. \r\n\r\nImage credit: [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/pdf/2103.00020.pdf)", "full_name": "Contrastive Language-Image Pre-training", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Representations", "parent": null }, "name": "CLIP", "source_title": "Learning Transferable Visual Models From Natural Language Supervision", "source_url": "https://arxiv.org/abs/2103.00020v1" } ]
https://paperswithcode.com/paper/pronunciation-deviation-analysis-through
2507.10985
null
null
Pronunciation Deviation Analysis Through Voice Cloning and Acoustic Comparison
This paper presents a novel approach for detecting mispronunciations by analyzing deviations between a user's original speech and their voice-cloned counterpart with corrected pronunciation. We hypothesize that regions with maximal acoustic deviation between the original and cloned utterances indicate potential mispronunciations. Our method leverages recent advances in voice cloning to generate a synthetic version of the user's voice with proper pronunciation, then performs frame-by-frame comparisons to identify problematic segments. Experimental results demonstrate the effectiveness of this approach in pinpointing specific pronunciation errors without requiring predefined phonetic rules or extensive training data for each target language.
null
https://arxiv.org/abs/2507.10985v1
https://arxiv.org/pdf/2507.10985v1.pdf
null
[ "Andrew Valdivia", "Yueming Zhang", "Hailu Xu", "Amir Ghasemkhani", "Xin Qin" ]
[ "Voice Cloning" ]
2025-07-15T00:00:00
null
null
null
null
[]
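The pronunciation-deviation abstract above compares a user's utterance frame by frame against a voice-cloned, corrected rendition and flags regions of maximal acoustic deviation. A minimal MFCC-based sketch of that comparison follows; the file names, the z-score threshold, and the naive length truncation (instead of proper time alignment) are assumptions.

```python
# Sketch of the frame-by-frame comparison idea: extract MFCCs from the original
# and the voice-cloned (corrected) utterance and flag frames with large deviation.
# File names and the z-score threshold are assumptions; the two signals are naively
# truncated to the same length here, whereas a real system would time-align them.
import numpy as np
import librosa

orig, sr = librosa.load("user_original.wav", sr=16000)
clone, _ = librosa.load("user_cloned_corrected.wav", sr=16000)

mfcc_orig = librosa.feature.mfcc(y=orig, sr=sr, n_mfcc=13)
mfcc_clone = librosa.feature.mfcc(y=clone, sr=sr, n_mfcc=13)

n = min(mfcc_orig.shape[1], mfcc_clone.shape[1])
dev = np.linalg.norm(mfcc_orig[:, :n] - mfcc_clone[:, :n], axis=0)   # per-frame deviation

z = (dev - dev.mean()) / (dev.std() + 1e-8)
suspect_frames = np.where(z > 2.0)[0]                                # assumed threshold
hop_s = 512 / sr                                                     # librosa's default hop length
print("possible mispronunciations near:", [round(f * hop_s, 2) for f in suspect_frames[:10]], "seconds")
```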
https://paperswithcode.com/paper/diffusion-decoding-for-peptide-de-novo
2507.10955
null
null
Diffusion Decoding for Peptide De Novo Sequencing
Peptide de novo sequencing is a method used to reconstruct amino acid sequences from tandem mass spectrometry data without relying on existing protein sequence databases. Traditional deep learning approaches, such as Casanovo, mainly utilize autoregressive decoders and predict amino acids sequentially. As a result, they encounter cascading errors and fail to leverage high-confidence regions effectively. To address these issues, this paper investigates using diffusion decoders adapted for the discrete data domain. These decoders provide a different approach, allowing sequence generation to start from any peptide segment, thereby enhancing prediction accuracy. We experiment with three different diffusion decoder designs, knapsack beam search, and various loss functions. We find that knapsack beam search did not improve performance metrics and that simply replacing the transformer decoder with a diffusion decoder lowered performance. Although peptide precision and recall were still 0, the best diffusion decoder design with the DINOISER loss function obtained a statistically significant improvement in amino acid recall by 0.373 compared to the baseline autoregressive decoder-based Casanovo model. These findings highlight the potential of diffusion decoders to not only enhance model sensitivity but also drive significant advancements in peptide de novo sequencing.
null
https://arxiv.org/abs/2507.10955v1
https://arxiv.org/pdf/2507.10955v1.pdf
null
[ "Chi-en Amy Tai", "Alexander Wong" ]
[ "Decoder" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
https://paperswithcode.com/paper/langevin-flows-for-modeling-neural-latent
2507.11531
null
null
Langevin Flows for Modeling Neural Latent Dynamics
Neural populations exhibit latent dynamical structures that drive time-evolving spiking activities, motivating the search for models that capture both intrinsic network dynamics and external unobserved influences. In this work, we introduce LangevinFlow, a sequential Variational Auto-Encoder where the time evolution of latent variables is governed by the underdamped Langevin equation. Our approach incorporates physical priors -- such as inertia, damping, a learned potential function, and stochastic forces -- to represent both autonomous and non-autonomous processes in neural systems. Crucially, the potential function is parameterized as a network of locally coupled oscillators, biasing the model toward oscillatory and flow-like behaviors observed in biological neural populations. Our model features a recurrent encoder, a one-layer Transformer decoder, and Langevin dynamics in the latent space. Empirically, our method outperforms state-of-the-art baselines on synthetic neural populations generated by a Lorenz attractor, closely matching ground-truth firing rates. On the Neural Latents Benchmark (NLB), the model achieves superior held-out neuron likelihoods (bits per spike) and forward prediction accuracy across four challenging datasets. It also matches or surpasses alternative methods in decoding behavioral metrics such as hand velocity. Overall, this work introduces a flexible, physics-inspired, high-performing framework for modeling complex neural population dynamics and their unobserved influences.
null
https://arxiv.org/abs/2507.11531v1
https://arxiv.org/pdf/2507.11531v1.pdf
null
[ "Yue Song", "T. Anderson Keller", "Yisong Yue", "Pietro Perona", "Max Welling" ]
[]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/google/jax/blob/7f3078b70d0ed9bea6228efa420879c56f72ef69/jax/experimental/stax.py#L271-L275", "description": "**Dropout** is a regularization technique for neural networks that drops a unit (along with connections) at training time with a specified probability $p$ (a common value is $p=0.5$). At test time, all units are present, but with weights scaled by $p$ (i.e. $w$ becomes $pw$).\r\n\r\nThe idea is to prevent co-adaptation, where the neural network becomes too reliant on particular connections, as this could be symptomatic of overfitting. Intuitively, dropout can be thought of as creating an implicit ensemble of neural networks.", "full_name": "Dropout", "introduced_year": 2000, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Dropout", "source_title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "source_url": "http://jmlr.org/papers/v15/srivastava14a.html" }, { "code_snippet_url": "", "description": "**Label Smoothing** is a regularization technique that introduces noise for the labels. This accounts for the fact that datasets may have mistakes in them, so maximizing the likelihood of $\\log{p}\\left(y\\mid{x}\\right)$ directly can be harmful. Assume for a small constant $\\epsilon$, the training set label $y$ is correct with probability $1-\\epsilon$ and incorrect otherwise. Label Smoothing regularizes a model based on a [softmax](https://paperswithcode.com/method/softmax) with $k$ output values by replacing the hard $0$ and $1$ classification targets with targets of $\\frac{\\epsilon}{k}$ and $1-\\frac{k-1}{k}\\epsilon$ respectively.\r\n\r\nSource: Deep Learning, Goodfellow et al\r\n\r\nImage Source: [When Does Label Smoothing Help?](https://arxiv.org/abs/1906.02629)", "full_name": "Label Smoothing", "introduced_year": 1985, "main_collection": { "area": "General", "description": "Regularization strategies are designed to reduce the test error of a machine learning algorithm, possibly at the expense of training error. Many different forms of regularization exist in the field of deep learning. Below you can find a constantly updating list of regularization strategies.", "name": "Regularization", "parent": null }, "name": "Label Smoothing", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "**Byte Pair Encoding**, or **BPE**, is a subword segmentation algorithm that encodes rare and unknown words as sequences of subword units. 
The intuition is that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).\r\n\r\n[Lei Mao](https://leimao.github.io/blog/Byte-Pair-Encoding/) has a detailed blog post that explains how this works.", "full_name": "Byte Pair Encoding", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "", "name": "Subword Segmentation", "parent": null }, "name": "BPE", "source_title": "Neural Machine Translation of Rare Words with Subword Units", "source_url": "http://arxiv.org/abs/1508.07909v5" }, { "code_snippet_url": "", "description": "**Absolute Position Encodings** are a type of position embeddings for [[Transformer](https://paperswithcode.com/method/transformer)-based models] where positional encodings are added to the input embeddings at the bottoms of the encoder and decoder stacks. The positional encodings have the same dimension $d\\_{model}$ as the embeddings, so that the two can be summed. In the original implementation, sine and cosine functions of different frequencies are used:\r\n\r\n$$ \\text{PE}\\left(pos, 2i\\right) = \\sin\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\n$$ \\text{PE}\\left(pos, 2i+1\\right) = \\cos\\left(pos/10000^{2i/d\\_{model}}\\right) $$\r\n\r\nwhere $pos$ is the position and $i$ is the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from $2\\pi$ to $10000 \\dot 2\\pi$. This function was chosen because the authors hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $\\text{PE}\\_{pos+k}$ can be represented as a linear function of $\\text{PE}\\_{pos}$.\r\n\r\nImage Source: [D2L.ai](https://d2l.ai/chapter_attention-mechanisms/self-attention-and-positional-encoding.html)", "full_name": "Absolute Position Encodings", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Position Embeddings", "parent": null }, "name": "Absolute Position Encodings", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" }, { "code_snippet_url": "https://github.com/CyberZHG/torch-layer-normalization/blob/89f405b60f53f85da6f03fe685c190ef394ce50c/torch_layer_normalization/layer_normalization.py#L8", "description": "Unlike [batch normalization](https://paperswithcode.com/method/batch-normalization), **Layer Normalization** directly estimates the normalization statistics from the summed inputs to the neurons within a hidden layer so the normalization does not introduce any new dependencies between training cases. It works well for [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and improves both the training time and the generalization performance of several existing RNN models. More recently, it has been used with [Transformer](https://paperswithcode.com/methods/category/transformers) models.\r\n\r\nWe compute the layer normalization statistics over all the hidden units in the same layer as follows:\r\n\r\n$$ \\mu^{l} = \\frac{1}{H}\\sum^{H}\\_{i=1}a\\_{i}^{l} $$\r\n\r\n$$ \\sigma^{l} = \\sqrt{\\frac{1}{H}\\sum^{H}\\_{i=1}\\left(a\\_{i}^{l}-\\mu^{l}\\right)^{2}} $$\r\n\r\nwhere $H$ denotes the number of hidden units in a layer. 
Under layer normalization, all the hidden units in a layer share the same normalization terms $\\mu$ and $\\sigma$, but different training cases have different normalization terms. Unlike batch normalization, layer normalization does not impose any constraint on the size of the mini-batch and it can be used in the pure online regime with batch size 1.", "full_name": "Layer Normalization", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Normalization** layers in deep learning are used to make optimization easier by smoothing the loss surface of the network. Below you will find a continuously updating list of normalization methods.", "name": "Normalization", "parent": null }, "name": "Layer Normalization", "source_title": "Layer Normalization", "source_url": "http://arxiv.org/abs/1607.06450v1" }, { "code_snippet_url": null, "description": "**Dense Connections**, or **Fully Connected Connections**, are a type of layer in a deep neural network that use a linear operation where every input is connected to every output by a weight. This means there are $n\\_{\\text{inputs}}*n\\_{\\text{outputs}}$ parameters, which can lead to a lot of parameters for a sizeable network.\r\n\r\n$$h\\_{l} = g\\left(\\textbf{W}^{T}h\\_{l-1}\\right)$$\r\n\r\nwhere $g$ is an activation function.\r\n\r\nImage Source: Deep Learning by Goodfellow, Bengio and Courville", "full_name": "Dense Connections", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Feedforward Networks** are a type of neural network architecture which rely primarily on dense-like connections. Below you can find a continuously updating list of feedforward network components.", "name": "Feedforward Networks", "parent": null }, "name": "Dense Connections", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "The **Softmax** output function transforms a previous layer's output into a vector of probabilities. It is commonly used for multiclass classification. Given an input vector $x$ and a weighting vector $w$ we have:\r\n\r\n$$ P(y=j \\mid{x}) = \\frac{e^{x^{T}w_{j}}}{\\sum^{K}_{k=1}e^{x^{T}wk}} $$", "full_name": "Softmax", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.", "name": "Output Functions", "parent": null }, "name": "Softmax", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/tunz/transformer-pytorch/blob/e7266679f0b32fd99135ea617213f986ceede056/model/transformer.py#L201", "description": "A **Transformer** is a model architecture that eschews recurrence and instead relies entirely on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1) to draw global dependencies between input and output. Before Transformers, the dominant sequence transduction models were based on complex recurrent or convolutional neural networks that include an encoder and a decoder. 
The Transformer also employs an encoder and decoder, but removing recurrence in favor of [attention mechanisms](https://paperswithcode.com/methods/category/attention-mechanisms-1) allows for significantly more parallelization than methods like [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks) and [CNNs](https://paperswithcode.com/methods/category/convolutional-neural-networks).", "full_name": "Transformer", "introduced_year": 2000, "main_collection": { "area": "Natural Language Processing", "description": "**Transformers** are a type of neural network architecture that have several properties that make them effective for modeling data with long-range dependencies. They generally feature a combination of multi-headed attention mechanisms, residual connections, layer normalization, feedforward connections, and positional embeddings.", "name": "Transformers", "parent": "Language Models" }, "name": "Transformer", "source_title": "Attention Is All You Need", "source_url": "https://arxiv.org/abs/1706.03762v7" } ]
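LangevinFlow's latent variables evolve under the underdamped Langevin equation. The snippet below is a generic Euler-Maruyama discretization of that equation with a stand-in quadratic potential; it illustrates the inertia, damping, and stochastic-force terms mentioned in the abstract, not the paper's learned coupled-oscillator potential or its variational training.

```python
import numpy as np

def langevin_step(x, v, grad_U, gamma=0.5, dt=0.01, rng=None):
    """One Euler-Maruyama step of underdamped Langevin dynamics.

    dx = v dt
    dv = (-grad_U(x) - gamma * v) dt + sqrt(2 * gamma * dt) * noise
    """
    rng = rng or np.random.default_rng()
    noise = rng.standard_normal(x.shape)
    v_next = v + dt * (-grad_U(x) - gamma * v) + np.sqrt(2.0 * gamma * dt) * noise
    x_next = x + dt * v_next
    return x_next, v_next

# Toy rollout with a quadratic potential U(x) = 0.5 * ||x||^2 standing in
# for the learned potential network.
grad_U = lambda x: x
x, v = np.ones(8), np.zeros(8)
for _ in range(100):
    x, v = langevin_step(x, v, grad_U)
print(x.round(3))
```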
https://paperswithcode.com/paper/better-regret-rates-in-bilateral-trade-via
2507.11419
null
null
Better Regret Rates in Bilateral Trade via Sublinear Budget Violation
Bilateral trade is a central problem in algorithmic economics, and recent work has explored how to design trading mechanisms using no-regret learning algorithms. However, no-regret learning is impossible when budget balance has to be enforced at each time step. Bernasconi et al. [Ber+24] show how this impossibility can be circumvented by relaxing the budget balance constraint to hold only globally over all time steps. In particular, they design an algorithm achieving regret of the order of $\tilde O(T^{3/4})$ and provide a lower bound of $\Omega(T^{5/7})$. In this work, we interpolate between these two extremes by studying how the optimal regret rate varies with the allowed violation of the global budget balance constraint. Specifically, we design an algorithm that, by violating the constraint by at most $T^{\beta}$ for any given $\beta \in [\frac{3}{4}, \frac{6}{7}]$, attains regret $\tilde O(T^{1 - \beta/3})$. We complement this result with a matching lower bound, thus fully characterizing the trade-off between regret and budget violation. Our results show that both the $\tilde O(T^{3/4})$ upper bound in the global budget balance case and the $\Omega(T^{5/7})$ lower bound under unconstrained budget balance violation obtained by Bernasconi et al. [Ber+24] are tight.
null
https://arxiv.org/abs/2507.11419v1
https://arxiv.org/pdf/2507.11419v1.pdf
null
[ "Anna Lunghi", "Matteo Castiglioni", "Alberto Marchesi" ]
[]
2025-07-15T00:00:00
null
null
null
null
[]
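A quick arithmetic check of the stated trade-off: with budget violation allowed up to $T^{\beta}$, the regret rate is $\tilde O(T^{1 - \beta/3})$, and the endpoints of $\beta \in [3/4, 6/7]$ recover the known $T^{3/4}$ and $T^{5/7}$ rates.

```python
from fractions import Fraction

def regret_exponent(beta: Fraction) -> Fraction:
    """Regret scales as ~T^(1 - beta/3) when a budget violation of up to T^beta is allowed."""
    return 1 - beta / 3

# Endpoints of the allowed range beta in [3/4, 6/7]:
for beta in (Fraction(3, 4), Fraction(6, 7)):
    print(f"beta = {beta}: regret ~ T^({regret_exponent(beta)})")
# beta = 3/4 gives exponent 3/4, the global-budget-balance rate T^(3/4);
# beta = 6/7 gives exponent 5/7, matching the Omega(T^(5/7)) lower bound.
```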
https://paperswithcode.com/paper/local-pairwise-distance-matching-for
2507.11367
null
null
Local Pairwise Distance Matching for Backpropagation-Free Reinforcement Learning
Training neural networks with reinforcement learning (RL) typically relies on backpropagation (BP), necessitating storage of activations from the forward pass for subsequent backward updates. Furthermore, backpropagating error signals through multiple layers often leads to vanishing or exploding gradients, which can degrade learning performance and stability. We propose a novel approach that trains each layer of the neural network using local signals during the forward pass in RL settings. Our approach introduces local, layer-wise losses leveraging the principle of matching pairwise distances from multi-dimensional scaling, enhanced with optional reward-driven guidance. This method allows each hidden layer to be trained using local signals computed during forward propagation, thus eliminating the need for backward passes and storing intermediate activations. Our experiments, conducted with policy gradient methods across common RL benchmarks, demonstrate that this backpropagation-free method achieves competitive performance compared to their classical BP-based counterpart. Additionally, the proposed method enhances stability and consistency within and across runs, and improves performance especially in challenging environments.
null
https://arxiv.org/abs/2507.11367v1
https://arxiv.org/pdf/2507.11367v1.pdf
null
[ "Daniel Tanneberg" ]
[ "Policy Gradient Methods", "reinforcement-learning", "Reinforcement Learning", "Reinforcement Learning (RL)" ]
2025-07-15T00:00:00
null
null
null
null
[]
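The layer-wise losses are described as matching pairwise distances in the spirit of multi-dimensional scaling. The toy function below shows one plausible reading of such a local loss over a mini-batch; the paper's exact formulation and its optional reward-driven guidance are not reproduced here.

```python
import numpy as np

def pairwise_dists(z: np.ndarray) -> np.ndarray:
    """Condensed pairwise Euclidean distances within a mini-batch of shape (B, D)."""
    diff = z[:, None, :] - z[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    iu = np.triu_indices(len(z), k=1)
    return d[iu]

def local_mds_loss(layer_in: np.ndarray, layer_out: np.ndarray) -> float:
    """Layer-local loss: match (normalized) pairwise distances of the output to the input."""
    d_in = pairwise_dists(layer_in)
    d_out = pairwise_dists(layer_out)
    d_in = d_in / (d_in.mean() + 1e-8)
    d_out = d_out / (d_out.mean() + 1e-8)
    return float(((d_in - d_out) ** 2).mean())

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 16))               # mini-batch of layer inputs
h = np.tanh(x @ rng.normal(size=(16, 8)))   # hidden activations of that layer
print(local_mds_loss(x, h))
```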
https://paperswithcode.com/paper/guiding-llm-decision-making-with-fairness
2507.11344
null
null
Guiding LLM Decision-Making with Fairness Reward Models
Large language models are increasingly used to support high-stakes decisions, potentially influencing who is granted bail or receives a loan. Naive chain-of-thought sampling can improve average decision accuracy, but has also been shown to amplify unfair bias. To address this challenge and enable the trustworthy use of reasoning models in high-stakes decision-making, we propose a framework for training a generalizable Fairness Reward Model (FRM). Our model assigns a fairness score to LLM reasoning, enabling the system to down-weight biased trajectories and favor equitable ones when aggregating decisions across reasoning chains. We show that a single Fairness Reward Model, trained on weakly supervised, LLM-annotated examples of biased versus unbiased reasoning, transfers across tasks, domains, and model families without additional fine-tuning. Applied to real-world decision-making tasks including recidivism prediction and social media moderation, we show that our approach consistently improves fairness while matching, or even surpassing, baseline accuracy.
null
https://arxiv.org/abs/2507.11344v1
https://arxiv.org/pdf/2507.11344v1.pdf
null
[ "Zara Hall", "Melanie Subbiah", "Thomas P Zollo", "Kathleen McKeown", "Richard Zemel" ]
[ "Decision Making", "Fairness" ]
2025-07-15T00:00:00
null
null
null
null
[]
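Aggregation with a Fairness Reward Model amounts to down-weighting biased reasoning chains when combining their decisions. The toy sketch below assumes fairness scores are already available per chain; the scoring model itself, and the paper's exact aggregation rule, are not shown.

```python
from collections import defaultdict

def fairness_weighted_vote(chains):
    """Aggregate (decision, fairness_score) pairs from sampled reasoning chains.

    Biased chains (low scores) contribute less to the final decision than
    equitable ones (high scores).
    """
    totals = defaultdict(float)
    for decision, score in chains:
        totals[decision] += max(score, 0.0)
    return max(totals, key=totals.get)

# Hypothetical chains: the majority votes "deny", but those chains score low on fairness.
chains = [("deny", 0.1), ("deny", 0.2), ("grant", 0.9), ("grant", 0.8), ("deny", 0.15)]
print(fairness_weighted_vote(chains))  # -> "grant"
```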
https://paperswithcode.com/paper/gknet-graph-based-keypoints-network-for
2507.11077
null
null
GKNet: Graph-based Keypoints Network for Monocular Pose Estimation of Non-cooperative Spacecraft
Monocular pose estimation of non-cooperative spacecraft is significant for on-orbit service (OOS) tasks, such as satellite maintenance, space debris removal, and station assembly. Considering the high demands on pose estimation accuracy, mainstream monocular pose estimation methods typically consist of keypoint detectors and PnP solver. However, current keypoint detectors remain vulnerable to structural symmetry and partial occlusion of non-cooperative spacecraft. To this end, we propose a graph-based keypoints network for the monocular pose estimation of non-cooperative spacecraft, GKNet, which leverages the geometric constraint of keypoints graph. In order to better validate keypoint detectors, we present a moderate-scale dataset for the spacecraft keypoint detection, named SKD, which consists of 3 spacecraft targets, 90,000 simulated images, and corresponding high-precise keypoint annotations. Extensive experiments and an ablation study have demonstrated the high accuracy and effectiveness of our GKNet, compared to the state-of-the-art spacecraft keypoint detectors. The code for GKNet and the SKD dataset is available at https://github.com/Dongzhou-1996/GKNet.
null
https://arxiv.org/abs/2507.11077v1
https://arxiv.org/pdf/2507.11077v1.pdf
null
[ "Weizhao Ma", "Dong Zhou", "Yuhui Hu", "Zipeng He" ]
[ "Keypoint Detection", "Pose Estimation" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "**PnP**, or **Poll and Pool**, is sampling module extension for [DETR](https://paperswithcode.com/method/detr)-type architectures that adaptively allocates its computation spatially to be more efficient. Concretely, the PnP module abstracts the image feature map into fine foreground object feature vectors and a small number of coarse background contextual feature vectors. The [transformer](https://paperswithcode.com/method/transformer) models information interaction within the fine-coarse feature space and translates the features into the detection result.", "full_name": "PnP", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "**Image Model Blocks** are building blocks used in image models such as convolutional neural networks. Below you can find a continuously updating list of image model blocks.", "name": "Image Model Blocks", "parent": null }, "name": "PnP", "source_title": "PnP-DETR: Towards Efficient Visual Analysis with Transformers", "source_url": "https://arxiv.org/abs/2109.07036v4" } ]
https://paperswithcode.com/paper/joint-angle-model-based-learning-to-refine
2507.11075
null
null
Joint angle model based learning to refine kinematic human pose estimation
Marker-free human pose estimation (HPE) has found increasing applications in various fields. Current HPE suffers from occasional errors in keypoint recognition and random fluctuation in keypoint trajectories when analyzing kinematic human poses. The performance of existing deep learning-based models for HPE refinement is considerably limited by inaccurate training datasets in which the keypoints are manually annotated. This paper proposes a novel method to overcome the difficulty through joint angle-based modeling. The key techniques include: (i) A joint angle-based model of human pose, which robustly describes kinematic human poses; (ii) Approximating the temporal variation of joint angles through a high-order Fourier series to get reliable "ground truth"; (iii) A bidirectional recurrent network designed as a post-processing module to refine the estimation of the well-established HRNet. Trained with the high-quality dataset constructed using our method, the network demonstrates outstanding performance in correcting wrongly recognized joints and smoothing their spatiotemporal trajectories. Tests show that joint angle-based refinement (JAR) outperforms the state-of-the-art HPE refinement network in challenging cases like figure skating and breaking.
null
https://arxiv.org/abs/2507.11075v1
https://arxiv.org/pdf/2507.11075v1.pdf
null
[ "Chang Peng", "Yifei Zhou", "Huifeng Xi", "Shiqing Huang", "Chuangye Chen", "Jianming Yang", "Bao Yang", "Zhenyu Jiang" ]
[ "Pose Estimation" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "How Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!\r\n\r\n\r\nHow Do I Communicate to Expedia?\r\nHow Do I Communicate to Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Live Support & Special Travel Discounts!Frustrated with automated systems? Call **☎️ **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to speak directly with a live Expedia agent and unlock exclusive best deal discounts on hotels, flights, and vacation packages. Get real help fast while enjoying limited-time offers that make your next trip more affordable, smooth, and stress-free. Don’t wait—call today!", "full_name": "*Communicated@Fast*How Do I Communicate to Expedia?", "introduced_year": 2000, "main_collection": { "area": "General", "description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.", "name": "Activation Functions", "parent": null }, "name": "ReLU", "source_title": null, "source_url": null }, { "code_snippet_url": "", "description": "A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output.\r\n\r\nIntuitively, a convolution allows for weight sharing - reducing the number of effective parameters - and image translation (allowing for the same feature to be detected in different parts of the input space).\r\n\r\nImage Source: [https://arxiv.org/pdf/1603.07285.pdf](https://arxiv.org/pdf/1603.07285.pdf)", "full_name": "Convolution", "introduced_year": 1980, "main_collection": { "area": "Computer Vision", "description": "**Convolutions** are a type of operation that can be used to learn representations from images. 
They involve a learnable kernel sliding over the image and performing element-wise multiplication with the input. The specification allows for parameter sharing and translation invariance. Below you can find a continuously updating list of convolutions.", "name": "Convolutions", "parent": "Image Feature Extractors" }, "name": "Convolution", "source_title": null, "source_url": null }, { "code_snippet_url": "https://github.com/HRNet/HRNet-Image-Classification/blob/8f158719e821836e21e6cba99a3241a12a13bc41/lib/models/cls_hrnet.py#L254", "description": "**HRNet**, or **High-Resolution Net**, is a general purpose convolutional neural network for tasks like semantic segmentation, object detection and image classification. It is able to maintain high resolution representations through the whole process. We start from a high-resolution [convolution](https://paperswithcode.com/method/convolution) stream, gradually add high-to-low resolution convolution streams one by one, and connect the multi-resolution streams in parallel. The resulting network consists of several ($4$ in the paper) stages and\r\nthe $n$th stage contains $n$ streams corresponding to $n$ resolutions. The authors conduct repeated multi-resolution fusions by exchanging the information across the parallel streams over and over.", "full_name": "HRNet", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "If you have questions or want to make special travel arrangements, you can make them online or call ☎️+1-801-(855)-(5905)or +1-804-853-9001✅. For hearing or speech impaired assistance dial 711 to be connected through the National Relay Service.", "name": "Convolutional Neural Networks", "parent": "Image Models" }, "name": "HRNet", "source_title": "Deep High-Resolution Representation Learning for Visual Recognition", "source_url": "https://arxiv.org/abs/1908.07919v2" } ]
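Technique (ii) above approximates the temporal variation of joint angles with a high-order Fourier series. A minimal least-squares version of that idea is sketched below on a synthetic joint-angle trajectory; the order, period, and data are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def fit_fourier_series(t, theta, period, order=4):
    """Least-squares fit of theta(t) ~ a0 + sum_k [a_k cos(2*pi*k*t/T) + b_k sin(2*pi*k*t/T)]."""
    cols = [np.ones_like(t)]
    for k in range(1, order + 1):
        cols.append(np.cos(2 * np.pi * k * t / period))
        cols.append(np.sin(2 * np.pi * k * t / period))
    A = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, theta, rcond=None)
    return coeffs, A @ coeffs   # coefficients and the smoothed trajectory

# Noisy synthetic knee-angle trajectory standing in for raw HPE output.
t = np.linspace(0, 2.0, 200)
theta = 30 + 20 * np.sin(2 * np.pi * t / 2.0) + np.random.default_rng(0).normal(0, 2, t.size)
coeffs, smoothed = fit_fourier_series(t, theta, period=2.0)
print(np.abs(theta - smoothed).mean())
```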
https://paperswithcode.com/paper/fpc-net-revisiting-superpoint-with-descriptor
2507.10770
null
null
FPC-Net: Revisiting SuperPoint with Descriptor-Free Keypoint Detection via Feature Pyramids and Consistency-Based Implicit Matching
The extraction and matching of interest points are fundamental to many geometric computer vision tasks. Traditionally, matching is performed by assigning descriptors to interest points and identifying correspondences based on descriptor similarity. This work introduces a technique where interest points are inherently associated during detection, eliminating the need for computing, storing, transmitting, or matching descriptors. Although the matching accuracy is marginally lower than that of conventional approaches, our method completely eliminates the need for descriptors, leading to a drastic reduction in memory usage for localization systems. We assess its effectiveness by comparing it against both classical handcrafted methods and modern learned approaches.
null
https://arxiv.org/abs/2507.10770v1
https://arxiv.org/pdf/2507.10770v1.pdf
null
[ "Ionuţ Grigore", "Călin-Adrian Popa", "Claudiu Leoveanu-Condrei" ]
[ "Keypoint Detection" ]
2025-07-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/4d-animal-freely-reconstructing-animatable-3d
2507.10437
null
null
4D-Animal: Freely Reconstructing Animatable 3D Animals from Videos
Existing methods for reconstructing animatable 3D animals from videos typically rely on sparse semantic keypoints to fit parametric models. However, obtaining such keypoints is labor-intensive, and keypoint detectors trained on limited animal data are often unreliable. To address this, we propose 4D-Animal, a novel framework that reconstructs animatable 3D animals from videos without requiring sparse keypoint annotations. Our approach introduces a dense feature network that maps 2D representations to SMAL parameters, enhancing both the efficiency and stability of the fitting process. Furthermore, we develop a hierarchical alignment strategy that integrates silhouette, part-level, pixel-level, and temporal cues from pre-trained 2D visual models to produce accurate and temporally coherent reconstructions across frames. Extensive experiments demonstrate that 4D-Animal outperforms both model-based and model-free baselines. Moreover, the high-quality 3D assets generated by our method can benefit other 3D tasks, underscoring its potential for large-scale applications. The code is released at https://github.com/zhongshsh/4D-Animal.
null
https://arxiv.org/abs/2507.10437v1
https://arxiv.org/pdf/2507.10437v1.pdf
null
[ "Shanshan Zhong", "Jiawei Peng", "Zehan Zheng", "Zhongzhan Huang", "Wufei Ma", "Guofeng Zhang", "Qihao Liu", "Alan Yuille", "Jieneng Chen" ]
[]
2025-07-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/natural-language-based-assessment-of-l2-oral
2507.10200
null
null
Natural Language-based Assessment of L2 Oral Proficiency using LLMs
Natural language-based assessment (NLA) is an approach to second language assessment that uses instructions - expressed in the form of can-do descriptors - originally intended for human examiners, aiming to determine whether large language models (LLMs) can interpret and apply them in ways comparable to human assessment. In this work, we explore the use of such descriptors with an open-source LLM, Qwen 2.5 72B, to assess responses from the publicly available S&I Corpus in a zero-shot setting. Our results show that this approach - relying solely on textual information - achieves competitive performance: while it does not outperform state-of-the-art speech LLMs fine-tuned for the task, it surpasses a BERT-based model trained specifically for this purpose. NLA proves particularly effective in mismatched task settings, is generalisable to other data types and languages, and offers greater interpretability, as it is grounded in clearly explainable, widely applicable language descriptors.
null
https://arxiv.org/abs/2507.10200v1
https://arxiv.org/pdf/2507.10200v1.pdf
null
[ "Stefano Bannò", "Rao Ma", "Mengjie Qian", "Siyuan Tang", "Kate Knill", "Mark Gales" ]
[]
2025-07-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/lightweight-model-for-poultry-disease
2507.10056
null
null
Lightweight Model for Poultry Disease Detection from Fecal Images Using Multi-Color Space Feature Optimization and Machine Learning
Poultry farming is a vital component of the global food supply chain, yet it remains highly vulnerable to infectious diseases such as coccidiosis, salmonellosis, and Newcastle disease. This study proposes a lightweight machine learning-based approach to detect these diseases by analyzing poultry fecal images. We utilize multi-color space feature extraction (RGB, HSV, LAB) and explore a wide range of color, texture, and shape-based descriptors, including color histograms, local binary patterns (LBP), wavelet transforms, and edge detectors. Through a systematic ablation study and dimensionality reduction using PCA and XGBoost feature selection, we identify a compact global feature set that balances accuracy and computational efficiency. An artificial neural network (ANN) classifier trained on these features achieved 95.85% accuracy while requiring no GPU and only 638 seconds of execution time in Google Colab. Compared to deep learning models such as Xception and MobileNetV3, our proposed model offers comparable accuracy with drastically lower resource usage. This work demonstrates a cost-effective, interpretable, and scalable alternative to deep learning for real-time poultry disease detection in low-resource agricultural settings.
null
https://arxiv.org/abs/2507.10056v1
https://arxiv.org/pdf/2507.10056v1.pdf
null
[ "A. K. M. Shoriful Islam", "Md. Rakib Hassan", "Macbah Uddin", "Md. Shahidur Rahman" ]
[ "Computational Efficiency", "Dimensionality Reduction", "feature selection", "GPU" ]
2025-07-14T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "**Principle Components Analysis (PCA)** is an unsupervised method primary used for dimensionality reduction within machine learning. PCA is calculated via a singular value decomposition (SVD) of the design matrix, or alternatively, by calculating the covariance matrix of the data and performing eigenvalue decomposition on the covariance matrix. The results of PCA provide a low-dimensional picture of the structure of the data and the leading (uncorrelated) latent factors determining variation in the data.\r\n\r\nImage Source: [Wikipedia](https://en.wikipedia.org/wiki/Principal_component_analysis#/media/File:GaussianScatterPCA.svg)", "full_name": "Principal Components Analysis", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Dimensionality Reduction** methods transform data from a high-dimensional space into a low-dimensional space so that the low-dimensional space retains the most important properties of the original data. Below you can find a continuously updating list of dimensionality reduction methods.", "name": "Dimensionality Reduction", "parent": null }, "name": "PCA", "source_title": null, "source_url": null }, { "code_snippet_url": null, "description": "Dynamic Sparse Training method where weight mask is updated randomly periodically", "full_name": "Sparse Evolutionary Training", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Sparsity", "parent": null }, "name": "SET", "source_title": "Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science", "source_url": "http://arxiv.org/abs/1707.04780v2" } ]
https://paperswithcode.com/paper/vst-pose-a-velocity-integrated-spatiotem
2507.09672
null
null
VST-Pose: A Velocity-Integrated Spatiotemporal Attention Network for Human WiFi Pose Estimation
WiFi-based human pose estimation has emerged as a promising non-visual alternative due to its penetrability and privacy advantages. This paper presents VST-Pose, a novel deep learning framework for accurate and continuous pose estimation using WiFi channel state information. The proposed method introduces ViSTA-Former, a spatiotemporal attention backbone that adopts a dual-stream architecture to separately capture temporal dependencies and structural relationships among body joints. To enhance sensitivity to subtle human motions, a velocity modeling branch is integrated into the framework, which learns short-term keypoint displacement patterns and improves fine-grained motion representation. We construct a 2D pose dataset specifically designed for smart home care scenarios and demonstrate that our method achieves 92.2% accuracy on the PCK@50 metric, outperforming existing methods by 8.3% in PCK@50 on the self-collected dataset. Further evaluation on the public MMFi dataset confirms the model's robustness and effectiveness in 3D pose estimation tasks. The proposed system provides a reliable and privacy-aware solution for continuous human motion analysis in indoor environments. Our code is available at https://github.com/CarmenQing/VST-Pose.
null
https://arxiv.org/abs/2507.09672v1
https://arxiv.org/pdf/2507.09672v1.pdf
null
[ "Xinyu Zhang", "Zhonghao Ye", "Jingwei Zhang", "Xiang Tian", "Zhisheng Liang", "Shipeng Yu" ]
[ "3D Pose Estimation", "Pose Estimation" ]
2025-07-13T00:00:00
null
null
null
null
[]
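PCK@50, the metric reported above, counts a predicted keypoint as correct when it falls within 50% of a reference length from the ground truth. The helper below shows the computation; the choice of reference length (e.g., torso length or bounding-box size) is dataset-specific and assumed here.

```python
import numpy as np

def pck(pred: np.ndarray, gt: np.ndarray, ref_len: float, alpha: float = 0.5) -> float:
    """Percentage of Correct Keypoints at threshold alpha * ref_len.

    pred, gt: arrays of shape (n_keypoints, 2); ref_len: normalizing length,
    e.g. a torso length or bounding-box diagonal (dataset-specific convention).
    """
    dists = np.linalg.norm(pred - gt, axis=1)
    return float((dists <= alpha * ref_len).mean())

gt = np.array([[100, 100], [150, 100], [125, 160], [125, 220]], dtype=float)
pred = gt + np.array([[3, -2], [40, 0], [1, 4], [-2, 1]], dtype=float)
print(pck(pred, gt, ref_len=60.0, alpha=0.5))  # 3 of 4 keypoints within 30 px -> 0.75
```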
https://paperswithcode.com/paper/posellm-enhancing-language-guided-human-pose
2507.09139
null
null
PoseLLM: Enhancing Language-Guided Human Pose Estimation with MLP Alignment
Human pose estimation traditionally relies on architectures that encode keypoint priors, limiting their generalization to novel poses or unseen keypoints. Recent language-guided approaches like LocLLM reformulate keypoint localization as a vision-language task, enabling zero-shot generalization through textual descriptions. However, LocLLM's linear projector fails to capture complex spatial-textual interactions critical for high-precision localization. To address this, we propose PoseLLM, the first Large Language Model (LLM)-based pose estimation framework that replaces the linear projector with a nonlinear MLP vision-language connector. This lightweight two-layer MLP with GELU activation enables hierarchical cross-modal feature transformation, enhancing the fusion of visual patches and textual keypoint descriptions. Trained exclusively on COCO data, PoseLLM achieves 77.8 AP on the COCO validation set, outperforming LocLLM by +0.4 AP, while maintaining strong zero-shot generalization on Human-Art and MPII. Our work demonstrates that a simple yet powerful nonlinear connector significantly boosts localization accuracy without sacrificing generalization, advancing the state-of-the-art in language-guided pose estimation. Code is available at https://github.com/Ody-trek/PoseLLM.
null
https://arxiv.org/abs/2507.09139v1
https://arxiv.org/pdf/2507.09139v1.pdf
null
[ "Dewen Zhang", "Tahir Hussain", "Wangpeng An", "Hayaru Shouno" ]
[ "Large Language Model", "Pose Estimation", "Zero-shot Generalization" ]
2025-07-12T00:00:00
null
null
null
null
[ { "code_snippet_url": "", "description": "“How do I get a full refund from Expedia?\r\nHow do I get a full refund from Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Quick Help & Exclusive Travel Deals!Have a question about your booking? Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to get live, expert support and unlock exclusive best deal discounts on flights, hotels, and vacation packages. Get clear answers fast and access limited-time travel offers that make your next trip easier, cheaper, and stress-free. Don’t wait—call today and save!\r\n\r\n\r\n“How do I get a full refund from Expedia?\r\nHow do I get a full refund from Expedia? – Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** for Quick Help & Exclusive Travel Deals!Have a question about your booking? Call **☎️ +1-(888) 829 (0881) or +1-805-330-4056 or +1-805-330-4056** now to get live, expert support and unlock exclusive best deal discounts on flights, hotels, and vacation packages. Get clear answers fast and access limited-time travel offers that make your next trip easier, cheaper, and stress-free. Don’t wait—call today and save!", "full_name": "Refunds@Expedia|||How do I get a full refund from Expedia?", "introduced_year": 2000, "main_collection": { "area": "General", "description": "How do I escalate a problem with Expedia?\r\nTo escalate a problem with Expedia, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask to speak with a manager. Explain your issue in detail and inquire about compensation. Expedia may provide exclusive discount codes, travel credits, or special offers to help resolve your problem and improve your experience.\r\nIs Expedia actually fully refundable?\r\nExpedia isn’t always fully refundable—refunds depend on the hotel, airline, or rental provider’s policy call +1(888) (829) (0881) OR +1(805) (330) (4056). Look for “Free Cancellation” before booking to ensure flexibility. For peace of mind and potential savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about current discount codes or refund-friendly deals.\r\n\r\nWhat is the refundable option on expedia?\r\nThe refundable option on Expedia allows you to cancel eligible bookings call +1(888) (829) (0881) OR +1(805) (330) (4056) without penalty. Look for listings marked “Free Cancellation” or “Fully Refundable.” To maximize flexibility, choose these options during checkout. For additional savings, call +1(888) (829) (0881) OR +1(805) (330) (4056) and ask about exclusive promo codes or travel discounts available today.", "name": "Activation Functions", "parent": null }, "name": "Refunds@Expedia|||How do I get a full refund from Expedia?", "source_title": "Gaussian Error Linear Units (GELUs)", "source_url": "https://arxiv.org/abs/1606.08415v5" } ]
https://paperswithcode.com/paper/an-efficient-approach-for-muscle-segmentation
2507.08690
null
null
An Efficient Approach for Muscle Segmentation and 3D Reconstruction Using Keypoint Tracking in MRI Scan
Magnetic resonance imaging (MRI) enables non-invasive, high-resolution analysis of muscle structures. However, automated segmentation remains limited by high computational costs, reliance on large training datasets, and reduced accuracy in segmenting smaller muscles. Convolutional neural network (CNN)-based methods, while powerful, often suffer from substantial computational overhead, limited generalizability, and poor interpretability across diverse populations. This study proposes a training-free segmentation approach based on keypoint tracking, which integrates keypoint selection with Lucas-Kanade optical flow. The proposed method achieves a mean Dice similarity coefficient (DSC) ranging from 0.6 to 0.7, depending on the keypoint selection strategy, performing comparably to state-of-the-art CNN-based models while substantially reducing computational demands and enhancing interpretability. This scalable framework presents a robust and explainable alternative for muscle segmentation in clinical and research applications.
null
https://arxiv.org/abs/2507.08690v1
https://arxiv.org/pdf/2507.08690v1.pdf
null
[ "Mengyuan Liu", "Jeongkyu Lee" ]
[ "3D Reconstruction", "Optical Flow Estimation", "Segmentation" ]
2025-07-11T00:00:00
null
null
null
null
[]
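The training-free segmentation above propagates selected keypoints with Lucas-Kanade optical flow. The snippet below shows only that tracking step, using OpenCV's pyramidal Lucas-Kanade on stand-in frames; the paper's keypoint selection strategies and mask reconstruction are not included.

```python
import numpy as np
import cv2

# Two consecutive grayscale frames (random stand-ins for MRI slices).
rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (256, 256), dtype=np.uint8)
curr = np.roll(prev, shift=2, axis=1)  # simulate a small horizontal shift

# Keypoints selected on the muscle boundary in the previous frame (hypothetical).
prev_pts = np.array([[[60.0, 80.0]], [[120.0, 90.0]], [[180.0, 150.0]]], dtype=np.float32)

# Pyramidal Lucas-Kanade optical flow tracks each keypoint into the next frame.
next_pts, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, prev_pts, None,
                                                 winSize=(21, 21), maxLevel=3)
tracked = next_pts[status.ravel() == 1]
print(tracked.reshape(-1, 2))
```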
https://paperswithcode.com/paper/radiomicsretrieval-a-customizable-framework
2507.08546
null
null
RadiomicsRetrieval: A Customizable Framework for Medical Image Retrieval Using Radiomics Features
Medical image retrieval is a valuable field for supporting clinical decision-making, yet current methods primarily support 2D images and require fully annotated queries, limiting clinical flexibility. To address this, we propose RadiomicsRetrieval, a 3D content-based retrieval framework bridging handcrafted radiomics descriptors with deep learning-based embeddings at the tumor level. Unlike existing 2D approaches, RadiomicsRetrieval fully exploits volumetric data to leverage richer spatial context in medical images. We employ a promptable segmentation model (e.g., SAM) to derive tumor-specific image embeddings, which are aligned with radiomics features extracted from the same tumor via contrastive learning. These representations are further enriched by anatomical positional embedding (APE). As a result, RadiomicsRetrieval enables flexible querying based on shape, location, or partial feature sets. Extensive experiments on both lung CT and brain MRI public datasets demonstrate that radiomics features significantly enhance retrieval specificity, while APE provides global anatomical context essential for location-based searches. Notably, our framework requires only minimal user prompts (e.g., a single point), minimizing segmentation overhead and supporting diverse clinical scenarios. The capability to query using either image embeddings or selected radiomics attributes highlights its adaptability, potentially benefiting diagnosis, treatment planning, and research on large-scale medical imaging repositories. Our code is available at https://github.com/nainye/RadiomicsRetrieval.
null
https://arxiv.org/abs/2507.08546v1
https://arxiv.org/pdf/2507.08546v1.pdf
null
[ "Inye Na", "Nejung Rue", "Jiwon Chung", "HyunJin Park" ]
[ "Contrastive Learning", "Image Retrieval", "Medical Image Retrieval", "Retrieval", "Specificity" ]
2025-07-11T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/amplyze-a-deep-learning-model-for-predicting
2507.08162
null
null
AmpLyze: A Deep Learning Model for Predicting the Hemolytic Concentration
Red-blood-cell lysis (HC50) is the principal safety barrier for antimicrobial-peptide (AMP) therapeutics, yet existing models only say "toxic" or "non-toxic." AmpLyze closes this gap by predicting the actual HC50 value from sequence alone and explaining the residues that drive toxicity. The model couples residue-level ProtT5/ESM2 embeddings with sequence-level descriptors in dual local and global branches, aligned by a cross-attention module and trained with log-cosh loss for robustness to assay noise. The optimal AmpLyze model reaches a PCC of 0.756 and an MSE of 0.987, outperforming classical regressors and the state-of-the-art. Ablations confirm that both branches are essential, and cross-attention adds a further 1% PCC and 3% MSE improvement. Expected-Gradients attributions reveal known toxicity hotspots and suggest safer substitutions. By turning hemolysis assessment into a quantitative, sequence-based, and interpretable prediction, AmpLyze facilitates AMP design and offers a practical tool for early-stage toxicity screening.
null
https://arxiv.org/abs/2507.08162v1
https://arxiv.org/pdf/2507.08162v1.pdf
null
[ "Peng Qiu", "Hanqi Feng", "Barnabas Poczos" ]
[]
2025-07-10T00:00:00
null
null
null
null
[ { "code_snippet_url": "https://github.com/hiyouga/AMP-Regularizer", "description": "Based on the understanding that the flat local minima of the empirical risk cause the model to generalize better. Adversarial Model Perturbation (AMP) improves generalization via minimizing the **AMP loss**, which is obtained from the empirical risk by applying the **worst** norm-bounded perturbation on each point in the parameter space.", "full_name": "Adversarial Model Perturbation", "introduced_year": 2000, "main_collection": { "area": "General", "description": "", "name": "Optimization", "parent": null }, "name": "AMP", "source_title": "Regularizing Neural Networks via Adversarial Model Perturbation", "source_url": "https://arxiv.org/abs/2010.04925v4" } ]
https://paperswithcode.com/paper/doodle-your-keypoints-sketch-based-few-shot
2507.07994
null
null
Doodle Your Keypoints: Sketch-Based Few-Shot Keypoint Detection
Keypoint detection, integral to modern machine perception, faces challenges in few-shot learning, particularly when source data from the same distribution as the query is unavailable. This gap is addressed by leveraging sketches, a popular form of human expression, providing a source-free alternative. However, challenges arise in mastering cross-modal embeddings and handling user-specific sketch styles. Our proposed framework overcomes these hurdles with a prototypical setup, combined with a grid-based locator and prototypical domain adaptation. We also demonstrate success in few-shot convergence across novel keypoints and classes through extensive experiments.
null
https://arxiv.org/abs/2507.07994v2
https://arxiv.org/pdf/2507.07994v2.pdf
null
[ "Subhajit Maity", "Ayan Kumar Bhunia", "Subhadeep Koley", "Pinaki Nath Chowdhury", "Aneeshan Sain", "Yi-Zhe Song" ]
[ "Domain Adaptation", "Few-Shot Learning", "Keypoint Detection" ]
2025-07-10T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/attend-and-refine-interactive-keypoint
2507.07670
null
null
Attend-and-Refine: Interactive keypoint estimation and quantitative cervical vertebrae analysis for bone age assessment
In pediatric orthodontics, accurate estimation of growth potential is essential for developing effective treatment strategies. Our research aims to predict this potential by identifying the growth peak and analyzing cervical vertebra morphology solely through lateral cephalometric radiographs. We accomplish this by comprehensively analyzing cervical vertebral maturation (CVM) features from these radiographs. This methodology provides clinicians with a reliable and efficient tool to determine the optimal timings for orthodontic interventions, ultimately enhancing patient outcomes. A crucial aspect of this approach is the meticulous annotation of keypoints on the cervical vertebrae, a task often challenged by its labor-intensive nature. To mitigate this, we introduce Attend-and-Refine Network (ARNet), a user-interactive, deep learning-based model designed to streamline the annotation process. ARNet features Interaction-guided recalibration network, which adaptively recalibrates image features in response to user feedback, coupled with a morphology-aware loss function that preserves the structural consistency of keypoints. This novel approach substantially reduces manual effort in keypoint identification, thereby enhancing the efficiency and accuracy of the process. Extensively validated across various datasets, ARNet demonstrates remarkable performance and exhibits wide-ranging applicability in medical imaging. In conclusion, our research offers an effective AI-assisted diagnostic tool for assessing growth potential in pediatric orthodontics, marking a significant advancement in the field.
null
https://arxiv.org/abs/2507.07670v1
https://arxiv.org/pdf/2507.07670v1.pdf
null
[ "Jinhee Kim", "Taesung Kim", "Taewoo Kim", "Dong-Wook Kim", "Byungduk Ahn", "Yoon-Ji Kim", "In-Seok Song", "Jaegul Choo" ]
[ "Diagnostic", "Keypoint Estimation" ]
2025-07-10T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/keyre-id-keypoint-guided-person-re
2507.07393
null
null
KeyRe-ID: Keypoint-Guided Person Re-Identification using Part-Aware Representation in Videos
We propose KeyRe-ID, a keypoint-guided video-based person re-identification framework consisting of global and local branches that leverage human keypoints for enhanced spatiotemporal representation learning. The global branch captures holistic identity semantics through Transformer-based temporal aggregation, while the local branch dynamically segments body regions based on keypoints to generate fine-grained, part-aware features. Extensive experiments on MARS and iLIDS-VID benchmarks demonstrate state-of-the-art performance, achieving 91.73% mAP and 97.32% Rank-1 accuracy on MARS, and 96.00% Rank-1 and 100.0% Rank-5 accuracy on iLIDS-VID. The code for this work will be publicly available on GitHub upon publication.
null
https://arxiv.org/abs/2507.07393v3
https://arxiv.org/pdf/2507.07393v3.pdf
null
[ "Jinseong Kim", "Jeonghoon Song", "Gyeongseon Baek", "Byeongjoon Noh" ]
[ "Person Re-Identification", "Representation Learning", "Video-Based Person Re-Identification" ]
2025-07-10T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/reading-a-ruler-in-the-wild
2507.07077
null
null
Reading a Ruler in the Wild
Accurately converting pixel measurements into absolute real-world dimensions remains a fundamental challenge in computer vision and limits progress in key applications such as biomedicine, forensics, nutritional analysis, and e-commerce. We introduce RulerNet, a deep learning framework that robustly infers scale "in the wild" by reformulating ruler reading as a unified keypoint-detection problem and by representing the ruler with geometric-progression parameters that are invariant to perspective transformations. Unlike traditional methods that rely on handcrafted thresholds or rigid, ruler-specific pipelines, RulerNet directly localizes centimeter marks using a distortion-invariant annotation and training strategy, enabling strong generalization across diverse ruler types and imaging conditions while mitigating data scarcity. We also present a scalable synthetic-data pipeline that combines graphics-based ruler generation with ControlNet to add photorealistic context, greatly increasing training diversity and improving performance. To further enhance robustness and efficiency, we propose DeepGP, a lightweight feed-forward network that regresses geometric-progression parameters from noisy marks and eliminates iterative optimization, enabling real-time scale estimation on mobile or edge devices. Experiments show that RulerNet delivers accurate, consistent, and efficient scale estimates under challenging real-world conditions. These results underscore its utility as a generalizable measurement tool and its potential for integration with other vision components for automated, scale-aware analysis in high-impact domains. A live demo is available at https://huggingface.co/spaces/ymp5078/RulerNet-Demo.
null
https://arxiv.org/abs/2507.07077v1
https://arxiv.org/pdf/2507.07077v1.pdf
null
[ "Yimu Pan", "Manas Mehta", "Gwen Sincerbeaux", "Jeffery A. Goldstein", "Alison D. Gernand", "James Z. Wang" ]
[ "Keypoint Detection" ]
2025-07-09T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/mk-pose-category-level-object-pose-estimation
2507.06662
null
null
MK-Pose: Category-Level Object Pose Estimation via Multimodal-Based Keypoint Learning
Category-level object pose estimation, which predicts the pose of objects within a known category without prior knowledge of individual instances, is essential in applications like warehouse automation and manufacturing. Existing methods relying on RGB images or point cloud data often struggle with object occlusion and generalization across different instances and categories. This paper proposes a multimodal-based keypoint learning framework (MK-Pose) that integrates RGB images, point clouds, and category-level textual descriptions. The model uses a self-supervised keypoint detection module enhanced with attention-based query generation, soft heatmap matching and graph-based relational modeling. Additionally, a graph-enhanced feature fusion module is designed to integrate local geometric information and global context. MK-Pose is evaluated on CAMERA25 and REAL275 dataset, and is further tested for cross-dataset capability on HouseCat6D dataset. The results demonstrate that MK-Pose outperforms existing state-of-the-art methods in both IoU and average precision without shape priors. Codes will be released at \href{https://github.com/yangyifanYYF/MK-Pose}{https://github.com/yangyifanYYF/MK-Pose}.
null
https://arxiv.org/abs/2507.06662v1
https://arxiv.org/pdf/2507.06662v1.pdf
null
[ "Yifan Yang", "Peili Song", "Enfan Lan", "Dong Liu", "Jingtai Liu" ]
[ "Keypoint Detection", "Pose Estimation" ]
2025-07-09T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Heatmap", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.", "name": "Output Functions", "parent": null }, "name": "Heatmap", "source_title": "Joint Training of a Convolutional Network and a Graphical Model for Human Pose Estimation", "source_url": "http://arxiv.org/abs/1406.2984v2" } ]
https://paperswithcode.com/paper/learning-from-sparse-point-labels-for-dense
2507.06643
null
null
Learning from Sparse Point Labels for Dense Carcinosis Localization in Advanced Ovarian Cancer Assessment
Learning from sparse labels is a challenge commonplace in the medical domain. This is due to numerous factors, such as annotation cost, and is especially true for newly introduced tasks. When dense pixel-level annotations are needed, this becomes even more unfeasible. However, being able to learn from just a few annotations at the pixel-level, while extremely difficult and underutilized, can drive progress in studies where perfect annotations are not immediately available. This work tackles the challenge of learning the dense prediction task of keypoint localization from a few point annotations in the context of 2d carcinosis keypoint localization from laparoscopic video frames for diagnostic planning of advanced ovarian cancer patients. To enable this, we formulate the problem as a sparse heatmap regression from a few point annotations per image and propose a new loss function, called Crag and Tail loss, for efficient learning. Our proposed loss function effectively leverages positive sparse labels while minimizing the impact of false negatives or missed annotations. Through an extensive ablation study, we demonstrate the effectiveness of our approach in achieving accurate dense localization of carcinosis keypoints, highlighting its potential to advance research in scenarios where dense annotations are challenging to obtain.
null
https://arxiv.org/abs/2507.06643v1
https://arxiv.org/pdf/2507.06643v1.pdf
null
[ "Farahdiba Zarin", "Riccardo Oliva", "Vinkle Srivastav", "Armine Vardazaryan", "Andrea Rosati", "Alice Zampolini Faustini", "Giovanni Scambia", "Anna Fagotti", "Pietro Mascagni", "Nicolas Padoy" ]
[ "Diagnostic" ]
2025-07-09T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "", "full_name": "Heatmap", "introduced_year": 2000, "main_collection": { "area": "General", "description": "**Output functions** are layers used towards the end of a network to transform to the desired form for a loss function. For example, the softmax relies on logits to construct a conditional probability. Below you can find a continuously updating list of output functions.", "name": "Output Functions", "parent": null }, "name": "Heatmap", "source_title": "Joint Training of a Convolutional Network and a Graphical Model for Human Pose Estimation", "source_url": "http://arxiv.org/abs/1406.2984v2" } ]
https://paperswithcode.com/paper/ilnet-trajectory-prediction-with-inverse
2507.06531
null
null
ILNet: Trajectory Prediction with Inverse Learning Attention for Enhancing Intention Capture
Trajectory prediction for multi-agent interaction scenarios is a crucial challenge. Most advanced methods model agent interactions with efficiently factorized attention based on the temporal and agent axes. However, this static and forward modeling lacks explicit interactive spatio-temporal coordination, capturing only obvious and immediate behavioral intentions. Alternatively, the modern trajectory prediction framework refines successive predictions with a fixed-anchor selection strategy, which is difficult to adapt to different future environments. It is acknowledged that human drivers dynamically adjust initial driving decisions based on further assumptions about the intentions of surrounding vehicles. Motivated by human driving behaviors, this paper proposes ILNet, a multi-agent trajectory prediction method with Inverse Learning (IL) attention and a Dynamic Anchor Selection (DAS) module. IL attention employs an inverse learning paradigm to model interactions at neighboring moments, introducing proposed intentions to dynamically encode the spatio-temporal coordination of interactions, thereby enhancing the model's ability to capture complex interaction patterns. Then, the learnable DAS module is proposed to extract multiple trajectory change keypoints as anchors in parallel with almost no increase in parameters. Experimental results show that ILNet achieves state-of-the-art performance on the INTERACTION and Argoverse motion forecasting datasets. In particular, in challenging interaction scenarios, ILNet achieves higher accuracy and more multimodal trajectory distributions with fewer parameters. Our code is available at https://github.com/mjZeng11/ILNet.
null
https://arxiv.org/abs/2507.06531v1
https://arxiv.org/pdf/2507.06531v1.pdf
null
[ "Mingjin Zeng", "Nan Ouyang", "Wenkang Wan", "Lei Ao", "Qing Cai", "Kai Sheng" ]
[ "Motion Forecasting", "Trajectory Prediction" ]
2025-07-09T00:00:00
null
null
null
null
[]
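The ILNet abstract contrasts its inverse learning attention with standard attention factorized over the temporal and agent axes. For readers unfamiliar with the latter, the sketch below applies plain scaled dot-product self-attention along one axis at a time; it is a generic baseline, not ILNet's mechanism, and the tensor shapes are assumptions.

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention over the last two axes of batched inputs."""
    d = q.shape[-1]
    scores = q @ np.swapaxes(k, -1, -2) / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

# x: (agents, timesteps, features). Factorized self-attention mixes information across
# time for each agent, then across agents for each timestep.
x = np.random.default_rng(0).standard_normal((4, 20, 16))
temporal = attention(x, x, x)
xt = np.swapaxes(x, 0, 1)                       # (timesteps, agents, features)
social = np.swapaxes(attention(xt, xt, xt), 0, 1)
print(temporal.shape, social.shape)
```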
https://paperswithcode.com/paper/airllm-diffusion-policy-based-adaptive-lora
2507.11515
null
null
AirLLM: Diffusion Policy-based Adaptive LoRA for Remote Fine-Tuning of LLM over the Air
Operating Large Language Models (LLMs) on edge devices is increasingly challenged by limited communication bandwidth and constrained computational and memory resources. Thus, cloud-assisted remote fine-tuning becomes indispensable. Nevertheless, existing Low-Rank Adaptation (LoRA) approaches typically employ fixed or heuristic rank configurations, and the subsequent over-the-air transmission of all LoRA parameters can be rather inefficient. To address this limitation, we develop AirLLM, a hierarchical diffusion policy framework for communication-aware LoRA adaptation. Specifically, AirLLM models the rank configuration as a structured action vector that spans all LoRA-inserted projections. To solve the underlying high-dimensional sequential decision-making problem, a Proximal Policy Optimization (PPO) agent generates coarse-grained decisions by jointly observing wireless states and linguistic complexity, which are then refined via Denoising Diffusion Implicit Models (DDIM) to produce high-resolution, task- and channel-adaptive rank vectors. The two modules are optimized alternately, with the DDIM trained under the Classifier-Free Guidance (CFG) paradigm to maintain alignment with PPO rewards. Experiments under varying signal-to-noise ratios demonstrate that AirLLM consistently enhances fine-tuning performance while significantly reducing transmission costs, highlighting the effectiveness of reinforcement-driven, diffusion-refined rank adaptation for scalable and efficient remote fine-tuning over the air.
null
https://arxiv.org/abs/2507.11515v1
https://arxiv.org/pdf/2507.11515v1.pdf
null
[ "Shiyi Yang", "Xiaoxue Yu", "Rongpeng Li", "Jianhang Zhu", "Zhifeng Zhao", "Honggang Zhang" ]
[ "Denoising", "Sequential Decision Making" ]
2025-07-15T00:00:00
null
null
null
null
[ { "code_snippet_url": null, "description": "**Proximal Policy Optimization**, or **PPO**, is a policy gradient method for reinforcement learning. The motivation was to have an algorithm with the data efficiency and reliable performance of [TRPO](https://paperswithcode.com/method/trpo), while using only first-order optimization. \r\n\r\nLet $r\\_{t}\\left(\\theta\\right)$ denote the probability ratio $r\\_{t}\\left(\\theta\\right) = \\frac{\\pi\\_{\\theta}\\left(a\\_{t}\\mid{s\\_{t}}\\right)}{\\pi\\_{\\theta\\_{old}}\\left(a\\_{t}\\mid{s\\_{t}}\\right)}$, so $r\\left(\\theta\\_{old}\\right) = 1$. TRPO maximizes a “surrogate” objective:\r\n\r\n$$ L^{\\text{CPI}}\\left({\\theta}\\right) = \\hat{\\mathbb{E}}\\_{t}\\left[\\frac{\\pi\\_{\\theta}\\left(a\\_{t}\\mid{s\\_{t}}\\right)}{\\pi\\_{\\theta\\_{old}}\\left(a\\_{t}\\mid{s\\_{t}}\\right)})\\hat{A}\\_{t}\\right] = \\hat{\\mathbb{E}}\\_{t}\\left[r\\_{t}\\left(\\theta\\right)\\hat{A}\\_{t}\\right] $$\r\n\r\nWhere $CPI$ refers to a conservative policy iteration. Without a constraint, maximization of $L^{CPI}$ would lead to an excessively large policy update; hence, we PPO modifies the objective, to penalize changes to the policy that move $r\\_{t}\\left(\\theta\\right)$ away from 1:\r\n\r\n$$ J^{\\text{CLIP}}\\left({\\theta}\\right) = \\hat{\\mathbb{E}}\\_{t}\\left[\\min\\left(r\\_{t}\\left(\\theta\\right)\\hat{A}\\_{t}, \\text{clip}\\left(r\\_{t}\\left(\\theta\\right), 1-\\epsilon, 1+\\epsilon\\right)\\hat{A}\\_{t}\\right)\\right] $$\r\n\r\nwhere $\\epsilon$ is a hyperparameter, say, $\\epsilon = 0.2$. The motivation for this objective is as follows. The first term inside the min is $L^{CPI}$. The second term, $\\text{clip}\\left(r\\_{t}\\left(\\theta\\right), 1-\\epsilon, 1+\\epsilon\\right)\\hat{A}\\_{t}$ modifies the surrogate\r\nobjective by clipping the probability ratio, which removes the incentive for moving $r\\_{t}$ outside of the interval $\\left[1 − \\epsilon, 1 + \\epsilon\\right]$. Finally, we take the minimum of the clipped and unclipped objective, so the final objective is a lower bound (i.e., a pessimistic bound) on the unclipped objective. With this scheme, we only ignore the change in probability ratio when it would make the objective improve, and we include it when it makes the objective worse. \r\n\r\nOne detail to note is that when we apply PPO for a network where we have shared parameters for actor and critic functions, we typically add to the objective function an error term on value estimation and an entropy term to encourage exploration.", "full_name": "Proximal Policy Optimization", "introduced_year": 2000, "main_collection": { "area": "Reinforcement Learning", "description": "**Policy Gradient Methods** try to optimize the policy function directly in reinforcement learning. This contrasts with, for example, Q-Learning, where the policy manifests itself as maximizing a value function. 
Below you can find a continuously updating catalog of policy gradient methods.", "name": "Policy Gradient Methods", "parent": null }, "name": "PPO", "source_title": "Proximal Policy Optimization Algorithms", "source_url": "http://arxiv.org/abs/1707.06347v2" }, { "code_snippet_url": null, "description": "Diffusion models generate samples by gradually\r\nremoving noise from a signal, and their training objective can be expressed as a reweighted variational lower-bound (https://arxiv.org/abs/2006.11239).", "full_name": "Diffusion", "introduced_year": 2000, "main_collection": { "area": "Computer Vision", "description": "", "name": "Image Generation Models", "parent": null }, "name": "Diffusion", "source_title": "Denoising Diffusion Probabilistic Models", "source_url": "https://arxiv.org/abs/2006.11239v2" } ]
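Since AirLLM's coarse-grained decisions come from a PPO agent, and the method description above spells out the clipped surrogate objective, here is a minimal NumPy rendering of that formula. It evaluates the objective for a toy batch; the clip range of 0.2 matches the value suggested in the description, while the sample numbers are made up.

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
    """J^CLIP from the PPO description above: the mean over samples of
    min(r * A, clip(r, 1 - eps, 1 + eps) * A), with r = pi_new / pi_old."""
    ratio = np.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return float(np.mean(np.minimum(unclipped, clipped)))

# Toy batch: large ratio increases are clipped so they cannot inflate the objective.
logp_old = np.log(np.array([0.20, 0.50, 0.30]))
logp_new = np.log(np.array([0.40, 0.45, 0.30]))
adv = np.array([1.0, -0.5, 0.2])
print(ppo_clip_objective(logp_new, logp_old, adv))
```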
https://paperswithcode.com/paper/towards-depth-foundation-model-recent-trends
2507.11540
null
null
Towards Depth Foundation Model: Recent Trends in Vision-Based Depth Estimation
Depth estimation is a fundamental task in 3D computer vision, crucial for applications such as 3D reconstruction, free-viewpoint rendering, robotics, autonomous driving, and AR/VR technologies. Traditional methods relying on hardware sensors like LiDAR are often limited by high costs, low resolution, and environmental sensitivity, limiting their applicability in real-world scenarios. Recent advances in vision-based methods offer a promising alternative, yet they face challenges in generalization and stability due to either the low-capacity model architectures or the reliance on domain-specific and small-scale datasets. The emergence of scaling laws and foundation models in other domains has inspired the development of "depth foundation models": deep neural networks trained on large datasets with strong zero-shot generalization capabilities. This paper surveys the evolution of deep learning architectures and paradigms for depth estimation across the monocular, stereo, multi-view, and monocular video settings. We explore the potential of these models to address existing challenges and provide a comprehensive overview of large-scale datasets that can facilitate their development. By identifying key architectures and training strategies, we aim to highlight the path towards robust depth foundation models, offering insights into their future research and applications.
null
https://arxiv.org/abs/2507.11540v1
https://arxiv.org/pdf/2507.11540v1.pdf
null
[ "Zhen Xu", "HongYu Zhou", "Sida Peng", "Haotong Lin", "Haoyu Guo", "Jiahao Shao", "Peishan Yang", "Qinglin Yang", "Sheng Miao", "Xingyi He", "Yifan Wang", "Yue Wang", "Ruizhen Hu", "Yiyi Liao", "Xiaowei Zhou", "Hujun Bao" ]
[ "3D Reconstruction", "Autonomous Driving", "Depth Estimation", "Zero-shot Generalization" ]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/joint-space-time-wind-field-data
2507.11385
null
null
Joint space-time wind field data extrapolation and uncertainty quantification using nonparametric Bayesian dictionary learning
A methodology is developed, based on nonparametric Bayesian dictionary learning, for joint space-time wind field data extrapolation and estimation of related statistics by relying on limited/incomplete measurements. Specifically, utilizing sparse/incomplete measured data, a time-dependent optimization problem is formulated for determining the expansion coefficients of an associated low-dimensional representation of the stochastic wind field. Compared to an alternative, standard, compressive sampling treatment of the problem, the developed methodology exhibits the following advantages. First, the Bayesian formulation enables also the quantification of the uncertainty in the estimates. Second, the requirement in standard CS-based applications for an a priori selection of the expansion basis is circumvented. Instead, this is done herein in an adaptive manner based on the acquired data. Overall, the methodology exhibits enhanced extrapolation accuracy, even in cases of high-dimensional data of arbitrary form, and of relatively large extrapolation distances. Thus, it can be used, potentially, in a wide range of wind engineering applications where various constraints dictate the use of a limited number of sensors. The efficacy of the methodology is demonstrated by considering two case studies. The first relates to the extrapolation of simulated wind velocity records consistent with a prescribed joint wavenumber-frequency power spectral density in a three-dimensional domain (2D and time). The second pertains to the extrapolation of four-dimensional (3D and time) boundary layer wind tunnel experimental data that exhibit significant spatial variability and non-Gaussian characteristics.
null
https://arxiv.org/abs/2507.11385v1
https://arxiv.org/pdf/2507.11385v1.pdf
null
[ "George D. Pasparakis", "Ioannis A. Kougioumtzoglou", "Michael D. Shields" ]
[ "Dictionary Learning", "Uncertainty Quantification" ]
2025-07-15T00:00:00
null
null
null
null
[]
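The wind-field abstract above estimates expansion coefficients of a low-dimensional representation from sparse/incomplete measurements. As a much-simplified, non-Bayesian analogue (no uncertainty quantification, and a random, known dictionary rather than one learned from data), the sketch below recovers a field from a handful of observed locations via least squares on the expansion coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Field over n locations expressed in a low-dimensional basis D (n x k).
n, k = 200, 10
D = rng.standard_normal((n, k))
field = D @ rng.standard_normal(k)

# Only a sparse subset of locations is measured.
observed = rng.choice(n, size=40, replace=False)
y = field[observed]

# Estimate the expansion coefficients from incomplete data, then extrapolate everywhere.
coeffs_hat, *_ = np.linalg.lstsq(D[observed], y, rcond=None)
reconstruction = D @ coeffs_hat
print("max abs reconstruction error:", np.max(np.abs(reconstruction - field)))
```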
https://paperswithcode.com/paper/flsim-a-modular-and-library-agnostic
2507.11430
null
null
FLsim: A Modular and Library-Agnostic Simulation Framework for Federated Learning
Federated Learning (FL) has undergone significant development since its inception in 2016, advancing from basic algorithms to complex methodologies tailored to address diverse challenges and use cases. However, research and benchmarking of novel FL techniques against a plethora of established state-of-the-art solutions remain challenging. To streamline this process, we introduce FLsim, a comprehensive FL simulation framework designed to meet the diverse requirements of FL workflows in the literature. FLsim is characterized by its modularity, scalability, resource efficiency, and controlled reproducibility of experimental outcomes. Its easy-to-use interface allows users to specify customized FL requirements through job configuration, which supports: (a) customized data distributions, ranging from non-independent and identically distributed (non-iid) data to independent and identically distributed (iid) data, (b) selection of local learning algorithms according to user preferences, with complete agnosticism to ML libraries, (c) choice of network topology illustrating communication patterns among nodes, (d) definition of model aggregation and consensus algorithms, and (e) pluggable blockchain support for enhanced robustness. Through a series of experimental evaluations, we demonstrate the effectiveness and versatility of FLsim in simulating a diverse range of state-of-the-art FL experiments. We envisage that FLsim would mark a significant advancement in FL simulation frameworks, offering unprecedented flexibility and functionality for researchers and practitioners alike.
null
https://arxiv.org/abs/2507.11430v1
https://arxiv.org/pdf/2507.11430v1.pdf
null
[ "Arnab Mukherjee", "Raju Halder", "Joydeep Chandra" ]
[ "Benchmarking", "Federated Learning" ]
2025-07-15T00:00:00
null
null
null
null
[]
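FLsim exposes model aggregation as a pluggable component; as one concrete example of such a component (standard FedAvg, used here purely for illustration and not necessarily FLsim's default), sample-size-weighted parameter averaging looks like this:

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Weighted average of per-client parameter vectors, with weights proportional
    to each client's number of local samples (standard FedAvg aggregation)."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    stacked = np.stack(client_params)  # (num_clients, num_params)
    return weights @ stacked

# Three clients with different dataset sizes contribute proportionally to the global model.
params = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
print(fedavg(params, client_sizes=[100, 50, 50]))
```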
https://paperswithcode.com/paper/a-neural-network-model-of-complementary
2507.11393
null
null
A Neural Network Model of Complementary Learning Systems: Pattern Separation and Completion for Continual Learning
Learning new information without forgetting prior knowledge is central to human intelligence. In contrast, neural network models suffer from catastrophic forgetting: a significant degradation in performance on previously learned tasks when acquiring new information. The Complementary Learning Systems (CLS) theory offers an explanation for this human ability, proposing that the brain has distinct systems for pattern separation (encoding distinct memories) and pattern completion (retrieving complete memories from partial cues). To capture these complementary functions, we leverage the representational generalization capabilities of variational autoencoders (VAEs) and the robust memory storage properties of Modern Hopfield networks (MHNs), combining them into a neurally plausible continual learning model. We evaluate this model on the Split-MNIST task, a popular continual learning benchmark, and achieve close to state-of-the-art accuracy (~90%), substantially reducing forgetting. Representational analyses empirically confirm the functional dissociation: the VAE underwrites pattern completion, while the MHN drives pattern separation. By capturing pattern separation and completion in scalable architectures, our work provides a functional template for modeling memory consolidation, generalization, and continual learning in both biological and artificial systems.
null
https://arxiv.org/abs/2507.11393v1
https://arxiv.org/pdf/2507.11393v1.pdf
null
[ "James P Jun", "Vijay Marupudi", "Raj Sanjay Shah", "Sashank Varma" ]
[ "Continual Learning", "Split-MNIST" ]
2025-07-15T00:00:00
null
null
null
null
[]
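The CLS model above pairs a VAE with a Modern Hopfield network. The standard retrieval rule of a modern (continuous) Hopfield network is a softmax-weighted readout over stored patterns, sketched below; the inverse temperature `beta` and the toy patterns are assumptions for the example, not values from the paper.

```python
import numpy as np

def hopfield_retrieve(query, memory, beta=8.0):
    """One update step of a modern Hopfield network: attend over stored patterns
    (rows of `memory`) and return their softmax-weighted combination."""
    scores = beta * memory @ query
    scores -= scores.max()
    w = np.exp(scores)
    w /= w.sum()
    return w @ memory

# Two stored patterns; a noisy partial cue is completed toward the closer one.
memory = np.array([[1.0, 1.0, -1.0, -1.0],
                   [-1.0, 1.0, 1.0, -1.0]])
cue = np.array([0.9, 1.1, -0.8, 0.0])
print(hopfield_retrieve(cue, memory))
```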
https://paperswithcode.com/paper/canonical-bayesian-linear-system
2507.11535
null
null
Canonical Bayesian Linear System Identification
Standard Bayesian approaches for linear time-invariant (LTI) system identification are hindered by parameter non-identifiability; the resulting complex, multi-modal posteriors make inference inefficient and impractical. We solve this problem by embedding canonical forms of LTI systems within the Bayesian framework. We rigorously establish that inference in these minimal parameterizations fully captures all invariant system dynamics (e.g., transfer functions, eigenvalues, predictive distributions of system outputs) while resolving identifiability. This approach unlocks the use of meaningful, structure-aware priors (e.g., enforcing stability via eigenvalues) and ensures conditions for a Bernstein--von Mises theorem -- a link between Bayesian and frequentist large-sample asymptotics that is broken in standard forms. Extensive simulations with modern MCMC methods highlight advantages over standard parameterizations: canonical forms achieve higher computational efficiency, generate interpretable and well-behaved posteriors, and provide robust uncertainty estimates, particularly from limited data.
null
https://arxiv.org/abs/2507.11535v1
https://arxiv.org/pdf/2507.11535v1.pdf
null
[ "Andrey Bryutkin", "Matthew E. Levine", "Iñigo Urteaga", "Youssef Marzouk" ]
[ "Computational Efficiency" ]
2025-07-15T00:00:00
null
null
null
null
[]
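The abstract above advocates inference in canonical, minimal parameterizations of LTI systems. As a concrete illustration (a generic control-theory construction, not code from the paper), the controller canonical form below builds discrete-time state-space matrices directly from transfer-function coefficients, so each parameter is identifiable and the eigenvalues needed for stability-aware priors are immediate.

```python
import numpy as np

def controller_canonical(den, num):
    """Controller canonical form of a SISO LTI system with transfer function
    (num[0] z^{n-1} + ... + num[n-1]) / (z^n + den[0] z^{n-1} + ... + den[n-1])."""
    n = len(den)
    A = np.zeros((n, n))
    A[0, :] = -np.asarray(den, dtype=float)  # companion row from denominator coefficients
    A[1:, :-1] = np.eye(n - 1)               # shift structure
    B = np.zeros((n, 1))
    B[0, 0] = 1.0
    C = np.asarray(num, dtype=float).reshape(1, n)
    return A, B, C

A, B, C = controller_canonical(den=[-1.5, 0.7], num=[1.0, 0.5])
print(np.linalg.eigvals(A))  # system poles, e.g. for enforcing stability via a prior
```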
https://paperswithcode.com/paper/elk-exploring-the-efficiency-of-inter-core
2507.11506
null
null
Elk: Exploring the Efficiency of Inter-core Connected AI Chips with Deep Learning Compiler Techniques
To meet the increasing demand of deep learning (DL) models, AI chips are employing both off-chip memory (e.g., HBM) and high-bandwidth low-latency interconnect for direct inter-core data exchange. However, it is not easy to explore the efficiency of these inter-core connected AI (ICCA) chips, due to a fundamental tussle among compute (per-core execution), communication (inter-core data exchange), and I/O (off-chip data access). In this paper, we develop Elk, a DL compiler framework to maximize the efficiency of ICCA chips by jointly trading off all the three performance factors discussed above. Elk structures these performance factors into configurable parameters and forms a global trade-off space in the DL compiler. To systematically explore this space and maximize overall efficiency, Elk employs a new inductive operator scheduling policy and a cost-aware on-chip memory allocation algorithm. It generates globally optimized execution plans that best overlap off-chip data loading and on-chip execution. To examine the efficiency of Elk, we build a full-fledged emulator based on a real ICCA chip IPU-POD4, and an ICCA chip simulator for sensitivity analysis with different interconnect network topologies. Elk achieves 94% of the ideal roofline performance of ICCA chips on average, showing the benefits of supporting large DL models on ICCA chips. We also show Elk's capability of enabling architecture design space exploration for new ICCA chip development.
null
https://arxiv.org/abs/2507.11506v1
https://arxiv.org/pdf/2507.11506v1.pdf
null
[ "Yiqi Liu", "Yuqi Xue", "Noelle Crawford", "Jilong Xue", "Jian Huang" ]
[ "Scheduling" ]
2025-07-15T00:00:00
null
null
null
null
[]
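Elk's execution plans overlap off-chip data loading with on-chip execution. The toy cost model below (illustrative only, with made-up per-tile costs) shows why a double-buffered plan approaches the maximum of the I/O and compute costs per tile rather than their sum, which is the kind of trade-off the compiler's scheduling explores.

```python
def serial_time(load_ms, compute_ms):
    """No overlap: every tile waits for its data before computing."""
    return sum(l + c for l, c in zip(load_ms, compute_ms))

def pipelined_time(load_ms, compute_ms):
    """Double-buffered schedule: tile i+1 is loaded while tile i computes, so each
    step after the first load costs max(compute, next load)."""
    total = load_ms[0]
    for i, c in enumerate(compute_ms):
        next_load = load_ms[i + 1] if i + 1 < len(load_ms) else 0.0
        total += max(c, next_load)
    return total

loads = [4.0, 4.0, 4.0, 4.0]     # assumed off-chip loading cost per tile (ms)
computes = [3.0, 3.0, 3.0, 3.0]  # assumed on-chip execution cost per tile (ms)
print(serial_time(loads, computes), pipelined_time(loads, computes))
```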
https://paperswithcode.com/paper/chain-of-thought-monitorability-a-new-and
2507.11473
null
null
Chain of Thought Monitorability: A New and Fragile Opportunity for AI Safety
AI systems that "think" in human language offer a unique opportunity for AI safety: we can monitor their chains of thought (CoT) for the intent to misbehave. Like all other known AI oversight methods, CoT monitoring is imperfect and allows some misbehavior to go unnoticed. Nevertheless, it shows promise and we recommend further research into CoT monitorability and investment in CoT monitoring alongside existing safety methods. Because CoT monitorability may be fragile, we recommend that frontier model developers consider the impact of development decisions on CoT monitorability.
null
https://arxiv.org/abs/2507.11473v1
https://arxiv.org/pdf/2507.11473v1.pdf
null
[ "Tomek Korbak", "Mikita Balesni", "Elizabeth Barnes", "Yoshua Bengio", "Joe Benton", "Joseph Bloom", "Mark Chen", "Alan Cooney", "Allan Dafoe", "Anca Dragan", "Scott Emmons", "Owain Evans", "David Farhi", "Ryan Greenblatt", "Dan Hendrycks", "Marius Hobbhahn", "Evan Hubinger", "Geoffrey Irving", "Erik Jenner", "Daniel Kokotajlo", "Victoria Krakovna", "Shane Legg", "David Lindner", "David Luan", "Aleksander Mądry", "Julian Michael", "Neel Nanda", "Dave Orr", "Jakub Pachocki", "Ethan Perez", "Mary Phuong", "Fabien Roger", "Joshua Saxe", "Buck Shlegeris", "Martín Soto", "Eric Steinberger", "Jasmine Wang", "Wojciech Zaremba", "Bowen Baker", "Rohin Shah", "Vlad Mikulik" ]
[]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/d3fl-data-distribution-and-detrending-for
2507.11471
null
null
D3FL: Data Distribution and Detrending for Robust Federated Learning in Non-linear Time-series Data
With advancements in computing and communication technologies, the Internet of Things (IoT) has seen significant growth. IoT devices typically collect data from various sensors, such as temperature, humidity, and energy meters. Much of this data is temporal in nature. Traditionally, data from IoT devices is centralized for analysis, but this approach introduces delays and increased communication costs. Federated learning (FL) has emerged as an effective alternative, allowing for model training across distributed devices without the need to centralize data. In many applications, such as smart home energy and environmental monitoring, the data collected by IoT devices across different locations can exhibit significant variation in trends and seasonal patterns. Accurately forecasting such non-stationary, non-linear time-series data is crucial for applications like energy consumption estimation and weather forecasting. However, these data variations can severely impact prediction accuracy. The key contributions of this paper are: (1) Investigating how non-linear, non-stationary time-series data distributions, like generalized extreme value (gen-extreme) and log-normal distributions, affect FL performance. (2) Analyzing how different detrending techniques for non-linear time-series data influence the forecasting model's performance in an FL setup. We generated several synthetic time-series datasets using non-linear data distributions and trained an LSTM-based forecasting model using both centralized and FL approaches. Additionally, we evaluated the impact of detrending on real-world datasets with non-linear time-series data distributions. Our experimental results show that: (1) FL performs worse than centralized approaches when dealing with non-linear data distributions. (2) The use of appropriate detrending techniques improves FL performance, reducing loss across different data distributions.
null
https://arxiv.org/abs/2507.11471v1
https://arxiv.org/pdf/2507.11471v1.pdf
null
[ "Harsha Varun Marisetty", "Manik Gupta", "Yogesh Simmhan" ]
[ "Federated Learning", "Time Series", "Weather Forecasting" ]
2025-07-15T00:00:00
null
null
null
null
[]
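D3FL's second contribution concerns detrending non-linear time-series data before federated training. Two common detrending operations (first-order differencing and removing a fitted linear trend) are sketched below as generic examples of what such a preprocessing step can look like; they are not necessarily the exact techniques evaluated in the paper.

```python
import numpy as np

def difference(series: np.ndarray) -> np.ndarray:
    """First-order differencing removes a (locally) linear trend."""
    return np.diff(series)

def remove_linear_trend(series: np.ndarray) -> np.ndarray:
    """Fit a straight line by least squares and subtract it from the series."""
    t = np.arange(len(series))
    slope, intercept = np.polyfit(t, series, deg=1)
    return series - (slope * t + intercept)

# A trending, noisy series becomes roughly stationary after either operation,
# which is the kind of input a forecasting model handles more easily.
t = np.arange(200)
series = 0.05 * t + np.sin(0.2 * t) + np.random.default_rng(1).normal(0.0, 0.1, 200)
print(difference(series).std(), remove_linear_trend(series).std())
```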